#aws

:aws: Discussion related to Amazon Web Services (AWS) Archive: https://archive.sweetops.com/aws/

2019-10-20

Daniel Minella

Hello everyone, how do you manage ECS container logs? For example, right now when a container dies I have to connect to the EC2 instance that hosts the ECS service and execute docker logs xxxx. Which stack or strategy do you use to handle that?

Steven

You can send the logs to Cloudwatch on ecs/ec2 & fargate. On ec2, there are more logging options
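A minimal sketch of that CloudWatch option, assuming boto3 and hypothetical names (the log group must already exist): the awslogs log driver in the task definition ships each container's stdout/stderr to CloudWatch Logs, so there is no need to run docker logs on the host.

import boto3

ecs = boto3.client("ecs", region_name="us-east-1")

# Register a task definition whose container ships stdout/stderr to
# CloudWatch Logs instead of leaving them on the host.
ecs.register_task_definition(
    family="my-app",  # hypothetical
    containerDefinitions=[{
        "name": "my-app",
        "image": "nginx:latest",
        "memory": 256,
        "logConfiguration": {
            "logDriver": "awslogs",
            "options": {
                "awslogs-group": "/ecs/my-app",    # must exist beforehand
                "awslogs-region": "us-east-1",
                "awslogs-stream-prefix": "my-app",
            },
        },
    }],
)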

Vlad Ionescu

Usually they go to CloudWatch and then either to 1) an external log system, 2) a Lambda function that triggers some action if something happened, or 3) nowhere because they are fully ignored (observability in the application, tracing with exhaustive context sent to other systems, so a bit similar to option 1)

Vlad Ionescu

Depends on the app and the company and the usecase

2019-10-18

2019-10-17

IvanM

Guys I need a bit of help with an AWS networking issue: we have RDS instances running in private subnets in a VPC. From our office network we want to be able to always ssh into these instances (without a client VPN). How should we do that? I guess we need a site-to-site VPN connection from our local network to the VPC. However, how do we enable traffic only to the RDS instances? I do not want all the internet traffic to go via the VPC/VPN. The local network should still have its internet connection as-is; there should only be a direct connection possible to the RDS instances in the VPC.

Ognen Mitev

As far as I know you cannot directly SSH to an RDS instance, e.g. ssh [email protected]

You can use:

  1. Bastion host and from there - mysql -h instance1.123456789012.us-east-1.rds.amazonaws.com -P 3306 -u mymasteruser -p
  2. Public access to the RDS filtered to your IPs… But you will need MySQL Client (workbench or sqlpro…)
  3. Maybe someone else knows anything else

I would do it via case 1. There are options how to do the Bastion host and so on…

Taras

+1 for case 1. There is no technical possibility (implementation) to SSH into RDS servers/instances. You can only connect to the DB using a DB client like mysql etc.

IvanM

sorry, my bad. Yeah, what I meant is to be able to connect to RDS using a client. The issue is that the RDS is in a private subnet and not accessible via the Internet

Ognen Mitev

Bastion aka Jump Host will do the thing

Maciek Strömich

what’s the easiest way to clean up an aws account prior to account deletion?

oscar

AWS nuke

oscar
gruntwork-io/cloud-nuke

A tool for cleaning up your cloud accounts by nuking (deleting) all resources within it - gruntwork-io/cloud-nuke

rebuy-de/aws-nuke

Nuke a whole AWS account and delete all its resources. - rebuy-de/aws-nuke

Maciek Strömich

last time i’ve checked nuke wasn’t supporting all of the services

oscar
1Strategy/automated-aws-multi-account-cleanup

Automatically clean-up multiple AWS Accounts on a schedule - 1Strategy/automated-aws-multi-account-cleanup

Maciek Strömich

but maybe i need to recheck

oscar

Not personally used it, just know of it. Can't comment

Maciek Strömich

sure. i will try it on one of the test accounts which will be deleted

Maciek Strömich

thanks for the tip

Maciek Strömich
Now Available – Amazon Relational Database Service (RDS) on VMware | Amazon Web Services

Last year I told you that we were working to give you Amazon RDS on VMware, with the goal of bringing many of the benefits of Amazon RDS to your on-premises virtualized environments. These benefits include the ability to provision new on-premises databases in minutes, make backups, and restore to a point in time. You get automated management of your […]

kskewes

I see AWS recommend VPCs of /16 or smaller. Given a /16 is split into further subnets (at least by AZ but potentially further, e.g. different app ASGs, k8s, etc), I'm curious where the /16 recommendation comes from. Any ideas? Hard rule to follow, or ignore? https://docs.aws.amazon.com/vpc/latest/userguide/VPC_Subnets.html#vpc-sizing-ipv4 EDIT: The allowed block size is between a /28 netmask and /16 netmask.

Alex Siegman

So, the /16 is a technical limitation imposed by AWS on VPC size. Where the recommendation comes from, I don’t know, but giving your VPC the largest space possible allows you the most flexibility when it comes to making subnets and such.
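For intuition, here is the arithmetic behind that flexibility, using Python's standard ipaddress module (a /16 is the largest VPC CIDR AWS accepts):

import ipaddress

vpc = ipaddress.ip_network("10.0.0.0/16")    # largest block AWS allows per VPC CIDR
subnets = list(vpc.subnets(new_prefix=24))   # carve it into /24 subnets
print(len(subnets))                          # 256 possible /24 subnets

# AWS reserves 5 addresses in every subnet (network, router, DNS,
# future use, broadcast), so a /24 yields 251 usable IPs.
first = subnets[0]
print(first, first.num_addresses - 5)        # 10.0.0.0/24 251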

Alex Siegman

Especially if you’re using EKS, it EATS IP address space fast as every pod is given an IP

kskewes

Yeah, thanks Alex, Erik. Have been able to sort out our IPAM for AWS & EKS now.

2019-10-16

AWS achieves FedRAMP JAB High and Moderate Provisional Authorization across 18 services in the AWS US East/West and AWS GovCloud (US) Regions | Amazon Web Services

It’s my pleasure to announce that we’ve expanded the number of AWS services that customers can use to run sensitive and highly regulated workloads in the federal government space. This expansion of our FedRAMP program marks a 28.6% increase in our number of FedRAMP authorizations. Today, we’ve achieved FedRAMP authorizations for 6 services in our […]

2019-10-15

Erik Osterman
Migration Complete – Amazon’s Consumer Business Just Turned off its Final Oracle Database | Amazon Web Services

Over my 17 years at Amazon, I have seen that my colleagues on the engineering team are never content to leave good-enough alone. They routinely re-evaluate every internal system to make sure that it is as scalable, efficient, performant, and secure as possible. When they find an avenue for improvement, they will use what they […]

Alex Siegman

imagine being the head project manager on this massive multi-year multi-team migration and closing that last ticket as this is posted. that’s gotta feel good.


2019-10-14

Milos Backonja

Thanks @oscar. I was able to configure this using Route53 Resolver

oscar

Nice one

Milos Backonja

definitely awesome stuff. It will allow me to connect on-prem solutions with AWS later on, and to use on-prem private dns server

2019-10-12

oscar

Route 53 resolver I believe


2019-10-11

mmarseglia

@Phuc i sort of do this with Elasticbeanstalk. I use the module https://github.com/cloudposse/terraform-aws-ecr.git and pass the elasticbeanstalk roles that get created.

cloudposse/terraform-aws-ecr

Terraform Module to manage Docker Container Registries on AWS ECR - cloudposse/terraform-aws-ecr

Milos Backonja

Hi, if I use a transit gw with multiple vpcs attached, and each VPC uses its own private DNS zone in Route53: traffic is working between VPCs, but is there a way to somehow delegate DNS resolving between VPCs?

2019-10-10

Fernando Torresan

I was facing a problem when I tried to provision a CloudFront distribution (estimated time 18 min); to get aws-vault to work properly I needed to set this flag, --assume-role-ttl=1h, like:

aws-vault exec <profile-name> --assume-role-ttl=1h

Gowiem

Hey folks – IAM Policy questions: What’s the standard operating procedure for dev teams on AWS and requiring MFA? I’ve created a policy to require MFA for all actions so users need to assign an MFA on first login and then on subsequent logins they need to provide MFA before they can do anything in the console, which is what I want. My problem with this is that I can’t distinguish between requiring MFA for console usage vs CLI usage. I’d like to empower devs to push to ECR or use certain CLI functionality without having them put their MFA in every morning.

I have a way to add IAM actions the user is allowed to do via the following policy statement:

{
    "Sid": "DenyAllExceptListedIfNoMFA",
    "Effect": "Deny",
    "NotAction": [
        // Bunch of management actions the user is allowed to do.
    ],
    "Resource": "*",
    "Condition": {
        "BoolIfExists": {
            "aws:MultiFactorAuthPresent": "false"
        }
    }
}

Should I just push all my CLI allowed actions into that NotAction and manage it that way? Or is there a better way?

Alex Siegman

I recommend having the ability to change password and manage own MFA available by default, and everything else locked behind having MFA present. Providing all access through assumed roles, that means you only have to lock down role assumption, and the only thing an IAM user is allowed to do is manage their MFA and login
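A minimal sketch of that pattern, assuming boto3; the role ARN, account id, and policy name are hypothetical. The IAM user's only real permission is role assumption, and only when MFA was present at sign-in:

import json
import boto3

iam = boto3.client("iam")

# "Allowing if" instead of "denying unless": the only thing this policy
# grants is role assumption, and only when an MFA token was used to sign in.
policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "AllowAssumeRoleWithMFA",
        "Effect": "Allow",
        "Action": "sts:AssumeRole",
        "Resource": "arn:aws:iam::123456789012:role/admin",  # hypothetical role
        "Condition": {"Bool": {"aws:MultiFactorAuthPresent": "true"}},
    }],
}

iam.create_policy(
    PolicyName="assume-admin-with-mfa",  # hypothetical name
    PolicyDocument=json.dumps(policy),
)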

Alex Siegman

That said, the second half of your statement, (allowing certain actions via MFA) could easily just be added as allows, since everything is an implicit deny. To my brain “allowing if” is simpler than “denying unless…”

Alex Siegman

I don’t think there is a way in IAM to distinguish between API and console access, so you’d have to be okay with it being available in both places without MFA. I mean maybe you could do something with aws:UserAgent but those are spoofable

Erik Osterman

Also aws-vault in server mode can help reduce the frequency of entering MFA code

Erik Osterman

Is every 12 hours really such a bad thing? :-)

Gowiem

Got it — Thanks gents. Think I need to do some reading on role assumptions + aws-vault, but overall I think I’ll move forward with supplying explicit allows for things I don’t want to hinder this dev team with and I’ll try to just keep that list short.

Gowiem


Is every 12 hours really such a bad thing? :-)
Haha I personally don’t think so… but since I’m consulting for a dev agency who is more cavalier about security I just don’t want to rub them the wrong way.

Hi guys, is there anyone familiar with IAM roles and Instance Profiles? I have a case like this: I would like to create an Instance Profile with a suitable policy to allow access to an ECR repo (including downloading images from ECR). Then I attach that Instance Profile to a Launch Configuration to spin up an instance. The reason why I mentioned the policy for ECR is that I would like to set up the ECR credential helper on the instance to use with Docker (cred-helper) when it launches, so that when that instance wants to pull an image from ECR, it won't need AWS credentials on the host itself. I would like to put all of that in Terraform format as well. Any help would be appreciated so much.
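The question asks for Terraform, but here is a hedged sketch of the wiring in boto3 (names hypothetical; AmazonEC2ContainerRegistryReadOnly is the AWS-managed read-only ECR policy):

import json
import boto3

iam = boto3.client("iam")

assume_role = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Principal": {"Service": "ec2.amazonaws.com"},
        "Action": "sts:AssumeRole",
    }],
}

# Role the EC2 instance will assume via its instance profile.
iam.create_role(
    RoleName="ecs-ecr-pull",  # hypothetical name
    AssumeRolePolicyDocument=json.dumps(assume_role),
)

# The managed read-only ECR policy covers GetAuthorizationToken,
# BatchGetImage, and GetDownloadUrlForLayer, i.e. what a pull needs.
iam.attach_role_policy(
    RoleName="ecs-ecr-pull",
    PolicyArn="arn:aws:iam::aws:policy/AmazonEC2ContainerRegistryReadOnly",
)

iam.create_instance_profile(InstanceProfileName="ecs-ecr-pull")
iam.add_role_to_instance_profile(
    InstanceProfileName="ecs-ecr-pull", RoleName="ecs-ecr-pull"
)

In Terraform the same wiring maps to aws_iam_role, aws_iam_role_policy_attachment, and aws_iam_instance_profile, with the profile referenced from the launch configuration's iam_instance_profile argument.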

2019-10-09

Does anyone know the default duration of the session when using aws-vault?

1h I think

but that is not on aws-vault side

it's on the aws side

and you can change that with a policy

oscar
99designs/aws-vault

A vault for securely storing and accessing AWS credentials in development environments - 99designs/aws-vault

2019-10-01

Yusley Orea

Anyone have any recommendations for a tool for disaster recovery on AWS? especially for Aurora, DynamoDB and EBS.

a tool ? as like something that could do what ?

Yusley Orea

centralized cross-region backup for services, or at least for RDS. I looked at https://aws.amazon.com/backup but it's region-dependent.

AWS Backup | Centralized Cloud Backup

AWS Backup is a fully managed backup service that makes it easy to centralize and automate the back up of data across AWS services in the cloud as well as on-premises using the AWS Storage Gateway.

I will just use global aurora cluster and backup the main region

or you can have read replica cluster in other regions/accounts or same account

so instead of having snapshots you have replicas that can promote in very little time


2019-09-26

@IvanM I don’t think that’s possible


2019-09-25

Nikola Velkovski

Hi people, anyone having issues with lambdas and setting reserved-concurrent-executions ? it seems that it doesn't work but there's no mention of any outages on AWS

Nikola Velkovski

they’ve just published it

IvanM

Guys, do you know whether it's possible to extract host parts in an AWS ALB listener custom redirect?

My use case is that I will have a host foo.bar.com and inside the listener redirect rule I want to extract foo from #{host}

2019-09-20

rohit

Does anyone know if there is a way to copy all versions of S3 objects from one bucket to another S3 bucket ?

@rohit have you tried it with the awscli ? aws s3 sync

rohit

AFAIK aws s3 sync does not copy all the versions

AgustínGonzalezNicolini

according to stackoverflow

AgustínGonzalezNicolini
Copy S3 Bucket including versions

Is there a way to copy an S3 bucket including the versions of objects? I read that a way to copy a bucket is by using the command line tool with aws s3 sync s3://<source> s3://<dest>

AgustínGonzalezNicolini

There is no direct way to do so, but you can do it via the AWS CopyObject API, ref https://docs.aws.amazon.com/cli/latest/reference/s3api/copy-object.html

Iterate over every object version, capture the version id, and copy to the destination bucket.

Hope this helps
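A minimal boto3 sketch of that iterate-and-copy approach, with hypothetical bucket names; as noted below, the copies receive new version IDs, since the originals' IDs cannot be carried over:

import boto3

s3 = boto3.client("s3")

# Walk every version of every object and copy it to the destination.
# list_object_versions returns newest-first per key, so reverse each
# page to replay history roughly oldest-first.
paginator = s3.get_paginator("list_object_versions")
for page in paginator.paginate(Bucket="source-bucket"):       # hypothetical names
    for version in reversed(page.get("Versions", [])):
        s3.copy_object(
            Bucket="dest-bucket",
            Key=version["Key"],
            CopySource={
                "Bucket": "source-bucket",
                "Key": version["Key"],
                "VersionId": version["VersionId"],
            },
        )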

the version IDs will get lost, but where have you read that it does not copy all versions ?

rohit

i tried the sync command before posting my question and it did not copy all the versions

That’s interesting @rohit , versions missing, or was it only copying exactly one version ?

rohit

yes it is only copying one version

rohit

maybe the latest version

and have you set versioning to enabled on the destination bucket ?

probably, otherwise you wouldn’t see different versions.

rohit

correct

2019-09-19

pablo

Hi, anyone know what's the minimum set of permissions needed to run an EMR "jobflow"? This action creates an emr cluster, executes whatever, then terminates the cluster. I'm setting this up from an ec2 instance running Airflow, and I'm reluctant to give it full admin access

davidvasandani

@pablo you could always give it admin access, run it once, and then check CloudTrail to see what IAM actions it used.
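A hedged sketch of that CloudTrail technique with boto3; the username is hypothetical, and CloudTrail events can lag several minutes behind the API calls:

import boto3
from datetime import datetime, timedelta

cloudtrail = boto3.client("cloudtrail", region_name="us-east-1")

# List what the instance's identity actually called during a test run,
# then build a least-privilege policy from those actions.
events = cloudtrail.lookup_events(
    LookupAttributes=[{"AttributeKey": "Username", "AttributeValue": "airflow"}],  # hypothetical
    StartTime=datetime.utcnow() - timedelta(hours=1),
    EndTime=datetime.utcnow(),
)
for e in events["Events"]:
    print(e["EventSource"], e["EventName"])  # e.g. elasticmapreduce.amazonaws.com RunJobFlow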

2019-09-16

dalekurt

Has anyone seen or used AWS Landing Zone solution?

oscar

What is the recommended EBS size for Kafka?

Erik Osterman

Can that be generalized? I think it relates directly to the amount of data you will be storing and the retention

Erik Osterman

The number of replicas you will have

vluck

Hiya! Has anyone gotten RDS clusters to work with micro size? (db.t2.micro or db.t3.micro)? Although it’s listed in the documentation, I keep getting

InvalidParameterCombination: RDS does not support creating a DB instance with the following combination: DBInstanceClass=db.t3.micro, Engine=aurora-postgresql, EngineVersion=10.7,
aknysh
Aurora Support for db.t3

Aurora MySQL supports the db.t3.medium and db.t3.small instance classes for Aurora MySQL 1.15 and higher, and all Aurora MySQL 2.x versions. These instance classes are available for Aurora MySQL in all Aurora regions except AWS GovCloud (US-West), AWS GovCloud (US-East), and China (Beijing).

Aurora PostgreSQL supports only the db.t3.medium instance class for versions compatible with PostgreSQL 10.7 or later. These instance classes are available for Aurora PostgreSQL in all Aurora regions except China (Ningxia).
vluck


supports only the db.t3.medium instance class

vluck

wow

vluck

then it goes on to say

vluck


These instance classes are available for Aurora PostgreSQL

vluck

like, it should say “instance class”, not plural. There is one.

aknysh

db.r4.large is the smallest instance type supported by Aurora Postgres 9.6. Postgres 10.6 and later can use db.r5.large. Postgres 10.7 and later can use db.t3.medium.

vluck

thanks @aknysh

2019-09-13

2019-09-12

Abel Luck

since july the session manager should support ssh tunnels

Abel Luck

has anyone actually got it working?

asmito

for me no, please share your experience with it when you experiment with it

Abel Luck

well first, i discovered that the ssm-agent wasn’t up to date on my ubuntu and debian hosts, because they don’t release the snap package to the stable channel often

Abel Luck

so if you're running the ssm-agent via snap, then you need to switch to the candidate channel

Abel Luck
Snap packages update? · Issue #196 · aws/amazon-ssm-agent

Hello, Is this possible to get last version 2.3.672.0 in snap repository? Last published version is 2.3.662 : https://snapcraft.io/amazon-ssm-agent Thanks!

Abel Luck

that’ll get you the latest ssm-agent version

Abel Luck
Step 7: (Optional) Enable SSH Connections Through Session Manager - AWS Systems Manager

You can enable users in your AWS account to use the AWS CLI to establish Secure Shell (SSH) connections to instances using Session Manager. Users who connect using SSH can also copy files between their local machines and managed instances using Secure Copy Protocol (SCP). You can use this functionality to connect to instances without opening inbound ports or maintaining bastion hosts. You can also choose to explicitly disable SSH connections to your instances through Session Manager.

Abel Luck

however they aren’t working for me, when I execute the ssh command I get the error

Abel Luck
debug1: ssh_exchange_identification: ----------ERROR-------

debug1: ssh_exchange_identification: Encountered error while initiating handshake. SessionType failed on client with status 2 error: Failed to process action SessionType: Unknown session type Port
asmito

thanks for sharing your experience will check it with you

Abel Luck

Got it working

Abel Luck

My local session manager plugin was out of date

Abel Luck

so make sure you update the ssm-agent on the ec2 instance, and also your local session manager plugin

Abel Luck

you need session manager plugin version 1.1.23.0 or later, and on the ec2 instance amazon-ssm-agent 2.3.672.0 or later

asmito

Cool, congrats then

Abel Luck

“SessionType failed on client” was the clue

asmito

Team, Question regarding Costs please :

  • i have created a CNAME record in cloudflare that points to an internal Nginx load balancer; the record will be used just inside the vpc, same region, different AZs. Normally i will be charged $0.01/GB of data transfer + load balancing costs, but using cloudflare, will the traffic go out of the vpc to cloudflare and back in again?

Resolving is not VPC traffic, but apart from that:

1 cent for approximately 2 billion dns requests is not something to worry about. If you would do that amount of http requests on a daily basis, you have other costs to worry about.

Hi guys, anyone dealt with the SSM module in terraform? We have a case where we want to use that TF module to read the content of a file and create a list of key-value pairs to put into AWS SSM. Kinda long list in this format: VARA="value" VARB="value1" ...

we are using it

what is your question ?
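In case it helps, a minimal Python sketch of the idea (file name and parameter path are hypothetical); the Terraform equivalent would combine file/split with aws_ssm_parameter resources:

import boto3

ssm = boto3.client("ssm")

# Parse VARA="value" lines into key/value pairs and push each one to
# SSM Parameter Store.
with open("app.env") as f:  # hypothetical file name
    for line in f:
        line = line.strip()
        if not line or "=" not in line:
            continue
        key, _, value = line.partition("=")
        ssm.put_parameter(
            Name=f"/myapp/{key}",    # hypothetical path prefix
            Value=value.strip('"'),
            Type="SecureString",
            Overwrite=True,
        )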

2019-09-10

hi

Anyone here experienced with setting up a health check between AWS and Aliyun?

2019-09-09

Brij S

for python lambda packages; do you need to include os and json in the deployment package?

antonbabenko

No, I don’t

Maciek Strömich

only if you use them. os comes in handy when you want to read some settings from env variables

myvar = os.getenv('SOME_VAR', 'default')
Maciek Strömich

json if you’re working with reading/writing json objects

2019-09-07

2019-09-05

joshmyers

Anyone seen it look like a CloudWatch rule triggered - i.e. you can see the rule metrics for “invoked” - where invoke invokes a Lambda function…but checking the lambda function that is invoked, doesn’t show any invocations or logs…? I can “test” the lambda function manually and it works fine…

Maciek Strömich

maybe cloudwatch events doesn’t have ability to trigger lambda?

Maciek Strömich

we had similar issues in the beginning while triggering lambdas behind api-gateway

joshmyers

Also seeing this for a different thing but similar issues whereby Cloudwatch alert > SNS > Lambda , it looks like the thing is working, but Lambda isn’t actually invoked and isn’t showing anything in the logs either… but enabling sns delivery notifications…

joshmyers

SNS thinks it successfully triggered that lambda function…

joshmyers

lambda function thinks otherwise…

joshmyers

Can’t see anything in Cloudtrail and this def has worked in the past..

Maciek Strömich

maybe lambda isn’t able to save logs ;D

joshmyers

It can when I invoke it manually with a test event

joshmyers

and the code hasn’t changed that does this, and it was working yesterday… >_<

joshmyers

Lambda doesn’t even look like it has been invoked looking at the metrics for it…

joshmyers

Also seeing 2 different AWS Elasticsearch clusters 504…while another is OK…all in eu-west-1

joshmyers

nvm, swamped ENIs

Brij S

has anyone used github actions with the awscli? The repo has the HCL syntax and I'm trying it with YAML and running into some issues

davidvasandani

not I but share whatever you find as I plan on jumping on a similar project in the next week or two.

Erik Osterman

don’t use the HCL syntax

Erik Osterman

that’s already deprecated and will stop working very soon

Erik Osterman

use the YAML format

Erik Osterman

nvm, i see you’re trying the yaml

Erik Osterman

anyways, i can help take alook if you post something

Erik Osterman

fwiw, here are my notes https://github.com/cloudposse/build-harness/pull/165 (see description)

Add GitHub Actions by osterman · Pull Request #165 · cloudposse/build-harness

what Add action to automatically rebuild readme and push upstream Respond to /readme command. Note: issue_comment workflows use master workflow for security. See https://developer.github.com/actio

Erik Osterman

i found those links most helpful when working my first actions

Brij S

@Erik Osterman have you been able to get github actions to work when a PR is closed and/or merged?

Erik Osterman

That’s a good question! TBH I’ve only tried these simple examples.

Erik Osterman

(E.g. haven’t tried to deploy on merge with github actions)

Marcio Rodrigues

Hello, i'm curious about how you guys are doing disaster recovery tasks in your company

Marcio Rodrigues

Do you guys fully automate it? Run via CI? Do heavy tasks with terraform? How is your plan

Marcio Rodrigues

ps: due to limited money, i am tasked to not keep a fully replicated environment in another AWS region, but i should have a plan to recreate my infrastructure in another region if needed

2019-09-04

Shannon Dunn

for anyone using OUs and SCPs with lots of accounts inside, how do you manage change of the SCPs and the OU structure itself

Shannon Dunn

is it wise to replicate the entirety of the OU structure into a dev,qa,prod, each with its own root etc…

Shannon Dunn

especially for hub and spoke modules, things like a logging,transit, shared services, would get their own dev/qa/prod as well

Shannon Dunn

or is anyone using IAC to manage SCPs and OU

curious deviant

Hey Shannon.. I just started a spike around AWS Control Tower (CT) for multi-account governance and one of the first questions that came to my mind was similar to yours. I think a lot depends on how we organize our OUs. So far I can only speak to the SCP change …How I think I’ll go about it is I’ll create 3 OU’s : DEV , QA and PROD say .. and SCPs changes would be rolled out first to DEV and then higher environments. So basically I would not replicate the OU structure but create separate OUs and move changes through them. This will give me an opportunity to identify and address any breaking changes. For my particular case (using CT), IAC is not an option since CT doesn’t expose any APIs yet..everything is pretty much manual/done by CT (logging bucket creation etc). I will be spending sometime on CT further.

curious deviant

Let me know if you have more questions/suggestions.. will help me plan my OU structure better

2019-09-03

Maciek Strömich
Announcing improved VPC networking for AWS Lambda functions | Amazon Web Services

We’re excited to announce a major improvement to how AWS Lambda functions work with your Amazon VPC networks. With today’s launch, you will see dramatic improvements to function startup performance and more efficient usage of elastic network interfaces. These improvements are rolling out to all existing and new VPC functions, at no additional cost. Roll […]

1
2
davidvasandani

This is awesome!!

Alejandro Rivera

Is there a way to use wildcards on s3 bucket policies on Principals ? e.g.: we have:

arn:aws:iam:role/role-name123456789

would like to do something like:

arn:aws:iam:role/role-name*

@Alejandro Rivera Take a look at this: https://stackoverflow.com/a/56678945/10846194 Ref: https://docs.aws.amazon.com/IAM/latest/UserGuide/reference_policies_elements_principal.html You cannot use it as a wildcard to match part of a name or an ARN. We also strongly recommend that you do not use a wildcard in the Principal element in a role's trust policy unless you otherwise restrict access through a Condition element in the policy. Otherwise, any IAM user in any account can access the role.

Wildcard at end of principal for s3 bucket

I want to allow roles within an account that have a shared prefix to be able to read from an S3 bucket. For example, we have a number of roles named RolePrefix1, RolePrefix2, etc, and may create mo…
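The workaround from that answer, sketched as a policy document (bucket name and account id are hypothetical): the Principal stays a wildcard, and a StringLike condition on aws:PrincipalArn does the prefix matching instead:

import json

# Bucket policy sketch: Principal itself cannot contain a wildcard, but a
# wildcard Principal plus an aws:PrincipalArn StringLike condition
# achieves the same prefix match safely.
bucket_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Principal": "*",
        "Action": "s3:GetObject",
        "Resource": "arn:aws:s3:::my-bucket/*",  # hypothetical bucket
        "Condition": {
            "StringLike": {
                "aws:PrincipalArn": "arn:aws:iam::123456789012:role/role-name*"
            }
        },
    }],
}
print(json.dumps(bucket_policy, indent=2))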

Alejandro Rivera

Thank you sir!


2019-09-01

2019-08-31

Everyone enjoying us-east-1 fun? Happy labor day weekend!

Robert

If only there were more regions that we could use.


2019-08-30

Maciek Strömich

Apparently the gp2 EBS docs aren't as precise as one would think.

Patient: 100GiB gp2 EBS volume in multi-az RDS cluster

I'm running a few-million-row update/delete process on one of our mysql clusters, and based on the docs it would mean that we would be able to burst over the base performance of 300 IOPS (3 IOPS per GiB) for about 20 minutes. Apparently in multi-az environments the base performance is doubled, and the credits gathered (since yesterday late evening) allowed us to burst at an average of 1500 IOPS for over 2h.

Maciek Strömich

the spike visible around 8PM yesterday was a test performed on ~200k rows

Maciek Strömich

for the sake of data completeness this graph comes from db.m5.large cluster

Does anyone have recommendations for aws + okta cli tools?

  • https://github.com/oktadeveloper/okta-aws-cli-assume-role: i hate java and want to stay away from this if possible.
  • https://github.com/segmentio/aws-okta looks promising

Just curious if there was something you guys swear by

oktadeveloper/okta-aws-cli-assume-role

Okta AWS CLI Assume Role Tool. Contribute to oktadeveloper/okta-aws-cli-assume-role development by creating an account on GitHub.

segmentio/aws-okta

aws-vault like tool for Okta authentication. Contribute to segmentio/aws-okta development by creating an account on GitHub.

Erik Osterman

aws-okta is great

Erik Osterman

we were just talking about it

hahaha that’s awesome

Erik Osterman

very easy to setup and works very well in a container (how we run it with #geodesic)


sweeeeet, good to hear this feedback!

Okta seems pretty expensive. Why the buzz?

2019-08-28

Sharanya

Did anyone come across NPM memory issues?

Erik Osterman

Perhaps share some more details of what you are seeing?

Sharanya

Upgrade Node and NPM on CI/CD server. Observe the npm memory issue.

Sharanya

I'm new to node… so I just want to know where I can check these memory issues

aknysh

i suppose you need to upgrade nodejs and npm to the latest versions, then monitor the build server on CI/CD for memory consumption when it builds the node project with npm

2019-08-27

nutellinoit

Aurora postgres db seems down on eu-west-1 region

joshmyers

oof if so

nutellinoit

back up

nutellinoit

14 minutes down

Brij S

Hey all, looking for some opinions on how to go about creating VPC’s in a new aws account of mine. I recently setup an ECS cluster with fargate using the ‘get started’ feature in the console and it did a lot of the heavy lifting for me. however I’m trying to automate some of this using Terraform. So I’ll need to create some VPCs for the ECS cluster. What is the most simple, secure setup? One public subnet, private subnet, place the cluster in the private subnet with an ALB in the public subnet?

Maciek Strömich

set it up in a way that you can easily change it to multi-az (one subnet per az for every type of subnet - public, private, db). it doesn't mean you will use all of them, but if the requirements change you will have them already available

Brij S

can you give more detail?

Maciek Strömich

I've a vpc with a cidr 10.0.0.0/8

Maciek Strömich

and then every subnet in every availability zone uses a /24 from that cidr

Maciek Strömich

i’ve a total of 8 subnets - public and private for every availability zone

Maciek Strömich

public have outgoing traffic routed via nat gateway

Maciek Strömich

private have only routing for the 10.0.0.0/8

Maciek Strömich

that makes most sense for my cluster

Brij S

can provide more info if needed, but really just looking to get some general guidance on VPC setup

Samuli

See this module. It does the setup the way Maciek describes. https://github.com/terraform-aws-modules/terraform-aws-vpc

terraform-aws-modules/terraform-aws-vpc

Terraform module which creates VPC resources on AWS - terraform-aws-modules/terraform-aws-vpc

2019-08-26

Is it possible to disable root login on AWS accounts that are connected to an Organization?

Alex Siegman

I don't think it is, which is why it's very important to secure that root account if you created the account programmatically - anyone with access to the email could take over the account

Alex Siegman

If it’s one you joined that used to be an individual account, I’d hope that access is already secure

2019-08-25

Maciek Strömich

anyone else experienced rds choking around 1h ago?

Maciek Strömich

we found our pgsql rds instance stopped resolving hostnames

2019-08-25 13:03:47 UTC:[unknown]@[unknown]:[29469]:WARNING: pg_getnameinfo_all() failed: Temporary failure in name resolution

ramping up db connections and killing our application around 14:20 CET

Maciek Strömich

I wonder whether it was RDS general or only our cluster

2019-08-23

oscar

What’s your go-to way of providing external devs/contractors (outside of your corporate AD) access to your AWS accounts? IAM users on Bastion? Cognito?

Samuli

What kind of access do you have in mind? Access to accounts or access to resources (ec2?) on accounts?

oscar

Console & CLI access.

I imagine it would be something like:

  • Give [solution] access to consultant
  • Consultant uses [solution] to gain access to either console or gain temporary access id/key pair
  • Consultant can then use console or CLI
oscar

Although we only wish to give them explicit access to our Bastion/Security account, and they then use the credentials above to sts:AssumeRole into sub-accounts

Samuli

Isn’t IAM sufficient for that? I would personally go with it but can’t say I’m an expert on the subject

As a consultant it depends on the client. Most of the time we get an IAM user in a shared-services account, then assume roles cross-account. Others will give us an AD account, then SAML / SSO to an AWS role

oscar

Yeh, it seems that giving consultants limited users on our AD is the favoured approach. Our tech services are looking into it now.. it just doesn’t seem like something that should be managed by Terraform!

Could build out the roles that they would assume at least. For our managed services side, for some clients the client (or us) creates a role in each account that trusts one of our AWS accounts and a specific role in that account. Then we can manage the users who have access to the client's AWS account without needing to bother them.

Erik Osterman

I think it depends on what they are hired to do for the company.

Erik Osterman

Think about this from the company perspective: they want to eliminate risk, liability, exposure, and embarrassment, while at the same time accelerating development and maintaining knowledge transfer.

Erik Osterman

Think about this from the perspective of the developer. They want to operate as unencumbered as possible. They want to quickly prove their worth and get more work.

Erik Osterman

It goes without saying that IAM roles assumed into accounts is one of the mechanisms that will be used.

Erik Osterman

If the contractor was hired to oversee the uptime of production systems, I find it hard to justify anything other than administrator-level roles in the accounts they are responsible for.

Erik Osterman

If trust is an issue, then don’t hire.

Erik Osterman

If the contractor is hired to build out some form of automation, then there should be a sandbox account.

Erik Osterman

The deliverable should include “infrastructure as code” or other kinds of automation scripts.

Erik Osterman

I’ll address the latter. Give them a sandbox with administrator level access. They can do everything/anything (within reason) in this account. It can even be a sandbox account specifically for contractors.

Erik Osterman

They’ll check their work into source control with documentation on how to use it.

Erik Osterman

The company is ultimately responsible for operating it and “owning it”, so this forces knowledge transfer.

Erik Osterman

The company and its staff must know how to successfully deploy and operate the deliverable.

Erik Osterman

Ideally, you’ve rolled out a GitOps continuous delivery style platform for infrastructure automation.

Erik Osterman

The developer can now open PRs against those environments (without affecting them). The pending changes can be viewed by anyone.

Erik Osterman

Once approved, those changes are applied -> rolled out.

Erik Osterman

Regardless of this being a contractor or employee, etc - this is a great workflow. You can radically reduce the number of people who need access at all to AWS and instead focus on git-driven operations with total visibility and oversight.

oscar

Exactly the answer I anticipated from you Erik glad I remembered well


2019-08-22

davidvasandani
Amazon Forecast – Now Generally Available | Amazon Web Services

Getting accurate time series forecasts from historical data is not an easy task. Last year at re:Invent we introduced Amazon Forecast, a fully managed service that requires no experience in machine learning to deliver highly accurate forecasts. I’m excited to share that Amazon Forecast is generally available today! With Amazon Forecast, there are no servers to provision. You only need to provide […]

Daniel Minella

What's the better way to update an ecs task with only one container? I'm receiving this error: The closest matching (container-instance 5df0ce11-3243-47f7-b18e-2cfc28397f11) is already using a port required by your task

@Daniel Minella if you use host port 0 in your task definition, ECS will use dynamic port allocation, which works well together with an ALB
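A minimal sketch of that mapping, assuming boto3 and hypothetical names; hostPort 0 (with the default bridge network mode) asks ECS to pick an ephemeral host port per task, which the ALB target group tracks automatically:

import boto3

ecs = boto3.client("ecs")

# hostPort 0 means dynamic port allocation: each task gets a random
# ephemeral host port, so several copies of the same task can share
# one container instance without port conflicts.
ecs.register_task_definition(
    family="my-app",  # hypothetical
    containerDefinitions=[{
        "name": "my-app",
        "image": "my-app:latest",
        "memory": 256,
        "portMappings": [{"containerPort": 8080, "hostPort": 0, "protocol": "tcp"}],
    }],
)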

Daniel Minella

How will ECS handle that? Does it understand that traffic from the LB at port 8080 has to be forwarded to any container inside the cluster? On that port?

Daniel Minella

Thanks!

Daniel Minella

We made it! Thank you again!

Alejandro Rivera

Hi, I have multiple eks clusters across multiple accounts and I would like to give access to all of them to an S3 bucket in one of the accounts using the IAM profile of the instance nodes, but can’t seem to get it right, any tips on how to get this working?

Alex Siegman

You need two pieces to this:

  1. On the bucket, you need to give permissions such as s3:GetObject, as well as add the source roles to the Principals section of the bucket policy document
  2. On the roles that need access to that bucket, you then have to give the permissions for s3 against that resource
Alex Siegman

I do this all the time. The specifics with EKS I can’t help with, but I’d imagine the cluster members have a role they use…

Good example doc here:

https://aws.amazon.com/premiumsupport/knowledge-center/cross-account-access-s3/
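A hedged sketch of piece 1 in boto3, with hypothetical ARNs and bucket name; piece 2 is a matching allow policy attached to each node role in its own account:

import json
import boto3

s3 = boto3.client("s3")

# Piece 1 of 2: the bucket policy in the owning account lists the node
# roles from the other accounts as principals. Piece 2 is a matching
# s3 allow on each of those roles in their own accounts.
policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Principal": {"AWS": [
            "arn:aws:iam::111111111111:role/eks-node-role",  # hypothetical ARNs
            "arn:aws:iam::222222222222:role/eks-node-role",
        ]},
        "Action": ["s3:GetObject", "s3:ListBucket"],
        "Resource": [
            "arn:aws:s3:::shared-bucket",
            "arn:aws:s3:::shared-bucket/*",
        ],
    }],
}
s3.put_bucket_policy(Bucket="shared-bucket", Policy=json.dumps(policy))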

Alejandro Rivera

Nice, thanks for the help @Alex Siegman!

Daniel Minella

How can I run this: docker run -d --name sentry-cron -e SENTRY_SECRET_KEY='<secret-key>' --link sentry-postgres:postgres --link sentry-redis:redis sentry run cron in a task definition? My concern is about run cron. Is it a command? Something like entrypoint: sh and run cron as the command?

Alex Siegman

The run cron would be a command. it would pass through whatever entrypoint script is defined in the Dockerfile

Alex Siegman

Also, probably a better question for #docker

Daniel Minella

Thank you! I’ll try

Daniel Minella

run, cron as the command works for me

Daniel Minella

Thank you

2019-08-20

Erik Osterman

thanks @Nelson Jeppesen for the added context


2019-08-16

Nelson Jeppesen

Interesting, I thought negative ttl was the last value in the data of the SOA. Are you saying negative ttl is reflected by the SOA ttl directly?

dig abc.com soa +short
ns-318.awsdns-39.com. awsdns-hostmaster.amazon.com. 1 7200 900 1209600 86400
Nelson Jeppesen

in this example, i thought 86400 was the negative ttl, but thats not the TTL of the SOA itself

Nelson Jeppesen

unless I’m mixed up

Nelson Jeppesen

Just looked it up, negative ttl is the lower of either the TTL of the SOA _OR_ the last value, 86400 in the above example
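A quick way to check both numbers, assuming the dnspython package; per RFC 2308 the negative-cache TTL is the lower of the SOA record's own TTL and its MINIMUM field:

import dns.resolver  # pip install dnspython

answer = dns.resolver.resolve("example.com", "SOA")
soa = answer[0]

# Negative-cache TTL = min(SOA record's own TTL, SOA MINIMUM field)
print("SOA record TTL :", answer.rrset.ttl)
print("SOA MINIMUM    :", soa.minimum)
print("negative TTL   :", min(answer.rrset.ttl, soa.minimum))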

Nelson Jeppesen

TLDR; lazy me dropped the TTL of the SOA to 60s; thanks!

2019-08-15

Hello, what is the main benefit of shortening SOA TTL to 60 secs? I noticed that in your best practices docs.

Erik Osterman

so in highly elastic environments which are changing or subject to change at any time, a long TTL is a sure fire way to “force” an outage.

Erik Osterman

perhaps the most important TTL is that of the SOA record. by default it’s something like 15m.

Erik Osterman

the SOA (start of authority) works a little bit like a “404” page for DNS (metaphor). when client requests a DNS record for something and nothing is found, the response will be negatively cached for the duration of the SOA.

Erik Osterman

so if your app looks up a DNS record (e.g. for service discovery) and it’s not found, it will cache that for 15m. Suppose after 1m that service is now online. Your app will still cache that failure for 14m causing a prolonged outage.

Erik Osterman

a DNS lookup every request will add up, especially in busy apps. a DNS lookup every 60 seconds is a rounding error.


2019-08-14

2019-08-13

sarkis

Anyone running AWS Client VPN here? We’re having issues just starting an endpoint even – stuck in Associating/pending state for hours

ruan.arcega

i am using this tool in my aws environment https://pritunl.com

Enterprise VPN Server

Free open source enterprise distributed VPN server. Virtualize your private networks across datacenters and provide simple remote access in minutes.

sarkis

Thanks for the rec - I do have some pritunl experience and it was way smoother of an experience than AWS Client VPN has been - going to propose that

Blaise Pabon

I’m new to AWS… and I make a lot of mistakes running Terraform, so I end up with errors like:

aws_s3_bucket.build_cache: Error creating S3 bucket: BucketAlreadyOwnedByYou: Your previous request to create the named bucket succeeded and you already own it.
	status code: 409, request id: 54C0B6BA
Blaise Pabon

is there a switch like -p

Blaise Pabon

that will back off if it already exists.

aknysh

If the bucket is already in AWS but not in the state file, use terraform import

Blaise Pabon

It seems that I cannot import the resource, but it also says the resource is not created because it already exists.

aknysh

That guid is not a resource id

aknysh

It’s a request id from api call

aknysh

Go to AWS console and find the resource id

Blaise Pabon

oh!?

Blaise Pabon

wow

aknysh

If the bucket is in the state file but not in AWS for any reason, use terraform state rm

Blaise Pabon

I think I remember reading about that in…. nowhere ! How very cool.

Blaise Pabon

so I suppose that terraform state rm is less medieval than my rm -rf *tfstate*?

Vitaliy Lobachev

you don't need to delete the whole state, you can delete just the s3 bucket: terraform state rm aws_s3_bucket.build_cache

Blaise Pabon

oh sorry I understand now

aknysh

yea, because of rm -rf *tfstate* you see the error that you see

Blaise Pabon

the fruits of rm -rf *tfstate*


2019-08-12

joshmyers

Anyone had issues with Firehose > ElasticSearch 6.5 ? the ES cluster returned a JsonParseException. Ensure that the data being put is valid.

@Maciek Strömich?

Maciek Strömich

nope. we’re at es5 still for our logging.

joshmyers

@Maciek Strömich Are you Firehose > Lambda processor > ES ?

Maciek Strömich

nope. I’m emulating logstash structure in the logs and pass it directly via firehose to es

joshmyers

Is this data from CloudWatch Logs?

Maciek Strömich

nope. we dropped cwl support because it was a pain to send it to es via firehose

joshmyers

hmm, OK thx

Maciek Strömich

we're not going to contribute back to rsyslog, but we created our solution based on https://github.com/rsyslog/rsyslog/blob/master/plugins/external/solr/rsyslog_solr.py; instead of working directly with es, we push everything to firehose using boto3 with the same structure as our app logs. Way cheaper compared to cwl as well.

rsyslog/rsyslog

a Rocket-fast SYStem for LOG processing. Contribute to rsyslog/rsyslog development by creating an account on GitHub.

Sharanya

Hey people, looking for terraform template on vpc peering ( syntax 0.12) any help plz

did you look at the cloudposse modules ?

Sharanya

yes

2019-08-08

hi, I have a question about AWS Codepipeline + Jenkins. anyone have experience with this?

Have some basic idea. What is the question?

2019-08-07

anyone ever turn on aws s3 transfer acceleration?

and verified that it's worth it?

just uploaded a 300MB file from Los Angeles to an s3 bucket in the Mumbai region. 1.7 minutes with transfer acceleration disabled, and 27 seconds with it enabled, so roughly a 3.8x improvement in speed
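For reference, a minimal sketch of enabling and using acceleration with boto3 (bucket and file names hypothetical); transfers only benefit when they are sent through the accelerate endpoint:

import boto3
from botocore.config import Config

s3 = boto3.client("s3")

# One-time: enable acceleration on the bucket.
s3.put_bucket_accelerate_configuration(
    Bucket="my-bucket",  # hypothetical
    AccelerateConfiguration={"Status": "Enabled"},
)

# Then route transfers through the accelerate endpoint (edge locations).
s3_accel = boto3.client(
    "s3", config=Config(s3={"use_accelerate_endpoint": True})
)
s3_accel.upload_file("big-file.bin", "my-bucket", "big-file.bin")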

2019-08-01

2019-07-31

ruan.arcega

hi guys, i have one question about authentication and authorization with EKS i’ll try explain my pain…

We have all users and groups centralized in Google G Suite, and I am looking for some way to connect G Suite users/groups to EKS. Managing auth is painful; I want to keep using Google G Suite, and I am considering this solution, though I don't know whether it works: AWS Cognito (as identity management) + aws-iam-authenticator in EKS.

Why AWS Cognito? G Suite uses the SAML pattern, and AWS Cognito has the possibility to connect with SAML providers.

I am accepting solutions to get users/groups from Google G Suite and use them in EKS! Anyone have experience with EKS authentication?

mumoshu

AWS has an API called AssumeRoleWithSAML that should work with any IdP that supports SAML. That said, I think you can connect G suite directly to AWS so that the G suite user name [email protected] is accessible via {{SessionName}} in the iam authenticator config(https://github.com/kubernetes-sigs/aws-iam-authenticator#full-configuration-format) which frees you from creating a 1-to-1 mapping of your G suite user to EKS user

kubernetes-sigs/aws-iam-authenticator

A tool to use AWS IAM credentials to authenticate to a Kubernetes cluster - kubernetes-sigs/aws-iam-authenticator

mumoshu

Actually I’m using OneLogin as a SAML IdP instead of G Suite. All I need in the iam authenticator config is several mapRoles entries, one per IAM role

- roleARN: arn:aws:iam:role/Developer
    username: developer:{{SessionName}}
    groups:
    - developers
- roleARN: arn:aws:iam:role/Admin
    username: admin:{{SessionName}}
    groups:
    - admins

The username is translated to something like admin:<my email provided by IdP via SAML, e.g. [email protected]>

ruan.arcega

thanks @mumoshu for your reply. i was lost looking at many solutions; i guess your suggestion is good, but i have one issue now… using aws-iam-authenticator i have to configure an identity provider in the IAM Console; up to this step, ok.

In the SAML settings i have to set up some parameters like: Single sign on URL, Audience URI (SP Entity ID). I have not found the Single sign on URL, for example, after configuring it in the IAM Console.

For these reasons i am looking at AWS Cognito… After the step above is resolved, I can configure aws-iam-authenticator as you showed me…

Blaise Pabon

It depends a bit on where you want to consolidate…. For example @mumoshu uses Onelogin to handle all the identities; other people might use Okta. I use G Suite because most of my apps are based there, so when I run Gitlab in EC2, I use google omniauth. I think Cognito is good if you have lots of apps in your EKS that require auth, because you can create a Cognito pool and it will create the session tokens for them. However, if those apps are already using Google as IdP, then there is little sense in duplicating.

2019-07-30

Partha

Hi all, is there any way to set up an alert for the RDS slow query log?

Jonathan Le

You can have the Slow Query Logs go to Cloudwatch Logs and then setup something to trigger off what lands in Slow Query Log

Jonathan Le

If you forward the Cloudwatch Logs to something like ELK or Splunk or Sumo, you could then setup an alert in one of those things

Jonathan Le

At my last startup, we ended up sending the logs to ELK and sent a slack notification to have someone review if something was being weird. This helped us tune the bad reports and eventually quiet the alert

Partha

How to forward the logs to Splunk or loggly

Lambda function for RDS Slow Query

Lambda functions are just another great tool provided by AWS to solve issues in a modern way! Using Lambda functions, you can run a micro service without a need to have a server and think of how to configure and maintain it! There are lots of use cases for Lambda functions; here I used it to implement a service which sends alerts in case there is a slow query running in RDS. Of course slow queries are important for developers as it helps them to debug better and improve performance of the application. You can find the code here but there are some other things to be considered: As you may know, there are some ways to trigger a Lambda function. In this case, using CloudWatch Events to schedule it periodically makes sense. The lamda function should have some permissions to get RDS Logs and send alerts using SNS. To find out how to define required rules, please see this AWS documentation. You are also asked to do this when creating Lambda function. There is a parameter named ‘distinguisher’ which is actually the keyword specifying the occurrence of slow query. For ‘Postgresql’ RDS it can be ‘ Parameters Group in RDS should be configured to log slow queries. To know how to do this please see AWS documentation or this guide:Enabling slow query log on Amazon RDS
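A hedged, minimal sketch of the pattern the excerpt describes, with hypothetical identifiers; for Postgres the marker would be the duration: prefix that log_min_duration_statement emits:

import boto3

rds = boto3.client("rds")
sns = boto3.client("sns")

def handler(event, context):
    # Scheduled by a CloudWatch Events rule. Scan the newest log file for
    # the slow-query marker and alert via SNS when it appears.
    files = rds.describe_db_log_files(DBInstanceIdentifier="mydb")["DescribeDBLogFiles"]
    newest = max(files, key=lambda f: f["LastWritten"])
    portion = rds.download_db_log_file_portion(
        DBInstanceIdentifier="mydb",
        LogFileName=newest["LogFileName"],
    )
    if "duration:" in portion["LogFileData"]:  # crude "distinguisher" for Postgres
        sns.publish(
            TopicArn="arn:aws:sns:us-east-1:123456789012:slow-queries",
            Message=portion["LogFileData"][-1000:],
        )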

Partha

Thank you so much Let me work on it

2019-07-29

Sharanya

Hey Folks, Trying to find some Terraform Modules related to AWS - app stream service ( for creating fleets and stacks) any help appreciated

2019-07-28

2019-07-26

Anyone have an idea how to change the ecs agent docker image directory to use something else instead of /var/lib/docker ?

How about building your own AMI using packer? Base AMI is the AWS ECS one.

we will do that for sure later

I wanted to just use an ebs volume for now

The configuration has to be changed, so an AMI build is necessary.

cool, thanks

in an ECS optimized instance the agent is already pulled and started

2019-07-24

chrism

one thing azure does better than aws is separate the machine from the data; I can resize a vm in azure with a slider / if the hardware under it has gone to pot, they'll notify you and migrate the machine. AWS is just one big rotating middle finger to your vms when it comes to stuff like that. Change it yourself; or we've murdered it, hope there wasn't anything important on there

joshmyers

Any folks using fargate here? Good tooling/frameworks?

joshmyers

Am tempted to go with airship as feels robust

joshmyers

But also kinda like the idea of fargate cli for unlimited staging environments

joshmyers

Which means the ECS is out of Terraform control

ciastek

Don’t forget about limit of 50 Fargate tasks per region, per account.

Limits can be changed, fargate is just very expensive if you’re doing 50

Steven

Fargate is a lot cheaper than it used to be. Fargate limits can be increased to 1000’s if needed. I think I have it at 3000 in 2 accounts.


2019-07-23

mmarseglia

is there a decent tool to remove all resources within an AWS account?

Erik Osterman

there are a few of them out there

aknysh
gruntwork-io/cloud-nuke

A tool for cleaning up your cloud accounts by nuking (deleting) all resources within it - gruntwork-io/cloud-nuke

mmarseglia

ooh nuke

mmarseglia

thank you

mmarseglia

there are quite a few out there..

aknysh

three that are maintained

Erik Osterman
AWS launches a new tool to help you optimize your EC2 resources – TechCrunch

Here is a small but potentially handy update if you’re an AWS EC2 user. The company today launched a new feature called “EC2 Resource Optimization Recommendations,” which does exactly what the name promises. It’s not flashy, it’s not especially exciting, but it may jus…

2019-07-22

Sharanya

looking for a way: when I have a new s3 file upload, I need Jenkins to trigger a new job. Can anyone help me out?

Suresh

create an SNS topic for s3 events and create an HTTPS subscription on sns
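A minimal sketch of that wiring with boto3 (bucket and topic are hypothetical); the topic's access policy must also allow S3 to publish, and Jenkins then subscribes to the topic over HTTPS:

import boto3

s3 = boto3.client("s3")

# Fire an SNS message for every new object in the bucket; Jenkins
# subscribes to the topic via an HTTPS endpoint and starts the job
# on notification.
s3.put_bucket_notification_configuration(
    Bucket="my-bucket",  # hypothetical
    NotificationConfiguration={
        "TopicConfigurations": [{
            "TopicArn": "arn:aws:sns:us-east-1:123456789012:s3-uploads",  # hypothetical
            "Events": ["s3:ObjectCreated:*"],
        }]
    },
)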

2019-07-19

chrism

The aws eks module (https://github.com/terraform-aws-modules/terraform-aws-eks) is pretty nasty; luckily cloudposse splits it out as I like. Hate nested objects as params in modules (looking at you aws-eks); terraform's dog rough when it comes to isolating things. it's nice to have a flat hierarchy of mods


2019-07-18

Maciek Strömich

hey, is anyone using localstack’s cloudformation mocks in their local pipelines?

chrism

EKS is … hmm. Decided to pop an EKS cluster up via rancher… also hmm, but with extra (no one's tested this)

Are there any good TF examples for a secure eks with calico network isolation and an NLB? (my god does EKS feel like it's optimized for AWS's wallet)

chrism

https://eksctl.io/ seems quite nice

chrism

in the world of “lets all make a new tool for every job”

chrism

Interesting tool; no userdata for the amis though like tf

chrism

The spot configs nice though

chrism

It’s impressive how they made cloudformation sooooooooo god damn slow

Maciek Strömich

don’t tell me about it

chrism

Not being from the parallel universe that created EKS, and assuming node groups are dumpable: without manually intervening via the UI like it's 1999, is there a way to make sure nodes from the ASG are registered with the ILB? All a chap wants is (private net) ILB > (K8 Workers / nginx ingress)

chrism

I tried the k8 ALB module; interesting but who wants to pay per ingress

chrism

if i lean back just hard enough I can see that terraform does what I want

chrism

in that it unglues the extra cloud formation so I can do stuff that wont evaporate

2019-07-17

Maxim Tishchenko

hey everyone, I have an AutomatedSnapshotFailure alarm for my elasticsearch, but i don't have any log or any error in the cluster (it is green). does anyone know what could be a reason for AutomatedSnapshotFailure ? AutomatedSnapshotFailure = Insufficient data

2019-07-16

sarkis
Using AWS Application Load Balancer and Network Load Balancer with EC2 Container Service

Amazon Web Services recently released new second generation load balancers: Application Load Balancer (ALB), and Network Load Balancer…

is using an Aurora Serverless DB cluster with a Heroku app not possible because of this limitation?
You can’t give an Aurora Serverless DB cluster a public IP address. You can access an Aurora Serverless DB cluster only from within a virtual private cloud (VPC) based on the Amazon VPC service.

Erik Osterman

that would appear to be the case. that said, you can deploy a connection proxy

Erik Osterman
CrunchyData/crunchy-proxy

PostgreSQL Connection Proxy by Crunchy Data (beta) - CrunchyData/crunchy-proxy

Erik Osterman
mysql/mysql-proxy

MySQL Proxy is a simple program that sits between your client and MySQL server(s) and that can monitor, analyze or transform their communication. Its flexibility allows for a wide variety of uses, …

Erik Osterman

@ actually you should also be able to create an NLB with a target group to an IP

I’ll try it out. Thanks!

an internet-facing NLB in a public subnet with a target group of Aurora Serverless endpoint IP addresses worked. thanks @Erik Osterman!
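A hedged sketch of that setup in boto3; the VPC ID and target IP are hypothetical, and the IPs come from resolving the Aurora endpoint inside the VPC:

import boto3

elbv2 = boto3.client("elbv2")

# A target group of type "ip" lets an internet-facing NLB forward TCP 5432
# to the Aurora Serverless endpoint's private IPs inside the VPC.
tg = elbv2.create_target_group(
    Name="aurora-serverless",  # hypothetical names/IDs throughout
    Protocol="TCP",
    Port=5432,
    VpcId="vpc-0123456789abcdef0",
    TargetType="ip",
)
arn = tg["TargetGroups"][0]["TargetGroupArn"]

elbv2.register_targets(
    TargetGroupArn=arn,
    Targets=[{"Id": "10.0.1.25", "Port": 5432}],  # from resolving the cluster endpoint
)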


a downside to this approach.. it seems the NLB’s health checks keep “waking” the cluster, causing it to scale up with no other traffic

Erik Osterman

Can you disable all healthchecks?

Erik Osterman


Passive health checks enable the load balancer to detect an unhealthy target before it is reported as unhealthy by the active health checks. You cannot disable, configure, or monitor passive health checks.


It doesn’t look like I can.

Erik Osterman

that’s too bad. but look at it this way… it’s a great way to keep your serverless database awake


Yep. It works out fine for prod env.


ALB you mean

?

nope, network load balancer

Erik Osterman

@ that’s awesome! glad there’s an easy/reliable/scalable workaround

2019-07-15

Check out your target group configuration

2019-07-14

David

is there a way to map port 443 on the LB to port 80 on the web server? If so, where do I make that modification?

Alex Siegman

It's on the load balancer if it's a classic; an application load balancer will forward to a target group, and the target group specifies the destination port

2019-07-13

dalekurt

@ You can look at using VPC peering.

Does it work with App Mesh? AWS support told me that it is not possible to share an App Mesh service or use it with a cross-account role

Now I’m thinking that I should use cloud map to route internally in different accounts using vpc link

The current accounts architecture is:

The inbound traffic enters an account called shared services and is routed to other accounts, but, for example, when account A with an eks app sends traffic to eks in account B, it should be sent over the aws backbone instead of the internet

2019-07-12

Alex Siegman

This just went GA today: https://aws.amazon.com/cdk/

AWS Cloud Development Kit - Amazon Web Services

AWS Cloud Development Kit (CDK) is a software development framework to model and provision your cloud application resources using familiar programming languages.

Alex Siegman

Aiming at Terraform it seems.

Hello there, does someone have experience using App Mesh with k8s, routing across different aws accounts? I have the idea to connect an eks pod app with another eks pod app in another account

The request should go over the aws backbone without going out to the internet

2019-07-11

Maciek Strömich

How do you deal with multiple services running on ecs where every one of them is configured with awsvpc networking?

Maciek Strömich

deal in terms of placing services on instances which e.g. have enough available NICs and doing autoscaling

Maciek Strömich

do you spread applications across multiple clusters and generally don’t care about this issue or do you have a fancy way of having a single ecs cluster with multiple services up and running?

ECS ENI Density Increases · Issue #7 · aws/containers-roadmap

Instances running in awsvpc networking mode will have greater allotments of ENIs allowing for greater Task densities.

Erik Osterman

Great link

Erik Osterman
Elastic Network Interface Trunking - Amazon Elastic Container Service

Each Amazon ECS task that uses the awsvpc network mode receives its own elastic network interface (ENI), which is attached to the container instance that hosts it. There is a default limit to the number of network interfaces that can be attached to an Amazon EC2 instance, and the primary network interface counts as one. For example, by default a

Erik Osterman

(Via previous link)

Erik Osterman
[EKS]: Next Generation AWS VPC CNI Plugin · Issue #398 · aws/containers-roadmap

We are working on the next version of the Kubernetes networking plugin for AWS. We&#39;ve gotten a lot of feedback around the need for adding Kubenet and support for other CNI plugins in EKS. This …

2
dalekurt

Did anyone catch today’s keynote at AWS Summit in NYC?

dalekurt

Any thoughts on AWS CDK for IaC?

2019-07-08

Hi, I am setting up my qa env using terraform, and after the ec2 instance is provisioned it is failing its health check. Any suggestions on why this is happening? I am using the right port number for the health check.

aknysh

@ health check from what? A load balancer or from outside?

aknysh

check security groups in any case

Maciek Strömich
Template format error: Unresolved resource dependencies [] in the Resources block of the template

anyone has any ideas what to look for in cloudformation template?

Maciek Strömich

doesn’t matter. fat fingers putting !Ref without pointing it to a resource

2019-07-07

2019-07-03

2019-07-02

Anyone create an ALB to point to a KOPS k8 cluster?

dunno if that could help

are you sure that ALBs are available as ingress with kops? I seem to remember only ELBs are possible right now

because ALB needs at least a targetgroup

Erik Osterman

We’ve deployed the alb-ingress-controller to a kops cluster

Erik Osterman

we’re currently using it with one customer, but would probably not recommend it unless absolutely necessary.

Erik Osterman
cloudposse/helmfiles

Comprehensive Distribution of Helmfiles. Works with helmfile.d - cloudposse/helmfiles

Erik Osterman

using #helmfile

2019-07-01

vishnu.shukla

arn:aws:iam::…:user/ad.dt is not authorized to perform: kms:DescribeKey on resource: arn:aws:kms:…:634880740321:key/ac43ea17-741a-4347-ac04-88d42ffec899

aaratn

Looks like issue with IAM policy

vishnu.shukla

ah, it got fixed, thanks

vishnu.shukla

can anyone help me to fix this error?

vishnu.shukla

user is trying to access the SQS and SNS and user has full access for SQS and SNS

Nikola Velkovski

@vishnu.shukla I think you should remove your account id

vishnu.shukla

@Nikola Velkovski thanks but got fixed i just created a custom policy for KMS and attached to the user and it worked.

Nikola Velkovski

What I was saying is that it’s not best practice from a security point of view to expose your AWS account id

vishnu.shukla

thats not my AWS id, I edited the AWS ID and user before sending

Nikola Velkovski

ah nice!

Maciek Strömich

@Nikola Velkovski (-:

Nikola Velkovski

¯\_(ツ)_/¯

ruan.arcega
EKSworkshop.com

Amazon EKS Workshop

3
Erik Osterman

Oh, I didn’t realize that eksctl was a joint effort.

And now it is in the official AWS EKS user guide documentation (added at the end of May), so it is the suggested setup

Erik Osterman

I thought it was only by Weaveworks + community

vitaly.markov

yeah, awesome tool for creating EKS cluster compared to AWS EKS Console (web)

2

2019-06-28

2019-06-27

Is anyone here using aws-azure-login and has found an alternative?

Erik Osterman

What’s the objective/end goal?

aws-azure-login is currently used company-wide to log employees who are in Azure AD into AWS. This itself works great for engineers. Installation is an npm install of a package, and it needs a bit of configuration in ~/.aws/config

However, other non-engineers (for example, content people who update images on S3) currently have their own IAM users with specific S3 bucket access and log in with a tool like Cyberduck. IAM users + keys are created for them, and it would be much better and safer to provide them access through AD as well.

But supporting the installation of aws-azure-login for the regular users would be too much work for IT and it would be great to have a simpler tool which would be easier to install / maintain.

Erik Osterman

ok, that’s great context. agree that you might not want to have them get all that other stuff setup.

Erik Osterman

perhaps a better solution would be a self-hosted S3 browser sitting behind a web/oidc proxy?

Sharanya

can someone help me out with this? “The role “arn:aws:iam::…:role/Admin” cannot be assumed. There are a number of possible causes of this - the most common are: * The credentials used in order to assume the role are invalid * The credentials do not have appropriate permission to assume the role * The role ARN is not valid”

Erik Osterman

@Sharanya this is unfortunately quite open ended. There are too many possibilities of what could be wrong.

Sharanya

My Terraform is on 0.12.3. Is there a changelog for the providers?

Sharanya

@Erik Osterman

Erik Osterman

do you mean that everything was working and it stopped after upgrading?

Sharanya

yea

Sharanya

it was working perfect locally without providers

Erik Osterman

i don’t understand what “locally” means in this context and what “without providers” means since terraform cannot function without providers.

Erik Osterman

you mean you are able to assume the role not using terraform? e.g. with the aws cli?

Sharanya

by “locally” I mean terraform was able to do init and plan,

Sharanya

without provider

2019-06-26

Maciek Strömich

that’s one of the best aws news I got today

Maciek Strömich

the other one is the ability to use the secrets property in AWS::ECS::TaskDefinition pointing directly to SSM in CloudFormation
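
For reference, a sketch of the boto3 equivalent of that task definition property (family, image, and parameter ARN are placeholders):

```python
import boto3

ecs = boto3.client("ecs")

ecs.register_task_definition(
    family="my-app",  # placeholder
    containerDefinitions=[{
        "name": "app",
        "image": "123456789012.dkr.ecr.us-east-1.amazonaws.com/app:latest",  # placeholder
        "memory": 512,
        # Injected at runtime as an environment variable; the value never
        # appears in the task definition itself.
        "secrets": [{
            "name": "DB_PASSWORD",
            "valueFrom": "arn:aws:ssm:us-east-1:123456789012:parameter/my-app/db-password",
        }],
    }],
    # The task execution role must be allowed to read the SSM parameter.
    executionRoleArn="arn:aws:iam::123456789012:role/ecsTaskExecutionRole",  # placeholder
)
```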

Maciek Strömich
New – VPC Traffic Mirroring – Capture & Inspect Network Traffic | Amazon Web Services

Running a complex network is not an easy job. In addition to simply keeping it up and running, you need to keep an ever-watchful eye out for unusual traffic patterns or content that could signify a network intrusion, a compromised instance, or some other anomaly. VPC Traffic Mirroring Today we are launching VPC Traffic Mirroring. […]

Maciek Strömich

seems like aws is more and more showing its “be nice to law enforcement” face

Lee Skillen

We’re still waiting on cross-region support for Aurora Postgres - That would really make my day.

rohit

we are waiting for it too.

2019-06-25

Maciek Strömich
Introducing Service Quotas: View and manage your quotas for AWS services from one central location | Amazon Web Services

Today we are introducing Service Quotas, a new AWS feature which enables you to view and manage your quotas, also known as limits, from a central location via the AWS console, API or the CLI. Service Quotas is a central way to find and manage service quotas, an easier way to request and track quota increases, […]

Erik Osterman

Does this mean we don’t need to open support tickets any more to raise our limits for EC2 instances?

Alex Siegman

looks like it’s just wrapping the existing way of requesting increases

Alex Siegman

it adds a few nice features though, like service quota templating

Lee Skillen

@Erik Osterman And here I thought it was just us that do the weekly “ask AWS for more of X” dance.

they have to justify the support teams somehow.

1

maybe I’m being mean….

Lee Skillen
04:35:31 PM

looks to see if it has an API to programmatically create requests

Lee Skillen

Quite nice to see everything in one place though, even incl. non-adjustable limits

Maciek Strömich

@Lee Skillen “On AWS everything’s an API” - Werner Vogels

1
Lee Skillen

Seems to be missing quite a few things (e.g. CloudFront + Lambda@Edge), but it’s a good start

Lee Skillen

It’d also be nice to see what the current usage against a limit is, which they must know in order to actually apply a limit (you’d assume)

1
Maciek Strömich

ya, I would love to see that as well

Lee Skillen

@keen You joke, and yet I think this is the main reason we pay for the Business support (at a painful percentage of our monthly AWS cost), to get quick limit increase turnarounds. I’d drop it in a heartbeat if this turns out to be quick, easy and painless.

Lee Skillen

Last thing I’d love to see is an articulation of the hard limits (since these are soft limits only, for those that are increasable) - I get why AWS doesn’t want to publish them, but it’s nice to know exactly where the headroom extends to for some of the limits. I couldn’t get a sufficient answer from them for the absolute maximum number of CloudFront distributions.

Maciek Strömich

@Lee Skillen i think they aren’t able to provide you with a hard limit because it changes over time depending on new hardware installation/hardware upgrades/technology evolving/any other reason

Lee Skillen

Yup

Maciek Strömich

aws is constantly evolving and tries to squeeze as much as possible from their setup

Lee Skillen

I agree! That’s what I meant by “I get why AWS doesn’t want to publish them”. They might not even always know for certain.

Maciek Strömich

business support is nice to have. especially when your cf stack goes rogue and hangs in e.g. UPDATE_ROLLBACK_FAILED

Maciek Strömich

it also saved us from a few other headaches in the past when we put too much trust in developers’ hands

Has anyone ever run into CodeDeploy failing at AllowTraffic step?

My instance is showing up in the targets as Healthy.. but CodeDeploy just times out after 5min on AllowTraffic

2019-06-24

squidfunk

A question regarding certificate management: I need to set up a bunch of very lightweight HTTP webservers (only a single endpoint) through ECS in different regions (and within each region within 2-3 AZs for HA). The web server Docker images will be built from scratch with a single self-contained binary (with Go). I don’t want to use an ALB, but just leverage Route53’s latency based routing. I know that in case of an outage I will lose some payloads before DNS re-routing kicks in due to DNS caching (3min), but that’s not a problem. This means I need to do TLS termination on the web servers using Let’s Encrypt certificates. Certificate creation can be easily automated with certbot which can cache the certificates. However, those caches are on the specific machines and CSRs are limited to 20 a week, which doesn’t scale. Does anybody have a good solution for centrally managing Let’s Encrypt certificates and keeping them up to date on specific servers?

Alex Siegman

There are tools that can do this - traefik for example has the ability to store/maintain LE certs; cert-manager in kubernetes, etc. I just went through needing to automate 1000s of these. Everything I researched was basically “build your own service to manage it.” We ended up settling on using traefik enterprise because it fit our use case (we needed the reverse proxy anyways), but it doesn’t fit your design parameters.

I didn’t find anything that is basically an off the shelf certificate management service with like an api you could use while building your containers or what not.

Maciek Strömich

@squidfunk for simplicity I would store the generated files in S3 protected with KMS, or in the SSM encrypted parameter store, then download/get the objects from either in entrypoint.sh. This setup works for us pretty well, but we don’t have anything critical. for more critical setups we use elb with acm.

1
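
A sketch of the fetch step an entrypoint could run before starting the web server, under the setup described above (parameter names and file paths are placeholders; assumes the cert and key were stored as SecureStrings):

```python
import boto3

ssm = boto3.client("ssm")

# Pull the PEM material that the renewal job wrote into Parameter Store.
for name, path in [("/certs/example.com/fullchain", "/etc/tls/fullchain.pem"),
                   ("/certs/example.com/privkey", "/etc/tls/privkey.pem")]:
    value = ssm.get_parameter(Name=name, WithDecryption=True)["Parameter"]["Value"]
    with open(path, "w") as f:
        f.write(value)
```
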
squidfunk

@Maciek Strömich that’s a great idea, thanks! How do you make sure that certificates are re-generated and web servers are restarted? CloudWatch Events + S3 Lambda triggers?

Maciek Strömich

as I said, it’s nothing critical, and if a cert expires nothing bad happens. someone (one of 3 people) on the internal network will get a cert warning. we regenerate the certs in a semi-automatic fashion based on tasks from our GRC calendar (someone has to run a make command), and the last step in that target is to kill the currently running containers using the aws command line tools.

1
squidfunk

Just as an FYI: I solved it by using the Terraform ACME provider which will automatically generate certificates via route53 DNS challenge and save them in SSM. Terraform will also perform automatic renewal on the next apply when necessary (i.e. expiration date is x days away).

1
Reinholds Zviedris

Question regarding integrating AWS with Azure AD SSO - more specifically using `aws-azure-login` (https://github.com/dtjohnson/aws-azure-login) on top of that. I did everything according to this tutorial - https://docs.microsoft.com/en-us/azure/active-directory/saas-apps/amazon-web-service-tutorial - and I’m able to log in via web browser, but when I configure `aws-azure-login` and try to log in, I receive the following error message. Any clues how to solve it?

Reinholds Zviedris

Has someone seen something like this? Or have an idea what could be wrong?

Reinholds Zviedris

I’ve been banging my head against the wall for a week already

Reinholds Zviedris

I managed to solve the issue myself. Turns out I needed to change the SAML link in the settings from the AWS one to the MS one.

2019-06-19

Funny issue we had where I work the other day which I’d like to share. One instance could not retrieve from ECR, but another one in another VPC with the same credentials could. The VPC had an S3 endpoint configured with a restrictive policy…

Erik Osterman

aha! tricky

2019-06-14

aknysh

The solution generates a CloudFront distribution and caches images there. At least it’s supposed to do so, but I also noticed earlier that in some cases the images were not cached, with a “Miss from CloudFront” response header

aknysh

I didn’t go into the details, but I guess the request cache headers are generated incorrectly on the origin

aknysh

I would check the cloudfront distribution and see what and how it uses the cache headers

aknysh

And maybe try to override the origin headers

I can’t even see the CloudFront distribution in front of the API Gateway

I see the one in front of the S3 bucket that stores the utility that generates the images and original images

The API Gateway one presumably is under an AWS-owned account

Carl Utter

Question: Can somebody share which AWS Namespace option I should choose?

The Cold Start Prerequisites section states that you need to have a domain and a namespace set up (I assume that means “within AWS”). However, when I choose to create a namespace within AWS I am presented with three options, of which I must choose one… but the Cold Start instructions do not indicate which one I should choose:

1) API calls - the application discovers service instances by specifying the namespace name and service name in a DiscoverInstances request.

2) API calls and DNS queries in VPCs - AWS Cloud Map automatically creates an Amazon Route 53 private hosted zone that has this name.

3) API calls and public DNS queries - AWS Cloud Map automatically creates an Amazon Route 53 public hosted zone that has the same name.

2019-06-13

antonbabenko
11:27:26 AM

It is soon time to find out who is going to be at AWS re:Invent and wants to meet there this year. I’m going.

@ The Serverless Image Handler page states “The solution generates an Amazon CloudFront domain name that provides cached access to the image handler API.” Is that misleading then?

Another overview page says “modified images are cached in CloudFront”

CC @aknysh

Unless the intent is to generate the images that are needed and then save them in the S3 bucket

Actually, I may be wrong

Are you using an edge or regional endpoint?

2019-06-12

@ - yes

Although API Gateway does use Cloudfront - it’s not fully featured

2019-06-11

….presumably anyone paying aws bills (who isn’t at the PO level) is using a credit card with rewards… for non-SP businesses, though, as mentioned, that card is a personal card, so probably not the best idea to use for business expenses.

I keep on getting cache missed from CloudFront in front of Api Gateway for Serverless Image Handler

Do I need to enable API cache on the API Gateway Stage for CloudFront to cache the images?

2019-06-05

aknysh
New – Data API for Amazon Aurora Serverless | Amazon Web Services

If you have ever written code that accesses a relational database, you know the drill. You open a connection, use it to process one or more SQL queries or other statements, and then close the connection. You probably used a client library that was specific to your operating system, programming language, and your database. At […]

aknysh
Amazon Aurora: design considerations for high throughput cloud-native relational databases

Amazon Aurora: design considerations for high throughput cloud-native relational databases Verbitski et al., SIGMOD’17 Werner Vogels recently published a blog post describing Amazon Aurora as their fastest growing service ever. That post provides a high level overview of Aurora and then links to two SIGMOD papers for further details. Also of note is the recent announcement of Aurora serverless. So the plan for this week on The Morning Paper is to cover both of these Aurora papers and then look at Calvin, which underpins FaunaDB. Say you’re AWS, and the task in hand is to take an existing relational database (MySQL) and retrofit it to work well in a cloud-native environment. Where do you start? What are the key design considerations and how can you accommodate them? These are the questions our first paper digs into. (Note that Aurora supports PostgreSQL as well these days). Here’s the starting point: In modern distributed cloud services, resilience and scalability are increasingly achieved by decoupling compute from storage and by replicating storage across multiple nodes. Doing so lets us handle operations such as replacing misbehaving or unreachable hosts, adding replicas, failing over from a writer to a replica, scaling the size of a database instance up or down, etc. So we’re somehow going to take the backend of MySQL (InnoDB) and introduce a variant that sits on top of a distributed storage subsystem. Once we’ve done that, network I/O becomes the bottleneck, so we also need to rethink how chatty network communications are. Then there are a few additional requirements for cloud databases: SaaS vendors using cloud databases may have numerous customers of their own. Many of these vendors use a schema/database as the unit of tenancy (vs a single schema with tenancy defined on a per-row basis). “As a result, we see many customers with consolidated databases containing a large number of tables. Production instances of over 150,000 tables for small databases are quite common. This puts pressure on components that manage metadata like the dictionary cache.” Customer traffic spikes can cause sudden demand, so the database must be able to handle many concurrent connections. “We have several customers that run at over 8000 connections per second.” Frequent schema migrations for applications need to be supported (e.g. Rails DB migrations), so Aurora has an efficient online DDL implementation. Updates to the database need to be made with zero downtime The big picture for Aurora looks like this:

The database engine is a fork of “community” MySQL/InnoDB and diverges primarily in how InnoDB reads and writes data to disk. There’s a new storage substrate (we’ll look at that next), which you can see in the bottom of the figure, isolated in its own storage VPC network. This is deployed on a cluster of EC2 VMs provisioned across at least 3 AZs in each region. The storage control plane uses Amazon DynamoDB for persistent storage of cluster and storage volume configuration, volume metadata, and S3 backup metadata. S3 itself is used to store backups. Amazon RDS is used for the control plane, including the RDS Host Manager (HM) for monitoring cluster health and determining when failover is required. It’s nice to see Aurora built on many of the same foundational components that are available to us as end users of AWS too.

Durability at scale: The new durable, scalable storage layer is at the heart of Aurora. If a database system does nothing else, it must satisfy the contract that data, once written, can be read. Not all systems do. Storage nodes and disks can fail, and at large scale there’s a continuous low level background noise of node, disk, and network path failures. Quorum-based voting protocols can help with fault tolerance. With V copies of a replicated data item, a read must obtain V_r votes, and a write must obtain V_w votes. Each write must be aware of the most recent write, which can be achieved by configuring V_w > V/2. Reads must also be aware of the most recent write, which can be achieved by ensuring V_r + V_w > V. A common approach is to set V = 3 with V_r = V_w = 2. We believe 2/3 quorums are inadequate [even when the three replicas are each in a different AZ]… in a large storage fleet, the background noise of failures implies that, at any given moment in time, some subset of disks or nodes may have failed and are being repaired. These failures may be spread independently across nodes in each of AZ A, B, and C. However, the failure of AZ C, due to a fire, roof failure, flood, etc., will break quorum for any of the replicas that concurrently have failures in AZ A or AZ B.

Aurora is designed to tolerate the loss of an entire AZ plus one additional node without losing data, and an entire AZ without losing the ability to write data. To achieve this, data is replicated six ways across 3 AZs, with 2 copies in each AZ. Thus V = 6; V_w is set to 4, and V_r is set to 3. Given this foundation, we want to ensure that the probability of double faults is low. Past a certain point, reducing MTTF is hard. But if we can reduce MTTR then we can narrow the ‘unlucky’ window in which an additional fault will trigger a double fault scenario. To reduce MTTR, the database volume is partitioned into small (10GB) fixed size segments. Each segment is replicated 6-ways, and the replica set is called a Protection Group (PG). A storage volume is a concatenated set of PGs, physically implemented using a large fleet of storage nodes that are provisioned as virtual hosts with attached SSDs using Amazon EC2… Segments are now our unit of independent background noise failure and repair. Since a 10GB segment can be repaired in 10 seconds on a 10Gbps network link, it takes two such failures in the same 10 second window, plus a failure of an entire AZ not containing either of those two independent failures to lose a quorum.
“At our observed failure rates, that’s sufficiently unlikely…” This ability to tolerate failures leads to operational simplicity: hotspot management can be addressed by marking one or more segments on a hot disk or node as bad, and the quorum will quickly be repaired by migrating it to some other (colder) node OS and security patching can be handled like a brief unavailability event Software upgrades to the storage fleet can be managed in a rolling fashion in the same way. Combating write amplification A six-way replicating storage subsystem is great for reliability, availability, and durability, but not so great for performance with MySQL as-is: Unfortunately, this model results in untenable performance for a traditional database like MySQL that generates many different actual I/Os for each application write. The high I/O volume is amplified by replication. With regular MySQL, there are lots of writes going on as shown in the figure below (see §3.1 in the paper for a description of all the individual parts).

Aurora takes a different approach: In Aurora, the only writes that cross the network are redo log records. No pages are ever written from the database tier, not for background writes, not for checkpointing, and not for cache eviction. Instead, the log applicator is pushed to the storage tier where it can be used to generate database pages in background or on demand.

Using this approach, a benchmark with a 100GB data set showed that Aurora could complete 35x more transactions than a mirrored vanilla MySQL in a 30 minute test.

Using redo logs as the unit of replication means that crash recovery comes almost for free! In Aurora, durable redo record application happens at the storage tier, continuously, asynchronously, and distributed across the fleet. Any read request for a data page may require some redo records to be applied if the page is not current. As a result, the process of crash recovery is spread across all normal foreground processing. Nothing is required at database s…

aknysh

Interesting articles ^

2019-06-04

new tool I released :

claranet/go-s3-describe

A tool to list all S3 Buckets of an AWS account with their main statistics. Buckets are sorted by size. - claranet/go-s3-describe

@ I mean, if I use a classic AWS key/secret key, will the workflow in your tool use an AWS CLI profile? I would rather use aws-vault

I understand you’re using federation/STS. Thanks for the feedback

If I get it, you don’t add your keys in .credentials/.config, but use aws-vault to store them and get them on the fly?

but after getting them, when you create a session, AWS provides you a token, and this token is used by the CLI or anything else to auth your requests

I guess you can keep this pattern, as you could do with the AWS CLI or anything else

yep I don’t store my keys in aws cli config as they are stored in plain text

99designs/aws-vault

A vault for securely storing and accessing AWS credentials in development environments - 99designs/aws-vault

have you had any problems using this in an assume_role setting with MFA with Ruby gems?

99designs/aws-vault

A vault for securely storing and accessing AWS credentials in development environments - 99designs/aws-vault

I was trying terraforming to do a backup, and with aws-vault it does not seem to work

although any other cli command works

does your script use the credentials from ENV, or try to grab a profile from the AWS CLI?

it’s not my script, I’m using the terraforming project

dtan4/terraforming

Export existing AWS resources to Terraform style (tf, tfstate) - dtan4/terraforming

it uses the ruby aws sdk

so it should use the session env , region etc

aws-vault exec home -- env | grep AWS

perfect, my tool can use env vars

it should work as well

I’d be curious about a test, if you can

I will do it later once I get my backlog cleared. Will keep you updated

thanks, I’ll let you comment and close your issue if you can

it populates them as env variables in temp context

by the way there is no licence in your released code.

oh shit, thanks, I thought it was

fixed yesterday

add License · claranet/aps@…

Easy switch between AWS Profiles and Regions. Contribute to claranet/aps development by creating an account on GitHub.

2019-06-03

claranet/aps

Easy switch between AWS Profiles and Regions. Contribute to claranet/aps development by creating an account on GitHub.

2

2019-06-02

2019-05-31

@Tim Malone @Daniel Lin https://github.com/claranet/sshm

claranet/sshm

Easy connect on EC2 instances thanks to AWS System Manager Agent. Just use your ~/.aws/profile to easily select the instance you want to connect on - claranet/sshm

2
1
1

need a binary?

i’m planning to use goreleaser soon

doesn’t support aws-vault

@atom seems you are the maintainer at Oxalide

sorry don’t know what you mean - must be a different Thomas

me I guess

would be great if we could keep the credentials in a safe vault instead of plain clear text

we don’t have secrets in our aws/config

we use our adfs to connect on our main account and then switch role

can you tell me more about what you want please

2019-05-30

Tim Malone

that looks nice @! i don’t suppose… you’ve open sourced that?

1

I want to, still discussing it

2
Abel Luck

so testing session manager has been going well. our team likes it.

Abel Luck

we don’t copy files, so that hasn’t been an issue

Abel Luck

however it turns out we did use ssh tunnels to access RDS postgres instances for running ad hoc analytics/queries

Abel Luck

thinking about how best to manage that now

@Abel Luck same trouble for us, we haven’t figured out yet how to manage tunnels to access RDS

we do copy files, but we use S3 for that, even with presigned URL to PUT/GET

the most important gain for my part is that I’m able to quickly connect to any instance when an on-call alert triggers, without waiting for the VPN to be up

I’m thinking about how to integrate the websocket connection directly in my golang app, no extra dependency

as several of you seem interested, I will try to release it tomorrow (need to find a better name)

Abel Luck

we use metabase for doing adhoc queries to share with other folks and it works quite well, gotta get devs/sysadmins to use it too now

Abel Luck
Metabase

The fastest, easiest way to share data and analytics inside your company. An open source Business Intelligence server you can install in 5 minutes that connects to MySQL, PostgreSQL, MongoDB and more! Anyone can use it to build charts, dashboards and nightly email reports.

1
Abel Luck

though it’s more for read-only querying.

Abel Luck

maybe an instance with pgadmin accessed via vpn would suffice too

is this a kind of phpMyAdmin or Adminer?

Abel Luck

pgadmin is like phpmyadmin but for postgres (and way better). Metabase isn’t like any of those.. it really stands alone. great tool.

a tool I wrote that I would like to make FOSS is this:

Abel Luck

yea

ok, will give it a try

but a lot of our customers (we’re a managed services provider) use MySQL Workbench or other apps on their laptops

and more MySQL than Postgres (sic)

Abel Luck

yea, then some sort of tunnel will be needed

Abel Luck

one could always configure ssh such that it allows database connections but not shell access

sure, but you still need a kind of bastion

Abel Luck

yea indeed

lambda + spot instance could provide us a “BaaS”

bastion as a Service

you call an API Gateway with your credentials, it creates an EC2 instance with a good SG, opened only to your IP, and tadaaa

if the lambda detects there’s no more traffic for a while, we terminate it

Abel Luck

yea nice

Abel Luck
Match User rds-only-user
   AllowTcpForwarding yes
   X11Forwarding no
   PermitTunnel no
   GatewayPorts no
   AllowAgentForwarding no
   PermitOpen your-rds-hostname:5432
   ForceCommand echo 'No shell access.'
Abel Luck

that sshd config, i think, is all you need to allow port forwarding only

thanks

explanation: a pure Golang app which lists all your AWS resources at high granularity and stores all the resources, with the links between them, in a graph DB

you can query and get a lot of facts

easy to find out which EC2 instances have access to RDS

which EC2 instances are open to the world on port 22

ec2 without snapshots

etc etc

Abel Luck

super cool!

with the new UI, it looks like this

after a big clean up and some doc, i will share it for sure

Tim Malone

That looks great! Would love to take it for a spin

can you not give metabase the RDS hostname? or is it in a different VPC?

Abel Luck

exactly, deploy metabase inside the vpc, route through a LB.

2019-05-29

Bogdan

anyone encountered “The AWS Access Key Id you provided does not exist in our records” after aws sts assume-role and exporting all the output into ENV vars?
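
That error often means the third credential component didn’t make it into the environment; a sketch of the full set (role ARN is a placeholder), since temporary STS keys are rejected without their session token:

```python
import boto3

sts = boto3.client("sts")
creds = sts.assume_role(
    RoleArn="arn:aws:iam::123456789012:role/Admin",  # placeholder
    RoleSessionName="debug",
)["Credentials"]

# All three must be exported; AWS_ACCESS_KEY_ID + AWS_SECRET_ACCESS_KEY
# from assume-role are invalid without AWS_SESSION_TOKEN.
print(f'export AWS_ACCESS_KEY_ID={creds["AccessKeyId"]}')
print(f'export AWS_SECRET_ACCESS_KEY={creds["SecretAccessKey"]}')
print(f'export AWS_SESSION_TOKEN={creds["SessionToken"]}')
```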

We use SSM too; I wrote a small Go program that lets us select an instance quickly

you can select your instance with the arrow keys, or filter by typing something, and Enter connects you to the instance directly

IAM manages who can access

2019-05-28

Abel Luck

I’m thinking of trying out the System Manager session manager feature of aws and doing away with a bastion entirely. we only need shell access for debugging, so accessing it through the session manager will get us auditing and remove the need for key management.

Suresh

Ansible should straight away help instead of startup scripts / AMI builds.

Tim Malone

Session manager is great - love being able to use IAM to control ‘SSH’ access. Only thing it’s missing is if you need to SCP stuff - we’ve written a quick wrapper for aws s3 cp to make that feel a bit more native (basically using S3 as a proxy of sorts, so you have to run it both locally and remotely to push/pull the file you want).

2019-05-27

Abel Luck

I’m looking for a solution to manage ssh access to internal hosts via a bastion box. Teleport is way more than what we need.

Abel Luck

my team all use SSH keys on physical tokens. so far we bake the ssh keys into the AMIs

Abel Luck

but revoking a user’s access requires rebuilding the AMIs, which isn’t ideal.

Abel Luck

Hoping to find a simple system to dynamically add/remove keys

in the past I’ve put public keys in s3, and baked a script on each box that is set to run every 15 minutes which adds/removes the keys on a host based on what’s in s3

then if you want to revoke a key, you just delete it from s3

kinda low tech, but does the job
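
A sketch of that cron script’s core (bucket and paths are placeholders): rebuild authorized_keys from whatever keys currently exist under an S3 prefix, so deleting an object revokes the key within one cycle.

```python
import boto3

BUCKET = "my-ssh-keys"          # placeholder
PREFIX = "ops-team/"            # placeholder
AUTHORIZED_KEYS = "/home/ops/.ssh/authorized_keys"

s3 = boto3.client("s3")

keys = []
for page in s3.get_paginator("list_objects_v2").paginate(Bucket=BUCKET, Prefix=PREFIX):
    for obj in page.get("Contents", []):
        body = s3.get_object(Bucket=BUCKET, Key=obj["Key"])["Body"].read().decode()
        keys.append(body.strip())

# Rewrite the whole file each run: keys removed from S3 disappear here too.
with open(AUTHORIZED_KEYS, "w") as f:
    f.write("\n".join(keys) + "\n")
```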

Abel Luck

that is a simple solution!

Rice Bowl Junior

Maybe via Secrets Manager? And load the keys on startup via a common tag or something like that. When you want to delete an access, just delete the Secrets Manager entry.

Haven’t tried that, just suggesting things

a bastion is easier in the sense that you revoke users’ keys only on the bastion host(s)

and then you can just open ssh from that specific host

Abel Luck

so you enable ssh without authentication from the bastion to the internal host?

Yes

Ohhh wait

You mean without having to copy the keys?

Tim Malone

It’s probably a good idea to have the keys on both the bastion and the internal host… one can also then use proxycommand to jump. You can still revoke access by just removing the key on the bastion, and then removing from internal hosts at a more leisurely pace

yes, you will have to have all the keys copied over somehow, with their home dirs and authorized_keys in .ssh, unless you want to share a single key on the bastion, but that is far more insecure

Juan Cruz Diaz

Hi everyone! I’d like to invite you to the next chapter of our webinar series next Thursday 04/30, where we’ll talk about how to create and administer a production environment, reaching operational excellence, and how to incorporate these processes into your workplace.

It’s a great learning opportunity no matter what role you have, as long as your business relies on IT workloads.

See you there!

https://www.eventbrite.com.ar/e/alcanzando-la-excelencia-operacional-tickets-62208718953

Alcanzando la excelencia operacional

Metrics, tools and good practices for monitoring your cloud environments. What will you see? The importance of operational excellence in the cloud and how to approach it. Preparing your environment to operate in production. Saving the operations team problems and sleepless nights. Building your ecosystem of tools, metrics and alarms to get proactive and predictive monitoring in production.

AgustínGonzalezNicolini

Hey @Juan Cruz Diaz where are you from?

Juan Cruz Diaz

Hi Agus. I’m from Argentina. So, if you want, we can talk in Spanish

AgustínGonzalezNicolini

Not sure everyone will catch on to it jajaj, but still good to have a local guy

I’m Chilean…..

please, let’s not fight

maybe we should create a terraform-es channel

AgustínGonzalezNicolini

JAjaj we won’t fight for any reason

AgustínGonzalezNicolini

@Erik Osterman would you mind if we created a #terraform-es channel? the guys here would like that

Erik Osterman

Sure!

AgustínGonzalezNicolini

Awesome, we should all commit to translating into English anything significant or relevant to all

Erik Osterman

I have created the #terraform-es channel

AgustínGonzalezNicolini

Thanks Erik

2019-05-22

Bogdan
12:03:20 PM

trying my luck here as well

hey everyone! Is there an easy way to also store/export/save apply outputs to SSM Parameter Store? The main reason being so that they’re consumed by other tools/frameworks which are non-Terraform?

Maciek Strömich

SSM Parameter Store has a few purposes, one of which is not exposing things like secrets. The correct way to do it is to integrate the other tools with SSM Parameter Store, not to expose values via Terraform

Maciek Strömich

(and yes I can understand that it’s not always possible)

Bogdan

thanks @Maciek Strömich - I’m not using Terraform to expose them, but to provision infrastructure. Once provisioned successfully, the ARNs, IDs and names of those resources are stored in the JSON-like state file. If I’d like to reference them from another framework like Serverless or CDK, I need to use HCL and the remote state datasource. The reason for using SSM (Parameter Store), which also has String and StringList types, is to allow others to get the IDs/ARNs/etc of the resources built with Terraform

Maciek Strömich

ah, that makes more sense

Maciek Strömich

I’m not even a terraform noob so can’t help with that.

Maciek Strömich

I misread your message

Maciek Strömich

sorry for making noise

aknysh

@Bogdan we do it all the time - store TF outputs in SSM for later consumption from other modules or apps

aknysh

for example, https://github.com/cloudposse/terraform-aws-ecs-atlantis/blob/master/main.tf#L190 - here we save a bunch of fields into SSM

cloudposse/terraform-aws-ecs-atlantis

Terraform module for deploying Atlantis as an ECS Task - cloudposse/terraform-aws-ecs-atlantis

aknysh

Then we use chamber to consume them

aknysh
cloudposse/geodesic

Geodesic is a cloud automation shell. It&#39;s the fastest way to get up and running with a rock solid, production grade cloud platform built on top of strictly Open Source tools. ★ this repo! h…

aknysh

You can use any other means to read them, terraform or SDK

Bogdan

@aknysh I saw that, but unfortunately not all modules are built like yours: your solution is module-dependent, so if I haven’t written the module myself and am just using one from the registry, I can’t go and create N+1 PRs to add SSM params to all the open-source modules

Bogdan

@aknysh I could however use the module outputs and do them outside

aknysh

You can wrap any module in your module and then add SSM write for the outputs

Bogdan

which is what I’ll try to do, but it’s still a suboptimal solution as I have to do it every time I create a resource/module I’d like in SSM

aknysh

That’s what you have to do anyway since not all modules will need SSM

Bogdan

I started building something that iterates through terraform state list, then calls terraform state show on a particular type of resource - like VPCs, subnet_ids, etc

Bogdan

so I don’t have to do it at module init or handle it on a per-module basis

Bogdan

it’s just a pity that terraform state show doesn’t return JSON

Bogdan

and I have to combine head and awk to get the value that interests me, which I then have to aws ssm put-parameter with
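
One simpler route than scraping terraform state show (a hedged sketch; the parameter prefix is a placeholder): terraform output -json is machine-readable, so a small wrapper can push each root-level output straight into Parameter Store.

```python
import json
import subprocess

import boto3

ssm = boto3.client("ssm")

# Root-module outputs as JSON; avoids head/awk over `terraform state show`.
outputs = json.loads(subprocess.check_output(["terraform", "output", "-json"]))

for name, data in outputs.items():
    ssm.put_parameter(
        Name=f"/myproject/{name}",   # placeholder prefix
        Value=json.dumps(data["value"]),
        Type="String",
        Overwrite=True,
    )
```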

aknysh

Hmm, that should work, but looks complicated. Why not just assemble low-level modules into a top-level module and then write just the outputs that you really need to SSM?

1
Bogdan

I’m actually considering doing that since it’s faster

aknysh

there is a separate module for writing and reading to/from SSM https://github.com/cloudposse/terraform-aws-ssm-parameter-store

cloudposse/terraform-aws-ssm-parameter-store

Terraform module to populate AWS Systems Manager (SSM) Parameter Store with values from Terraform. Works great with Chamber. - cloudposse/terraform-aws-ssm-parameter-store

1
Bogdan

@aknysh did you also get `error creating SSM parameter: TooManyUpdates`?

Erik Osterman

I have definitely run into this problem

Erik Osterman

It’s just another terraformism

Erik Osterman

No way to work around it other than to rerun

Bogdan
aws_ssm_parameter TooManyUpdates error · Issue #1082 · terraform-providers/terraform-provider-aws

Terraform Version 0.9.11 Affected Resource(s) aws_ssm_parameter Terraform Configuration Files variable &quot;configs&quot; { description = &quot;Key/value pairs to create in the SSM Parameter Store…

aknysh

how many params are you writing at the same time?

aws_ssm_parameter TooManyUpdates error · Issue #1082 · terraform-providers/terraform-provider-aws

Terraform Version 0.9.11 Affected Resource(s) aws_ssm_parameter Terraform Configuration Files variable &quot;configs&quot; { description = &quot;Key/value pairs to create in the SSM Parameter Store…

Bogdan

10-15

Bogdan

but the error went away after a subsequent apply

sarkis

hmm seems like AWS rate limiting

sarkis

I wonder if it’s a safety mechanism so they can preserve the history of changes since SSM parameter store does keep a version history of changes… if it’s eventually consistent like most AWS resources, this is my guess on why this limitation is there

2019-05-17

Maciek Strömich
localstack/localstack

A fully functional local AWS cloud stack. Develop and test your cloud & Serverless apps offline! - localstack/localstack

2019-05-15

joshmyers
Automated AWS logging pipeline • Josh Myers

The problem space Back in the day, a logging pipeline was a pretty manual thing to setup and manage. Yes, configuration management tools like Chef/Puppet made this somewhat easier, but you still had to run your own ELK stack (OK, it didn’t have to be ELK, but it probably was, right?) and somehow get logs in there in a robust way. You’d probably be using some kind of buffer for the logs between source and your ELK stack.

1
vishnu.shukla

an IAM user has AmazonS3FullAccess, but she still fails to upload and download the file

vishnu.shukla

any clue why?

Maciek Strömich

@joshmyers

A CloudWatch Log Group can subscribe directly to ElasticSearch, so why bother with Kinesis and Lambda? Flexibility and ElasticSearch lock-in

Subscribing to ES from CW Logs requires a lambda function that translates the gzipped CW Logs format for ES. If someone did not automate it and just clicked subscribe, then the lambda will be created automatically, but automating it means maintaining this lambda function yourself.

joshmyers

Ah, good to know. This was going back some years now, not sure what may have changed too

joshmyers

Am working on something similar now, but significantly more complex for a global client

joshmyers

~15TB a day of log data

Maciek Strömich

only app logs or os logs as well?

joshmyers
  • flowlogs + elb + s3 + ALL THE LOGS
Maciek Strömich

ah

Maciek Strömich

yeah that can be complex to maintain

joshmyers

Across multiple (read: hundreds) of AWS accounts and back to a Splunk cluster backed by tin on-prem

joshmyers

You can guess where the bottleneck is

Maciek Strömich

depends on the on-prem part

joshmyers

spoiler: It is terrible

Steven

If you use fargate, it can log directly to splunk now (small good news for you :)

joshmyers

What happens when Splunk is having issues?

Steven

Not sure. I haven’t looked into it in any detail (we don’t use splunk). Just noticed a week ago when we were looking for alternatives for getting logs from fargate containers. But I’d suspect things getting dropped. Cloud to on-prem for real time logging is a bad idea in general unless you can manage the uptime and bandwidth of cloud providers

Maciek Strömich

I guess that even our office connectivity which is ~1Gbps would be a problem in such a setup

Maciek Strömich

also awslogs can stream to splunk directly afair

joshmyers

the awslogs agent?

joshmyers

That is news to me

joshmyers

This particular engagement is more complex because Splunk is on-prem

Steven

Splunk was added to fargate logger about 2 weeks ago. Not sure when it was added to awslogs agent.

joshmyers

I can’t find any info on awslogs agent and Splunk

Steven

awslogs is specific for logging to CloudWatch. Was surprised when @Maciek Strömich said it could log to splunk. I can’t find anything either. Just the standard stream from CloudWatch to splunk via lambda, etc

Maciek Strömich

ah. it’s not awslogs but docker logging driver that logs to splunk

Maciek Strömich

my bad

Maciek Strömich

i thought that defining logging in Dockerrun.aws.json configures awslogs to ship logs to any of the available loggers

2019-05-13

Bogdan

Anyone here used https://aws.amazon.com/solutions/centralized-logging/? I’m considering it, but at the same time hesitating due to costs (their cheapest cluster starts from 35 USD/day - https://docs.aws.amazon.com/solutions/latest/centralized-logging/considerations.html#custom-sizing) as well as complexity (logs are first collected in CW Logs and only then get to ES)

Design Considerations - Centralized Logging on AWS

Regional deployment considerations

Bogdan

I’d much rather prefer them being sent directly to an ES cluster via an Interface VPC Endpoint of course

Tim Malone

Haven’t used that solution, but most AWS-vended logs end up in either CloudWatch or S3 (there’s no native ability to send to ES) so unfortunately there’s not much way around the complexity. For logs on instance, though, I would recommend something like Filebeat rather than going via CW

Maciek Strömich

We use kinesis firehose to send logs directly to es

Maciek Strömich

And simple lambda to send the ones that end up in cloud watch logs

Maciek Strömich

Much simpler than what AWS proposed in this doc

2019-05-11

Maciek Strömich

how do you set --dns-opt in AWS ECS? it’s not available via the dns settings in the ecs agent. i know that I can update resolv.conf via a custom entrypoint.sh script, but I wonder if there’s a better/easier way

2019-05-10

anyone else have issues with route53 not resolving dns for you lately?

Alex Siegman

how “lately” do you mean? it’s been fine for us and we rely on it heavily for inter-service connections

like the past week

been having intermittent issues where dns doesnt get resolved for a few minutes

Alex Siegman

We haven’t noticed anything, but, they are reporting a problem right now: https://status.aws.amazon.com/

Alex Siegman

But that only affects modifications/new record sets, we don’t change our entries often

yeah

ok

i was just gonna say, today i have a cname set up for 45 minutes now and its still not resolving

Alex Siegman

looks like that above issue might be why

yeah thanks man, i was just assuming it had to do with the other intermittent issues ive been seeing this past week

Lee Skillen

What’s the domain or TLD at least? It might not be AWS. We occasionally have issues with the io domain because the root name servers can be flakey. :)

com @Lee Skillen

Maciek Strömich

If anyone started to experience issues with MySQL RDS it’s because of today’s route53 outage

Maciek Strömich

because how connection is being established with mysql you can see a lot of

| 23653344 | unauthenticated user | ip:port | NULL | Connect | NULL | login | NULL |

in show full processlist;

Maciek Strömich

you can set skip_name_resolve to 1 in your parameter group to fix the issue

2
Maciek Strömich

sadly it’s a partial fix ;-/
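
A sketch of that parameter group change with boto3 (the group name is a placeholder; skip_name_resolve is a static parameter, so it only takes effect after a reboot):

```python
import boto3

rds = boto3.client("rds")

rds.modify_db_parameter_group(
    DBParameterGroupName="my-mysql-params",  # placeholder
    Parameters=[{
        "ParameterName": "skip_name_resolve",
        "ParameterValue": "1",
        # Static parameter: must be "pending-reboot", not "immediate".
        "ApplyMethod": "pending-reboot",
    }],
)
```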

2019-05-09

vishnu.shukla

Hi, can anyone help, why the deployment is failing

vishnu.shukla

here are the details of what i can see: “Invalid action configuration The action failed because either the artifact or the Amazon S3 bucket could not be found. Name of artifact bucket: codepipeline-eu-central-1-516857284380. Verify that this bucket exists. If it exists, check the life cycle policy, then try releasing a change.”

vishnu.shukla

but the bucket exists, and there is no lifecycle policy either

aknysh

@vishnu.shukla without seeing the code, it’s difficult to say anything. Make sure you configured the input and output artifacts for all stages correctly (the output from one stage should go to the input to the next stage), for example https://github.com/cloudposse/terraform-aws-cicd/blob/master/main.tf#L233

cloudposse/terraform-aws-cicd

Terraform Module for CI/CD with AWS Code Pipeline and Code Build - cloudposse/terraform-aws-cicd

aknysh

also make sure you setup all the required permissions to access the bucket https://github.com/cloudposse/terraform-aws-cicd/blob/master/main.tf#L101

cloudposse/terraform-aws-cicd

Terraform Module for CI/CD with AWS Code Pipeline and Code Build - cloudposse/terraform-aws-cicd

vishnu.shukla

Sure Aknysh, thanks a lot

2019-05-08

oscarsullivan_old

AWS London today.. anyone attending?

oscarsullivan_old

Good array of breakout sessions available

@Erik Osterman it can’t be the cluster if I do an nslookup on the hostname and it doesn’t resolve the ip addresses. At that point is it a route 53 issue? (Happens intermittently for a few minutes)

Erik Osterman

is it on a delegated zone?

Erik Osterman

I’ve had the problem where one of the NS servers in the delegation was wrong

Erik Osterman

so in a round robin fashion some requests would fail

Erik Osterman

same goes for the TLD

Erik Osterman

if the nameservers are off

ah no i just double checked them. this intermittent issue has been happening only for a week, but ive had the same route53 record for over a year

Erik Osterman

is it only from your office or home?

Erik Osterman

e.g. maybe switching your own NS to 1.1.1.1 may help

no, multiple customers

also we use uptimerobot and that will report some downtime too when it happens

for very brief period of time

this might help, when I have that brief period of downtime, it doesn’t resolve the ip addresses during nslookup:

$ nslookup blah.example.com
Server:		192.168.1.1
Address:	192.168.1.1#53

Non-authoritative answer:
blah.example.com	canonical name = app.example.com.

healthy:

$ nslookup blah.example.com
Server:		192.168.1.1
Address:	192.168.1.1#53

Non-authoritative answer:
blah.example.com	canonical name = app.example.com.
Name:	app.example.com
Address: 54.191.49.21
Name:	app.example.com
Address: 54.203.171.148
Name:	app.example.com
Address: 54.212.199.41

Also, it only started happening since Saturday, but it’s happened 3 times now

But yes on k8s and I don’t see anything out of the ordinary in the kube-dns logs

2019-05-07

AgustínGonzalezNicolini

I’m told by one of our devs that you should see if there is a way for your code to compile and execute code, because lambda is simply a code container.

AgustínGonzalezNicolini

the alternative is to find a way to compile the incoming code, get the compiled file (a .jar or .zip file, for example), upload it to a temporary lambda, and then execute that lambda

rohit

i did not find similar usecases online

anyone ever get this error DNS_PROBE_FINISHED_NXDOMAIN intermittently with route 53

accessing a web app

Erik Osterman

Is this under k8s?

Erik Osterman

Check the logs for kube-dns. Had lots of problems like this in the past, but that was way back on Kubernetes 1.7 or earlier

2019-05-06

Alex Siegman

I think just recently I saw a note about a better tool/ui to look at your AWS Parameter Store Secrets, but for the life of me I can’t find it. The searching is terrible in the AWS console. All I really want is fuzzy search and proper pagination on searches >.< Anyone have something they know about?

Harry H

Was it this one? https://github.com/smblee/parameter-store-manager

Saw this a few days ago

smblee/parameter-store-manager

A cross platform desktop application that provides an UI to easily view and manage AWS SSM parameters. - smblee/parameter-store-manager

Parameter Store is under AWS Systems Manager (or the EC2 console - scroll down the left panel)

Alex Siegman

@Harry H that was it i believe. Thanks!

rohit

anyone here used lambda as a compiler ?

rohit

I want to send the code entered in a code editor inside a web application to lambda, compile it, and get the results back

rohit

let me know if i am crazy

Tim Malone

i’m sure someone’s done it! ppl have done all sorts of awesomely crazy things in lambda

Tim Malone

sounds like a fun project. you can shell out to anything so i suspect if you packaged a compiler with your code (or downloaded it from s3 during function invocation) then you could run it
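
A hedged sketch of that shape (it assumes a compiler binary is bundled in the deployment package or a layer at /opt/bin/gcc, which is not there by default): write the submitted source to /tmp, the only writable path in Lambda, compile, run, and return the output.

```python
import os
import subprocess
import tempfile

def handler(event, context):
    workdir = tempfile.mkdtemp(dir="/tmp")  # only writable filesystem in Lambda
    src = os.path.join(workdir, "main.c")
    binary = os.path.join(workdir, "a.out")

    with open(src, "w") as f:
        f.write(event["code"])

    # /opt/bin/gcc is an assumption: you'd bundle a compiler via a layer.
    compile_result = subprocess.run(
        ["/opt/bin/gcc", src, "-o", binary],
        capture_output=True, text=True, timeout=10,
    )
    if compile_result.returncode != 0:
        return {"ok": False, "stderr": compile_result.stderr}

    run_result = subprocess.run([binary], capture_output=True, text=True, timeout=5)
    return {"ok": True, "stdout": run_result.stdout, "stderr": run_result.stderr}
```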

rohit

I want to build some sort of endpoint which accepts code sent from my web app and compiles

rohit

i am trying to figure out how companies like hackerrank, http://repl.it do it

2019-05-02

Any way to use presigned URL uploads and enforce tagging?

Is there any way to issue a presigned URL to a client to upload a file to S3, and ensure that the uploaded file has certain tags? Using the Python SDK here as an example, this generates a URL as de…

To clarify, I need to use {'tagging': '...'} with the verbose XML tag set syntax for both fields and conditions, and that seems to work as required.

Golang SDK (I know it much better) says:

// The tag-set for the object. The tag-set must be encoded as URL Query parameters.
// (For example, "Key1=Value1")
Tagging *string `location:"header" locationName:"x-amz-tagging" type:"string"`
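
For the Python side, a sketch of the workaround described above: the tagging field and the matching condition must both carry the full XML tag set (bucket, key, and tags are placeholders).

```python
import boto3

s3 = boto3.client("s3")

# The same XML document must appear in both the fields and the conditions.
tagging = ("<Tagging><TagSet>"
           "<Tag><Key>team</Key><Value>web</Value></Tag>"
           "</TagSet></Tagging>")

post = s3.generate_presigned_post(
    Bucket="my-bucket",                 # placeholder
    Key="uploads/photo.png",            # placeholder
    Fields={"tagging": tagging},
    Conditions=[{"tagging": tagging}],
    ExpiresIn=3600,
)
# post["url"] and post["fields"] are then used in the browser's form POST.
```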

rohit

@ thanks. Unfortunately, i did not have any luck

2019-05-01

aknysh

Apple spends more than $30m a month on AWS, though this number is falling. Slack’s AWS spend is a contractual $50M/year minimum.

Lee Skillen

… And I thought our bill of ~$20k/year was terrifying

Erik Osterman
Lyft plans to spend $300 million on Amazon Web Services through 2021

Lyft has signed up to pay Amazon Web Services at least $80 million per year for the next three years, totaling at least $300 million.

Lee Skillen

Better to sell shovels than mine gold during a gold rush, eh.

rohit

Does anyone know if it is possible to add tags to S3 object (node sdk) when using presigned url ?

rohit

maybe this is not the right place to ask this question

rohit

but i just did

1
Erik Osterman

Hrmmmmm good question. Don’t think it’s possible natively, but anything is possible with lambdas

rohit

that’s true but that’s not something i want to do in my scenario

Erik Osterman

Yea, I wouldn’t want to either

rohit

let me explain my scenario so that you can better understand

rohit

when i want to upload something from my app, i make a request to my backend service (nodejs) which returns a presigned url, then i use that in the frontend to directly upload the object from the browser

rohit

it says

Note: Not all operation parameters are supported when using pre-signed URLs. Certain parameters, such as SSECustomerKey, ACL, Expires, ContentLength, or Tagging must be provided as headers when sending a request.
rohit

so i tried sending the tags in the ajax request with the presigned url and i get an invalid tag error
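
A sketch of what the note above implies for a presigned PUT (bucket, key, and tags are placeholders): Tagging goes into the signed params, and the browser/ajax request must then send the identical x-amz-tagging header, otherwise S3 rejects the request.

```python
import boto3

s3 = boto3.client("s3")

tags = "visibility=private"  # placeholder tag set, URL-query encoded

url = s3.generate_presigned_url(
    "put_object",
    Params={"Bucket": "my-bucket", "Key": "media/clip.mp4", "Tagging": tags},
    ExpiresIn=900,
)

# The uploader must send exactly the same tag set as a header:
#   PUT <url> with header "x-amz-tagging: visibility=private"
# Omitting or changing the header invalidates the signature, which surfaces
# as "invalid tag" / signature mismatch errors.
```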

2019-04-30

Lee Skillen

Yup, you can use an Origin Access Identity on the CloudFront distribution which has access to the S3 bucket via a policy. If you need your own auth at the CDN level you can implement it with a small Lambda @ Edge too. :)

1
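
A sketch of the bucket-policy half of that setup (bucket name and OAI ID are placeholders): only the CloudFront Origin Access Identity may read objects, so the bucket itself stays private.

```python
import json

import boto3

s3 = boto3.client("s3")

policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        # The OAI principal; E2EXAMPLE is a placeholder identity ID.
        "Principal": {"AWS": "arn:aws:iam::cloudfront:user/"
                             "CloudFront Origin Access Identity E2EXAMPLE"},
        "Action": "s3:GetObject",
        "Resource": "arn:aws:s3:::my-media-bucket/*",  # placeholder
    }],
}

s3.put_bucket_policy(Bucket="my-media-bucket", Policy=json.dumps(policy))
```
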
rohit

@Lee Skillen thanks for answering my question

rohit

what would be the advantage of using an Origin Access Identity over presigned URLs/cookies?

Lee Skillen

It’s transparent to those using the CDN for a start, and it means you don’t need to generate presigned URLs upfront. The downside is that you might still be interested in protecting the content, so need to think about auth in a different way.

Lee Skillen

What kind of content is it?

rohit

it is all media files

Lee Skillen

Do you need auth? What if someone obtains a URL and distributes it to others?

aknysh

if all content of the bucket is not a secret, then having private or public bucket does not make any difference since your users will see all the files via CloudFront. With a private bucket, use origin access identity as @Lee Skillen mentioned

aknysh
cloudposse/terraform-aws-cloudfront-s3-cdn

Terraform module to easily provision CloudFront CDN backed by an S3 origin - cloudposse/terraform-aws-cloudfront-s3-cdn

1
rohit

we want to make the media files private, so we’re planning to use a private bucket

Lee Skillen

One small difference is that a public bucket can be listed, which isn’t necessarily true with CloudFront. It may make a difference if you don’t care whether files are public, but also don’t want easy access to all of the other files. It wouldn’t make sense for me, but I have seen people do this with URLs that are impossible to guess upfront (e.g. with randomised prefixes or something else in the URL). But if you go to that extent, I would just throw an auth method in there via a Lambda @ Edge. :)

1
aknysh


I have seen people do this with URLs that are impossible to guess upfront

aknysh

that’s security by obscurity, a bad idea; it never works

Lee Skillen

Depends on your goal, but I broadly agree :)

rohit

i haven’t used cloudfront + Lambda@Edge, did you use any auth method with Lambda@Edge?

Is there a way to automatically update Amazon AMIs on a launch template to the latest, instead of having to rerun terraform against it on a monthly basis?

Lee Skillen

@rohit It depends how fancy you want to get. How are users authenticated before they access the CDN? I assume you wouldn’t want them to have to pass a username/password via basic auth if they were already authed before? In fact, are they authed at all or “anonymous”? Is the CDN on a subdomain of your main app website (if any)? You said media before, is it for static assets, downloads or streaming? Lots of questions and possibilities. :)

2019-04-29

chrism

When you accidentally spin up a k8s cluster using T instances and the CPU burst credits run out, leaving 1 node running at ~20% CPU

Lee Skillen

All kinds of awesome: https://infrastructure.aws/

AWS Global Cloud Infrastructure

The AWS Global infrastructure is built around Regions and Availability Zones (AZs). AWS Regions provide multiple, physically separated and isolated Availability Zones which are connected with low latency, high throughput, and highly redundant networking.

4
rohit

Is there a way to get objects from AWS S3 private bucket using cloudfront without presigned URL’s ?

2019-04-25

vishnu.shukla
vishnu.shukla

can someone help me one this.

Exequiel Barrirero

Just in case anyone find it useful.

AWS Management Console down without region #aws #outage (https://status.aws.amazon.com/) • https://us-west-2.console.aws.amazon.com/console works • https://us-east-2.console.aws.amazon.com/console works • https://us-east-1.console.aws.amazon.com/console does NOT work (https://console.aws.amazon.com/console/home)

So you can basically bypass the error by specifying the console region in the access URL. To hit a specific service in us-east-1 you can use the service URL, eg: <https

2019-04-23

nutellinoit

Hello, anyone using ec2 spot fleet plugin with jenkins?

2019-04-18

vishnu.shukla

hey all, I am stuck at AWS code build with ruby framework, seeking help on this

Tim Malone

post what you’re stuck on - someone might be able to help

vishnu.shukla
vishnu.shukla

there must be issue with the buildspec.yml file or image

vishnu.shukla

i tried with many other option as well

vishnu.shukla

everytime i get different error

xluffy


exit status is 127

xluffy

It means command not found

vishnu.shukla

it’s like one of the errors

xluffy

see above line, doesn’t have sudo command

vishnu.shukla

does AWS CodeBuild provide a Docker image with MySQL installed?

oscarsullivan_old

I attended a security webinar from AWS. here are my notes

https://github.com/osulli/security-strategies

osulli/security-strategies

Notes from AWS’ Security Strategies webinar by Tim Rains - osulli/security-strategies

vishnu.shukla

@xluffy do you have any buildspec.yml file for Ruby to use on AWS code pipeline.

vishnu.shukla
vishnu.shukla

here is the latest error i am getting

xluffy

see the error, this is another error: this container doesn’t have a JS runtime (you need a container with a JS runtime)

oscarsullivan_old

No JS installed or available in PATH

vishnu.shukla

the provided AWS Docker image doesn’t have it; I tried installing it but no success. Also, AWS CodeBuild has only a single image for Ruby

2019-04-17

mumoshu
03:00:17 PM

@mumoshu has joined the channel

Does anyone know of a good way to tag shared resources for billing reporting/monitoring purposes? For example, if I have an ALB that’s in front of two web apps - W1 and W2, can I have a billing report that includes 1/2 of the ALB cost with W1 and the rest with W2?

I’m FinOps, trust me, you can’t do that

we could try with a lot of custom tooling, but it will not be accurate

you can spread your cost by tags, but the scope will be the whole ALB, not only a part

Thanks

Erik Osterman

@ are you using Kubernetes by any chance?

No, not using Kubernetes

2019-04-16

Alex Siegman

This just popped in to my email: https://aws.amazon.com/app-mesh/

I’m wondering if this could also integrate with, say, GKE for multi-cloud application networking. I also wonder how it integrates with EKS, since I’ve seen Envoy used primarily as an app mesh for K8s

AWS App Mesh - Application-level networking for all your services - Amazon Web Services

AWS App Mesh is a service mesh that allows you to easily monitor and control communications across services.

Pablo Costa

App Mesh is an AWS version of Istio. Look for the Istio service on GKE

anyone know if we can use different encryption keys for a single database storage in an RDS instance? (multi-tenant rds instance)

Abel Luck

interested to know this as well

chrism
godaddy/kubernetes-external-secrets

Kubernetes external secrets. Contribute to godaddy/kubernetes-external-secrets development by creating an account on GitHub.

Pablo Costa

Nice project, but the problem is that AWS Secrets Manager is quite expensive: https://aws.amazon.com/secrets-manager/pricing/. Using chamber with AWS Systems Manager Parameter Store has practically no cost: https://aws.amazon.com/systems-manager/pricing/

AWS Systems Manager Pricing – Amazon Web Services (AWS)

There is no additional charge for AWS Systems Manager. You only pay for AWS resources created or aggregated by AWS Systems Manager.

Erik Osterman

I think @mumoshu was first with his operator https://github.com/mumoshu/aws-secret-operator

mumoshu/aws-secret-operator

A Kubernetes operator that automatically creates and updates Kubernetes secrets according to what are stored in AWS Secrets Manager. - mumoshu/aws-secret-operator

Erik Osterman

chrism

$0.40 per secret per month jfc, didn’t realise it was that much.

Erik Osterman

Yea it’s odd that they charge so much for it. Don’t get it.

2019-04-10

raehik

Hey all, I’m stuck in the world of IAM and I had a thought about permissions management. It’s nice to split permissions into a user-role structure for management and auditing, but the way role assumption is done feels awkward, and users have to “know” what roles they have available to them. Does AWS provide a method to find out all the assumable roles for a given user?

Alex Siegman

Not natively that I’ve run in to. At a previous gig we used OneLogin I think it was, and the roles you had access to were based on groups from our corp AD, and you saw a list of them when you signed in. That’s the closest I’ve seen, and far from a AWS-based solution. I’d love to be wrong though, but my guess is you’d have to engineer something to provide that info.

raehik

Cool, thanks for the info Alex. I thought as much because of the somewhat arbitrary way role assumption permissions are granted. If using AWS account federation I feel it could be automated with some API calls to look for sts:AssumeRole policies & wondered if it had been done before - probably different for AD and SAML etc. Cheers for the response!

is the fargate cli the closest thing in awsland to https://cloud.google.com/run/?

Cloud Run  |  Google Cloud

Run stateless HTTP containers on a fully managed environment or in your own GKE cluster.

Erik Osterman

Probably

Erik Osterman

btw, there are (2) clis for AWS

Erik Osterman

Also, they’ve started developing this one again: https://github.com/jpignata/fargate

jpignata/fargate

CLI for AWS Fargate. Contribute to jpignata/fargate development by creating an account on GitHub.

Erik Osterman

new maintainer

Igor Rodionov

Does anyone knows how RDS encryption storage works with KMS key that have rotation enabled?

aknysh
Rotating Customer Master Keys - AWS Key Management Service

Learn about automatic and manual rotation of your customer managed customer master keys.

2019-04-09

chrism

@aknysh what are the running costs like? we currently use http://imageresizing.net on iis (m3 mediums) x3 with cloudfront over it as we’re loading images from s3 etal

aknysh


As of the date of publication, the estimated cost for running the Serverless Image Handler for 1 million images processed, 15 GB storage and 50 GB data transfer, with default settings in the US East (N. Virginia) Region is as shown in the table below. This includes estimated charges for Amazon API Gateway, AWS Lambda, Amazon CloudFront, and Amazon S3 storage

aknysh
AWS Service	Total Cost
Amazon API Gateway	$3.50
AWS Lambda	$3.10
Amazon CloudFront	$6.00
Amazon S3	$0.23
chrism

ta; i’d clicked Architecture, not realising the Overview was a page

chrism

Anyone experienced * module.elasticache_redis.module.label.data.null_data_source.tags_as_list_of_maps: data.null_data_source.tags_as_list_of_maps: value of 'count' cannot be computed recently? It started just now; I ran the module a couple of weeks ago without an issue. Counts from beyond the grave

chrism
Using interpolated values in strings · Issue #44 · cloudposse/terraform-null-label

Hi there, I&#39;ve traced down a problem from one of your other modules to the way that tags are consumed when you use interpolated values in tags. I&#39;m fairly sure that it&#39;s a terraform pro…

chrism

is this only an issue because the enabled flag sets a count on a resource that’s only ever 1, but terraform thinks it has an enumerable to work with that it can’t?

chrism

the answer to that is no, lol; it’s in the label code

chrism

tbh the elasticache module only needs a tags + id input; the additional dependency on label seems overkill; more injection > less dependencies

Erik Osterman

Dependency on labels is central to our entire terraform strategy. It ensures composability of modules and consistency. Humans are just not good, nor consistent about naming things. If we are not consistent about it’s usage then we would be breaking that contract. :-)

chrism

aye i mean that if it only needs id + tags then module.label.id module.label.tags to set the variable input seems less fussy.

chrism

of course, the DNS part is of debatable use once you enable TLS, as the TLS isn’t configurable

2019-04-08

Alex Siegman

So looking at the reference architectures repository, there seems to be two accounts that seem to overlap:

root
The "root" (parent, billing) account creates all child accounts and is where users login.

Of note here is the “where users login.” There’s also an “identity” account:

identity
The "identity" account is where to add users and delegate access to the other accounts

I’m a bit unsure how this would look in reality. I’m not sure how you’d “login” to the root account, if your user is over in a separate account. Am I missing something? I thought in AWS your starting point always had to be wherever your IAM user existed, and from there you can assume roles in whatever fashion is needed.

Erik Osterman

We provide a stub of an identity account

Erik Osterman

but we currently provision all our customers using the root account as the identity account.

Erik Osterman

… in other words, we don’t have a configuration for the “identity” account besides the creation of it.

Alex Siegman

Okay, I’m curious what the future idea of it is then.

Alex Siegman

Like, would I put dev accounts there, and they’d “log in” there and assume roles from there?

that’s how I’d see it yeah. that would mean the default org account role assumption wouldn’t work, and require more specific setup. which isn’t strictly a bad thing, and would save having to -undo- the default setup….

Erik Osterman

yea, rather than stick user accounts or (SSO integrations) in the “root” (payer) account, we’d just provision it in the identity account instead.

Erik Osterman

The difference is just there’s a tad bit more effort in initial setup.

Does anyone have any experience with hosting images and videos that are optimal for each device? Is there an AWS service or an approach that’s better than generating 10 versions of an image and using S3/Cloudfront?

aknysh

we recently deployed https://docs.aws.amazon.com/solutions/latest/serverless-image-handler/welcome.html. It uses http://www.thumbor.org/ to change image size/format/filter on the fly. Behind a CDN works ok and fast enough. It’s CloudFormation, not TF though

Serverless Image Handler - Serverless Image Handler

How to deploy the Serverless Image Handler. AWS CloudFormation templates automate the deployment.

thumbor - open-source smart on-demand image cropping, resizing and filters

Thumbor is a smart imaging service. It enables on-demand crop, resizing and flipping of images. It features a very smart detection of important points in the image for better cropping and resizing, using state-of-the-art face and feature detection algorithms (more on that in Detection Algorithms).

Thank you

Where can I find the CloudFormation template(s)?

aknysh
AWS CloudFormation Template - Serverless Image Handler

AWS CloudFormation template that deploys Serverless Image Handler on the AWS Cloud.

Sorry & thanks

2019-04-05

chrism

There are numerous things in IaC where you start to think Terraform isn’t helping. Create an aws_acm_certificate, then add an extra SAN: Terraform sits there trying to destroy the old one… but the new one’s only just been created, so everything’s attached to it; it ain’t letting go

chrism

They didn’t code in a check on the API response that it’s in use, so it just retries to death

chrism
Deleting ACM certificate fails with ResourceInUseException by a deleted ELB. · Issue #3866 · terraform-providers/terraform-provider-aws

We are seeing an issue with using acm certificates during terraform destroy where the certificate is still seen as in use by a load balancer that was just deleted. Due to eventually consistent apis…
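A common mitigation is to flip the ordering so the replacement cert exists before the old one is destroyed; a minimal sketch (it helps with the SAN-change case, though the eventual-consistency bug in the linked issue can still bite on destroy):

resource "aws_acm_certificate" "cert" {
  domain_name               = "example.com"
  subject_alternative_names = ["www.example.com"]
  validation_method         = "DNS"

  # create the new cert (and let consumers re-point to it)
  # before Terraform tries to delete the old one
  lifecycle {
    create_before_destroy = true
  }
}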

chrism

Enjoy the weekend chaps

1
1
aknysh


There’s numerous things in IAC where you start to think terraform isnt helping
yeah, the more the current crop of tools evolve, the more I miss using my old circa ‘09 framework that just wrapped the java cli tools……

“the infra is code because we -wrote- code”

Alex Siegman

I guess the idea is that not everybody has to invent that wheel anymore? I dunno. Certainly understand the feeling though

yeah - it’s just that they write the opinions to be so frameworked…er frameworks to be so opinionated… that if you had a worldview that doesnt fit the framework….

thoughts on aurora Postgres? im aware of all the improvements its supposed to give for roughly the same cost over vanilla RDS Postgres but wondering if anyones used it in production and what their thoughts are

aknysh

We use it all the time, it’s very good

aknysh

Synchronous replication with milliseconds latency

aknysh

Many read replicas

nice

and costs? @aknysh

more expensive/same/cheaper than vanilla RDS?

aknysh
When Should I Use Amazon Aurora and When Should I use RDS MySQL?

Now that Database-as-a-service (DBaaS) is in high demand, there is one question regarding AWS services that cannot always be answered easily : When should I use Aurora and when RDS MySQL? DBaaS clo…

aknysh

For production where you need bigger instances, the cost is relatively the same. With plain RDS you can get smaller and cheaper instances, but those are only really good for testing and maybe staging

are you using it for postgres or mysql

aknysh

Both

we just talked to a solutions architect who said we can just restore the rds postgres snapshot to aurora and it will just work

aknysh

Yes

did you find that to be true or did aurora change things

nice

aknysh

Aurora changes things mostly in user and permissions management, and some other minor things

mmm

i am using alot of roles/databases in my rds instance

multi-tenant so i wonder if that will change things

aknysh

E.g. even the master user you create is not the admin in the cluster

thanks @aknysh!

i will test it out

aurora’s a bit more fun to codify setup, and to do instance up/downgrades (ie, you have to do each of them yourself. tip: just create new nodes in the cluster at the scale you want, then failover to them…)

if you make the mistake of scaling your write node, it will create an outage because aurora dont care.

oh, and I have a broke psql aurora node right now, it borked in prod, didnt failover. failed the cluster over, the node is still borked. can’t add support to the account because no one can get into the vault in the office where the root acct’s mfa key is.

(effing stupid that whole iam-users-can’t-change-support)

1
aknysh

It’s even worse with regular Postgres and MySQL :)

regular RDS used to be a dream to update/scale with multi-AZ turned on: punch it and walk away, it’d update the slave, then failover and update the master.

let it trigger in the maint window if you want

2019-04-04

oscarsullivan_old

If your EBS backed EC2 instances are SHUTDOWN (not terminated) do you still pay for the EC2? I understand you’d still pay for EBS.

oscarsullivan_old

EC2 instances accrue charges only while they're running

oscarsullivan_old

I think I read this before

oscarsullivan_old

And made me wonder if they meant running as in OS RUNNING / POWER ON

oscarsullivan_old

or running as in… it is created and say SHUTDOWN

When an EC2 instance is stopped you only pay for the EBS volumes you are using.

oscarsullivan_old

Thanks guys, that’s perfect then. No reason not to switch off the EC2s over night!

oscarsullivan_old

… The IPs wouldn’t change, right?

They will when you’re not using elastic ip’s

oscarsullivan_old

Hmmm, so on spin-up would need to run terraform to update the R53 records. Alright that will take a bit more effort

chrism

when teleport provides a full script with makefile for terraform, but you wish it was just a provider because you only need an 8th of what they’ve shoved in it https://github.com/gravitational/teleport/tree/master/examples/aws/terraform

gravitational/teleport

Privileged access management for elastic infrastructure. - gravitational/teleport

chrism

The UI stuff’s nice for teleport but I think it’s going to give me a stroke. We took SSH proxying via a gateway and added 300 new things that can break

chrism

default size for the SSH proxy in that thing is an m4.large: 8 GB of RAM, dual-core

chrism

“sets up the bastion using route53 and registers a cert with letsencrypt” … so it sticks the route to your bastion into public CT logs (using wildcards is better) and there’s no config in that thing to restrict traffic to the bastion, sooo we’re doing public ssh now

chrism

all just their example of course; you can do what you like in reality. Wish it was just an ansible module (the only one around hasn’t been touched in 11 months); at least it doesn’t cripple normal ssh

chrism

so you have a backup when it facepalms

chrism

@Erik Osterman with the cloudposse bastion what’s the deal with users. how can it say which user logged in if the volume mounts against 1 user (just going off the readme)

Erik Osterman

@chrism user management is outside of the scope of the project. There are dozens of ways to provision users with the #bastion

chrism

Yeah I mean if i have 9 users already there by other means how does the bastion map to those 9 users if its using volumes?

Erik Osterman

A bastion is a jump box

chrism

I know

Erik Osterman

Users should not be hanging out on it

Erik Osterman

:-)

chrism

lol no, but if I create jim, jane, alice with their own auth pubkeys do I have to map the bastion for all 3 users

Erik Osterman

See GitHub authorized keys project for inspiration

Erik Osterman

There’s a gist in the GitHub issues to how someone else did it with cloud formation

chrism

in comparison; if you setup teleport you add N users; its not mapping 1 auth key; it knows jims keys are x etal

Erik Osterman

Teleport handles SSO

Erik Osterman

Teleport is what we use :-)

chrism

Aye i’m sorta leaning that way at the moment; just not keen on the fat

chrism

feels like using a juggernaut to deliver a box of matches

Erik Osterman

We have open sourced our implementation of teleport with kops

Erik Osterman

Oh yes! It’s totally a hack job to use anything else but teleport

Erik Osterman

Plus with teleport you get easy YouTube style replays

chrism

it has lots of selling points; other than having to rejig the world

chrism

I assume you can do tedious stuff like create groups with it and say teamx can only access teamx’s machines

Erik Osterman

Yup

Erik Osterman

You can have groups

Erik Osterman

Teleport is beautiful. Inside and out.

Erik Osterman

It is a beast to setup the first time. All in all we have spent probably more than 2 months of man hours on it .

chrism

Wonder how many headaches getting our “the world is cisco” folks heads around that will be

chrism

I just used https://github.com/woohgit/ansible-role-teleport up front; slightly put off for the vsphere land as we use RoyalTS

1
Erik Osterman

Are you using dynamodb, IAM roles, s3 backend storage and SAML?

chrism

teleport would solve lots of the niggly “everything everywhere needs auditing” alongside the “if the grunts dont have ssh access to things I’ll have to debug everything”

Erik Osterman

For teleport auth and node?

chrism

no; I was using SSH proxy commands in a shell script and a bastion host as the lord-god-aws defined on the mount

chrism

The ansible script just uses tokens

chrism

I’ve setup a proxy/auth on the existing throwaway bastion to test it

chrism

and shoving a node on another box

chrism

Their repo’s terraform setup for aws consists of graphana/influx monitoring / dynamo/s3

chrism

our machines are supposed to be immutable; so the only reason you’d ssh in is to grab a log thats not already being exported or to diagnose an issue before burning the machine to the ground

chrism

We use RKE rather than KOPS

chrism

which will be more pleasant when ranchers finished its v2 terraform provider that seems to wrap rke + the cli

Erik Osterman

I am just implying that to do teleport the “right way” will most likely be a lot more work. While getting a POC up takes a day or two. :-)

chrism

yeah; tbh I just wanted to see how easy it was to throw up on the minimum settings. Nothing comes free

chrism

the base setup using tokens isn’t too bad; realistically, as a third of our setup isn’t kubernetes and everything sits in ASGs, long-lived tokens are probably more necessity than nicety

chrism

The rancher hardening guides if they’re of interest to anyone https://releases.rancher.com/documents/security/latest/Rancher_Hardening_Guide.pdf

1
chrism

sigh; im sold on teleport. Now to read everything

chrism
skyscrapers/terraform-teleport

Terraform module to provision Teleport related resources - skyscrapers/terraform-teleport

Erik Osterman
cloudposse/terraform-aws-teleport-storage

Gravitational Teleport backing services (S3, DynamoDB) - cloudposse/terraform-aws-teleport-storage

Erik Osterman

We also have the Helmfiles

phanindra bolla

How do I deploy an AWS ASG (EC2) through Terraform as a blue/green deployment? I am thinking about a few different methods:

  1. Create a launch template which updates/creates a new ASG and a new ALB/ELB, then switch the R53 domain to the new one
  2. Create a new launch template, ASG and ALB, and update the existing R53 record to target the new ALB

please suggest the best way; one possible shape is sketched below
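A sketch of a related variant of option 2 that keeps the existing ALB (names, sizes and the var.* inputs are placeholders, not a drop-in): key the ASG name off the launch template version so any template change forces a replacement ASG, register it in the existing target group so R53 never moves, and let create_before_destroy stand the new group up before the old one is torn down.

resource "aws_autoscaling_group" "app" {
  # changing the launch template bumps latest_version, which changes
  # the name and therefore forces a brand new ASG
  name_prefix = "app-${aws_launch_template.app.latest_version}-"

  launch_template {
    id      = "${aws_launch_template.app.id}"
    version = "${aws_launch_template.app.latest_version}"
  }

  min_size            = 2
  max_size            = 4
  vpc_zone_identifier = "${var.private_subnet_ids}"

  # existing ALB target group, so DNS stays put
  target_group_arns = ["${var.target_group_arn}"]

  # wait for the new instances to show healthy in the attached
  # load balancer before the old ASG is destroyed
  min_elb_capacity = 2

  lifecycle {
    create_before_destroy = true
  }
}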

2019-04-03

oscarsullivan_old

If your EBS backed EC2 instances are SHUTDOWN (not terminated) do you still pay for the EC2? I understand you’d still pay for EBS.

oscarsullivan_old

I want to turn off EC2s at night for non-core environments, using Lambda.
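The scheduling half of that is small; a sketch (the aws_lambda_function.stop_instances it targets is assumed to exist elsewhere and do the actual tag-filtered StopInstances call):

resource "aws_cloudwatch_event_rule" "nightly_stop" {
  name                = "stop-non-core-ec2"
  # 19:00 UTC, Monday to Friday
  schedule_expression = "cron(0 19 ? * MON-FRI *)"
}

resource "aws_cloudwatch_event_target" "nightly_stop" {
  rule = "${aws_cloudwatch_event_rule.nightly_stop.name}"
  arn  = "${aws_lambda_function.stop_instances.arn}"
}

# let CloudWatch Events invoke the function
resource "aws_lambda_permission" "nightly_stop" {
  statement_id  = "AllowCloudWatchEvents"
  action        = "lambda:InvokeFunction"
  function_name = "${aws_lambda_function.stop_instances.function_name}"
  principal     = "events.amazonaws.com"
  source_arn    = "${aws_cloudwatch_event_rule.nightly_stop.arn}"
}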

Maxim Tishchenko

hello everyone, is there any way to invoke a Lambda function when a user is added to or removed from an AWS account?

aknysh

@Maxim Tishchenko you can log user creation/deletion events to CloudTrail https://docs.aws.amazon.com/IAM/latest/UserGuide/cloudtrail-integration.html

aknysh
Using AWS Lambda with AWS CloudTrail - AWS Lambda

How to set up and start using the AWS Lambda service.

aknysh

which will filter the required events

aknysh
Automate account creation, and resource provisioning using AWS Service Catalog, AWS Organizations, and AWS Lambda | Amazon Web Services

As an organization expands its use of AWS services, there is often a conversation about the need to create multiple AWS accounts to ensure separation of business processes or for security, compliance, and billing. Many of the customers we work with use separate AWS accounts for each business unit so they can meet the different […]

aknysh
alphagov/lambda-check-cloudtrail

Periodic Lambda function to alert when CloudTrail is not being delivered to an S3 bucket - alphagov/lambda-check-cloudtrail

1
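CloudWatch Events can also match those CloudTrail-delivered IAM calls directly, so no log parsing is needed. A sketch (assumes CloudTrail is already logging in the account and that the target Lambda exists; note IAM is a global service, so these events surface in us-east-1):

resource "aws_cloudwatch_event_rule" "iam_user_changes" {
  name = "iam-user-create-delete"

  event_pattern = <<PATTERN
{
  "source": ["aws.iam"],
  "detail-type": ["AWS API Call via CloudTrail"],
  "detail": {
    "eventSource": ["iam.amazonaws.com"],
    "eventName": ["CreateUser", "DeleteUser"]
  }
}
PATTERN
}

resource "aws_cloudwatch_event_target" "iam_user_changes" {
  rule = "${aws_cloudwatch_event_rule.iam_user_changes.name}"
  arn  = "${aws_lambda_function.on_user_change.arn}"
}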
Erik Osterman

@lvh

alphagov/lambda-check-cloudtrail

Periodic Lambda function to alert when CloudTrail is not being delivered to an S3 bucket - alphagov/lambda-check-cloudtrail

1
oscarsullivan_old

Big up UK govt dev team ^

lvh
08:14:25 PM

@lvh has joined the channel

2019-04-02

2019-04-01

2019-03-31

rohit

Is there any advantage in using S3 transfer acceleration if i am already using cloudfront to serve s3 files ?

Nikola Velkovski

I don;t think so.

Erik Osterman

probably more so if you’re uploading large files from around the world

Nikola Velkovski

The only improvement I would see in this case would be when there’s a cache miss and CF has to pull in the file from s3 ( the origin )

rohit

makes sense

2019-03-30

rohit

I don’t understand the concept behind why we need to enable backups in order to use read replicas for aws rds?

rohit

can anyone help me with this ?

Tim Malone

Turn off backups, on the replicas? I think that depends on the engine version you’re using. Not supported on MySQL 5.5 IIRC, but is on MySQL 5.6, for instance. Is that what you meant?

rohit

@Tim Malone I meant to ask - why do i have to enable backups in order to use read replicas ?

2019-03-28

oscarsullivan_old

Thanks! I couldn’t sleep last night because all I could think about was K8s vs fargate Vs not using either and instead orchestrating with ansible/ ecs/ lambda

1
Nikola Velkovski

If you think that way, then you can also do it with EC2 and docker commands in cloud-init as well, but the increase in complexity and tech debt will outgrow any standardized solution.

1
oscarsullivan_old

I think really it’ll just boil down to EKS vs Fargate.

1
Nikola Velkovski

Ansible, do you idempotence ?

oscarsullivan_old

here’s a pic from AWS just the other day

Abel Luck

i saw in the sweetops docs somewhere that if you create an account when adding to an org (versus creating it independently then importing it to the org) you can’t ever spin out that account if necessary

Abel Luck

is that still the case?

Abel Luck

cause the new reference architecture impl seems to auto-provision the sub-accounts rather than import existing ones

Tim Malone

you can spin it out of the org, but you’ll usually have to do some extra config first

oscarsullivan_old

Yeh it’s a real pain

oscarsullivan_old

I created a few too many org accounts

oscarsullivan_old

And I CBA to log in and configure them to be independent so I can detach

Tim Malone


When you create an account in an organization using the AWS Organizations console, API, or AWS CLI commands, all the information that is required of standalone accounts is not automatically collected. For each account that you want to make standalone, you must accept the AWS Customer Agreement, choose a support plan, provide and verify the required contact information, and provide a current payment method.
https://docs.aws.amazon.com/organizations/latest/userguide/orgs_manage_accounts_remove.html

Abel Luck

yea i’ve been trying to decide how finely to split stacks among accounts

Abel Luck

we have multiple deployments of our main stack for different customers, and each deployment needs its own dev/staging/prod

New – Advanced Request Routing for AWS Application Load Balancers | Amazon Web Services

AWS Application Load Balancers have been around since the summer of 2016! They support content-based routing, work well for serverless & container-based applications, and are highly scalable. Many AWS customers are using the existing host and path-based routing to power their HTTP and HTTPS applications, while also taking advantage of other ALB features such as […]

Erik Osterman
Amazon’s AWS Deep Learning Containers simplify AI app development

Amazon’s Deep Learning Containers support popular deep learning frameworks including Google’s TensorFlow and Apache MXNet.

2019-03-27

chrism

RKE (rancher’s version of KOPS) added a cluster.rkestate file output after it’s run… yeeeeey, more state to move around.

Erik Osterman

It doesn’t support remote state?

Erik Osterman

Kops has a state bucket

chrism

no; had to jimmy-rig s3 pull/push in

chrism

which is shit

chrism

but functional

chrism

lol I keep getting told off by the anti-swearing bot

Erik Osterman

Wow surprised that would be the case. Not very team friendly.

Erik Osterman

Checkout goofys

Erik Osterman

Also we have support for that in geodesic

Erik Osterman

Mount s3 as a filesystem

chrism

currently trying to decide if it’s best to packer-build an nginx AMI, then packer the configs in using that as a base, or sync the configs in via user script. Certainly know which would be quicker to update

chrism

goofys looks pretty neat; its always reassuring when somethings written in GO as I dont have to spend 20 minutes looking for the “wont work on x system” crap

Erik Osterman

Yea it’s a big qualifier for me

chrism

Never blindly apply CIS benchmark changes. Nothing like spending 2 hours wondering why your K8s deployment has broken, only to discover it disabled ipv4 forwarding… docker kinda needs that

oscarsullivan_old


its always reassuring when somethings written in GO
Why? Because you know GO or because of a special trait of GO?

chrism

Because the code compiles into a single binary per OS / tends to be agnostic

1
chrism

whereas, try faffing with Python + Matlab on Windows, or anything Node on Windows

chrism

its nice to be able to drop a binary and run

chrism

without having to install a crap load of dependencies

1
chrism

When C# can do the same I may change my mind due to familiarity alone

chrism

loyalties are fleeting

Erik Osterman

fargate made easy

Erik Osterman

@oscarsullivan_old you might dig this

2019-03-26

oscarsullivan_old

Anyone attend AWSome day today? Ric Harvey was really good

Alex Siegman

What is this AWSome you speak of?

1

2019-03-25

Is there a way to use a custom certificate with ElastiCache Redis for in-transit encryption? I can’t seem to find a way.

Erik Osterman

Hrmmm good question

Erik Osterman

is there no way to specify the specific ACM cert to use?

Erik Osterman

(haven’t looked)

Doesn’t seem to be. “You don’t have to manage the lifecycle of your certificates because ElastiCache for Redis automatically manages the issuance, renewal, and expiration of your certificates.” Sounds like I’m being ungrateful.

Erik Osterman

Lol

Unfortunately, this means I can’t create a standardized hostname for the Redis cluster

Erik Osterman

Crap, you’re right. That sucks!

Erik Osterman

Guess we have been using the canonical cname

I tried using a CNAME, but it doesn’t seem to work with Tls enabled

Which makes sense given that the certificate subject doesn’t match

Erik Osterman

It’s gotta be the hostname returned by redis

Erik Osterman

Nothing you generate

1

Yeah, exactly

2019-03-22

antonbabenko

Guys, question about EC2 ENI limits as they relate to ECS clusters. I have 2 EC2 instances, each with 2 ENIs, which means I can launch just 2 containers there (right?). My tasks are rather lightweight, so I have a lot of unused resources but need to scale out EC2 instances in the cluster because I need more tasks/containers running. I want to be able to run 10 small tasks on a single EC2 instance, e.g. a t3.large.

Are there better ways to utilise resources and have more ENIs available? I have been evaluating bigger instances also, but they don’t have that many more ENIs relative to their resources.

Can Fargate be a better option in terms of price, to utilise just what I need and get ENIs allocated as requested?

/cc @

ENI limit is only applicable when you are using ECS tasks in awsvpc mode, normally you would use bridged mode with dynamic port allocation.

1
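For reference, a sketch of what that looks like in a task definition (image and memory values are placeholders): with network_mode "bridge" and hostPort 0, each task gets a random host port and the ALB target group tracks it, so the instance ENI count stops being the limit.

resource "aws_ecs_task_definition" "app" {
  family       = "app"
  network_mode = "bridge"

  container_definitions = <<JSON
[
  {
    "name": "app",
    "image": "nginx:alpine",
    "memory": 256,
    "portMappings": [
      {
        "containerPort": 80,
        "hostPort": 0
      }
    ]
  }
]
JSON
}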
antonbabenko

ok, let me read more about that one. Thanks!

chrism

If you create a module with a script in it how do you load the module script in the module? template = "${file("./scripts/userscript.sh")}" Loads based on the working path rather than the module.

chrism

${path.module}

1
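i.e., a minimal sketch of the fix:

data "template_file" "userscript" {
  # path.module resolves relative to the module's own directory,
  # not the caller's working directory
  template = "${file("${path.module}/scripts/userscript.sh")}"
}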
chrism

one of those rtfm moments (aka rtfm, find the manual useless, go to github, find other people’s shit)

1

2019-03-21

Has anyone experienced a delay between when you have an issued certificate in ACM (passed DNS validation and in us-east-1), and when it becomes available for use within CloudFront via the console?

roco
08:17:05 PM

I’m creating a new CF distro, and have 2 available, issued certs in ACM in us-east-1. In CloudFront, the option to choose “Custom SSL Certificate” is not available

aknysh

is that CF distro in us-east-1 as well?

CF Distro is global, not associated to a specific region

ooh nvm – i see them now. Looks like there’s a bit of a lag between when the certificate is validated/issued within ACM and when its available for use with other AWS resources (at least CloudFront distros)

1
Erik Osterman

it used to be at least that ACM certs were required to exist in us-east-1

Erik Osterman

though I think they recently lightened that restriction

Erik Osterman

(for CF distros)

yeah i made sure the ACM certs were in that region… i didn’t change anything, just reopened the new distro console and finally the Custom SSL Certificate radio button became available. but there was at least a 20m lag between when ACM showed it issued and when i could associate it to the CF distro

anyway, thanks for the attention haha

Erik Osterman

cool, thanks for the update

Tim Malone

that lag seems unusual - but might’ve been a temporary/isolated thing

i do hope so! i’m curious to see if anyone else had experienced this…

2019-03-20

aknysh
3
Pablo Costa

Would be nice if they allowed me to insert a custom URL for the X.509 certificate, then I could choose my own cluster URL for the endpoint

3
Pablo Costa

PS. I’m using kubectl through VPN to access the EKS endpoint, but I configured my DNS to only query the VPC resolver for my internal domain, which makes it difficult to resolve the cluster endpoint.

3

2019-03-19

2019-03-18

oscarsullivan_old

Any thoughts on why after having setup an openvpn instance (that does change my IP, confirmed), I still can’t use private IPs from my local to ssh into other machines?

mmuehlberger

Can you reach the machines in any way, like pinging them?

aknysh

security groups?

oscarsullivan_old


aknysh [4:28 PM]
security groups?

Bingo

oscarsullivan_old

yep that’s got to be it

oscarsullivan_old

or not

oscarsullivan_old

ingress 22 for my public IP

mmuehlberger

If you VPN you should have a private IP that needs access.

aknysh

open the SG for all traffic and test if you can access it

oscarsullivan_old

ooo what the heck

oscarsullivan_old

that worked @aknysh

mmuehlberger

Usually OpenVPN will put you in a subnet, and you can give all the subnet IPs SSH access to your machines.

oscarsullivan_old

I did ALL UDP and ALL TCP from anywhere

oscarsullivan_old

~ah~

aknysh

yea, then @mmuehlberger is correct, the VPN uses your private local IP; open the SG for it

oscarsullivan_old

Why would it use my local private IP to ssh

oscarsullivan_old

surely that will change all the time and I can’t possibly open a SG rule for it?

mmuehlberger

It uses your private IP in the VPC, that you get after connecting via VPN.

oscarsullivan_old

Let me try putting the machine in a public VPC

oscarsullivan_old

Ah it already is

oscarsullivan_old

Oh wait not what you’re saying

oscarsullivan_old

Ah so If I allow the CIDR for the private IP of the subnet which I’m tunnelling into…..

aknysh

does your VPN have its own SG?

oscarsullivan_old

Yes

aknysh

then add it to the other SG

oscarsullivan_old

I’m unsure that would work across accounts

aknysh

across accounts you have just a few choices I guess: in the bastion SG, open a hardcoded IP or CIDR from the VPC (not good), or do VPC peering; maybe there are other solutions?

oscarsullivan_old

I have got VPC peering active weirdly

oscarsullivan_old

I have:

MGMT: VPC 1 containing VPN

Sandbox: VPC 2 peered t oVPC 1 containing anything

aknysh

oh, then adding the VPC SG to ingress for the bastion SG should work?

oscarsullivan_old

Oh right, didn’t realise I could reference a SG from another account - hadn’t tried

aknysh

or, you know the CIDR of VPC 1, add it to ingress of VPC 2

oscarsullivan_old

Hmmm easier for me to do the CIDR I suppose

oscarsullivan_old

because I manually set them with terraform

oscarsullivan_old

so I know what they’ll be

oscarsullivan_old

(to avoid overlapping when peering)

aknysh

(don’t remember if SGs accross accounts work with VPC peering)

oscarsullivan_old

(and make it easier to tell things based on IP CIDR)

oscarsullivan_old

ok cool so sounds like CIDR is aactually the best route

aknysh

if you know them and have peering, then yes

oscarsullivan_old

ooooh yeh nice.

Removed the wildcard ingress rules and allowed the CIDR of my VPC

oscarsullivan_old

aknysh

Nice

oscarsullivan_old

And cross account works!

oscarsullivan_old

amazing

mmuehlberger

Great!
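For reference, the rule that ended up working here looks roughly like this, as a sketch (the CIDR and security group names are placeholders for the VPN-side VPC and the instance SG):

resource "aws_security_group_rule" "ssh_from_vpn_vpc" {
  type      = "ingress"
  from_port = 22
  to_port   = 22
  protocol  = "tcp"

  # CIDR of the peered VPC that hosts the VPN
  cidr_blocks = ["10.1.0.0/16"]

  security_group_id = "${aws_security_group.instances.id}"
}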

chrism

anyone know of any magic ways to get Ubuntu 18lts to fuck off caching DNS

chrism

or more concisely stopping it from seemingly caching things with short ttls for all of existence

chrism

nvm only on one box… at least its 5pm

Erik Osterman

is it running nscd?

chrism

systemd resolver; the aws dns in the 3rd zone was caching. I cheated, then checked this morning and it resolved. Sodding TTLs

rohit

Is there a way to log complete request body at loadbalancer level or VPC (AWS ELB/VPC) ?

@rohit Why do you need to log it at the loadbalancer, and not at the server/lambda? Do you suspect the LB modifies the request somehow?

rohit

nope, i don’t see the request body when it reaches nginx

rohit

so was wondering if there is a way to log the complete request body at loadbalancer

I don’t think so. I am surprised you don’t have that capability within nginx

rohit

I mean there is a way to do it in nginx but i want to log the entire request at the loadbalancer before it reaches my app

I guess you could put something in front of the LB like a WAF or CloudFlare workers

I am sure there is a way to do it, but I don’t think AWS ELB has this logging ability

Erik Osterman

Or use an NLB

Erik Osterman

then you’ll see the full unadulterated request body at your app

rohit

thanks @Erik Osterman Will check if i can use NLB

2019-03-15

aknysh
New – Open Distro for Elasticsearch | Amazon Web Services

Elasticsearch is a distributed, document-oriented search and analytics engine. It supports structured and unstructured queries, and does not require a schema to be defined ahead of time. Elasticsearch can be used as a search engine, and is often used for web-scale log analytics, real-time application monitoring, and clickstream analytics. Originally launched as a true open […]

chrism

I hope this leads to some nice alternative tooling for handling auth etal from x-pack for those who just want a bloody search engine

chrism

hope

chrism

the current stuff aws have thrown up is a bit meh though

chrism

bar the perf tool

chrism

i dooooo like that ascii graphing

2019-03-13

Alex Siegman

So in testing reference-architectures stuff, I made a few accounts I don’t want anymore. I went in and closed the accounts but they are still in my org as “suspended” I’ve worked on hundreds of AWS accounts, but I realized today I’ve never closed one. Any clue on if those will eventually go away in my AWS Org?

Alex Siegman

Turns out, you have to talk to support, reinstate the account, do all the steps to make it a standalone account, remove it from the org, then close it.

2019-03-11

2019-03-09

Erik Osterman

It took us about a week as well

Erik Osterman

If you have business support it can be expedited

2019-03-08

what causes random spikes in read/write IOPS on rds databases? where can i look to debug

Erik Osterman

I think RDS snapshots will influence that

hmm no snapshot at the time of the spike

has anyone here migrated hosted zones between aws accounts before?

I followed this guide to the tee, and it seems to have worked (running nslookup/dig shows the new nameservers) https://docs.aws.amazon.com/Route53/latest/DeveloperGuide/hosted-zones-migrating.html

Migrating a Hosted Zone to a Different AWS Account - Amazon Route 53

Migrate a hosted zone from one AWS account to another using the AWS CLI.

i’ve left the old hosted zone up while the switchover occurs (it’s supposed to take up to two days because of dns resolver caching) and it’s been 7 days now.

but it seems like i am getting err_name_not_resolved browser errors. this is happening extremely (emphasis on extremely) rarely, but i was wondering: is the hosted zone cutover not a completely clean process, i.e. is it error-prone?

Erik Osterman

does the NS delegation look good?

Erik Osterman

dig +trace will help you follow the query path

Yes @Erik Osterman

dig +trace gives me what I’m expecting

oscarsullivan_old

Could always open an aws ticket. They take about a week

2019-03-04

Anyone here used ACM’s Private Certificate Authority (CA) to get CA infra out of the box, for use with Kafka for example?

oscarsullivan_old

I didn’t use it but I saved the terraform for setting up an ACM

oscarsullivan_old

do you want??

Ah that’s cool, yes please.

oscarsullivan_old
12:41:31 PM
oscarsullivan_old

It’s been about a month, but I remember thinking “I should save that if I’m not going to use it”.. so I don’t think it’s just the default example lol

Yeah it’s quite expensive

oscarsullivan_old

It had nothing to do with cost for me.. we just manage domains weirdly at my place and have yet to move CA and domain control to AWS

2019-03-01

chrism

Fricking safety gloves… on a command line. Like people will pop it open and accidentally blow Windows up

Are you really doubting that?

People bypass any warnings you put to them, even when it says “this will delete your system”, and once it’s deleted, they write on reddit/twitter about how bad Windows is, as it’s so easily deleted

chrism

They can type sudo rm -Rf /

chrism

you can’t cure stupid

oscarsullivan_old


you can’t cure stupid
No but you can safety-net it

chrism

The darwinian effect of letting the stupid rid the world of themselves is fine with me http://www.weirduniverse.net/blog/comments/tullock_spike

The Tullock Spike

The economic theory of risk compensation suggests that laws intended to increase safety, such as mandating safety belts in cars, can sometimes have

chrism

Realistically though if you broke your WSL install, you can just remove + reinstall it. It’s supposed to be an app

oscarsullivan_old

I’ve figured it out… watch this space https://github.com/osulli/aws-multi-account-setup

osulli/aws-multi-account-setup

A guide to getting multiple AWS accounts linked in an orgainsation and sharing relevant resources with the end goal of using Terraform against different accounts for different stages. - osulli/aws-…

oscarsullivan_old

Ok.. published! Would appreciate someone suggesting a good way around the limitation listed

oscarsullivan_old

Does anyone know how to use aws-vault login x with SSO / Federation? There’s clearly some sort of support in https://github.com/99designs/aws-vault/blob/master/cli/login.go but I can’t work out what config I’m missing in ~/.aws/config… I think it’s missing session token from the SSO portal?

99designs/aws-vault

A vault for securely storing and accessing AWS credentials in development environments - 99designs/aws-vault

Erik Osterman

aws-vault does not support SSO

1

thanks for the answer

1

I asked the same thing some days ago, and this confirms my suspicion

1
Erik Osterman

for that, you need a purpose built tool

Erik Osterman

e.g. aws-okta for okta (by segmentio)

Erik Osterman

ther are others for gsuite, etc

oscarsullivan_old

I’m only using AWS SSO

Erik Osterman

Hrm…. I haven’t searched for a cognito cli for aws.

Erik Osterman

Let me know if you come across one.

Erik Osterman

Ideally, a self contained binary

oscarsullivan_old

I’m going to create a new root account that my company isn’t currently using and try reference architecture. Feel like all my problems stem from avoiding it!

If I am only using the master node for Redis in my application, is there any advantage to having more than 1 replica in ElastiCache cluster?

2019-02-28

oscarsullivan_old

AWS Organizations and AWS SSO setup guide here: https://github.com/osulli/aws-sso-setup

osulli/aws-sso-setup

A guide on how to setup AWS Organizations and AWS SSO and an example permissions matrix. - osulli/aws-sso-setup

Do you know if this works with things like aws-vault?

osulli/aws-sso-setup

A guide on how to setup AWS Organizations and AWS SSO and an example permissions matrix. - osulli/aws-sso-setup

oscarsullivan_old

@ I don’t really see why not. The only trouble is if you create a resource on one account you have to create policies to share it. So for instance, I can share my S3 bucket storing my state files to all my accounts reasonably easily… but my dynamodb that contains the lock hash… not proving so easy to allow access from other accounts!

Well im not sure, since I could not find any docs to get SSO to work with aws cli even, with assume roles or something

chrism

If you set up WSL via the store (as they say you should now), geodesic fails to map the path. It assumes it’s still in local/lxss when in reality it’s in AppData\Local\Packages\CanonicalGroupLimited.Ubuntu18.04onWindows_79rhkp1fndgsc

Erik Osterman
WSL Bugs · Issue #379 · cloudposse/geodesic

what Not working on some WSL environments Here is a list of the changes I had to make to get root.cloud.posse script to run on WSL: In root.*.com: DOCKER_NAME was using $NAME environment variable. …

Erik Osterman

Does this help?

chrism

Guess this needs to be smarter

chrism
cloudposse/geodesic

Geodesic is a cloud automation shell. It&#39;s the fastest way to get up and running with a rock solid, production grade cloud platform built on top of strictly Open Source tools. ★ this repo! h…

chrism

joys of changing

chrism

infact

chrism

simpler

chrism

the if/else just needs reversing; check for canonical… but i imagine that might have unexpected consequences

chrism

or msft need to stop being random af

chrism

chrism

I’m taking the lazy way out and just blowing the folder away, seeing as MS’ command for fully removing lxss didn’t actually do what anyone of reasonably sound mind would call “FULLY remove”

chrism

kinda stuff that makes you think ubuntu desktop may be the future; then you remember how much that blows

chrism

AWS v2 providers out

chrism
terraform-providers/terraform-provider-aws

Terraform AWS provider. Contribute to terraform-providers/terraform-provider-aws development by creating an account on GitHub.

chrism

aka as good a time as any to add that version = “1.60.0” to your provider

Erik Osterman

@chrism any chance we could get your help improving that path extrapolation? I’d be happy to jump on a call

chrism

Think it might be worth a ticket on WSL to add some sort of inbuilt var of winpath

chrism

He asks, and the lord microsoft giveth

chrism

already exists

chrism

wslpath -wa . in WSL returns the windows path to the current folder

chrism
MicrosoftDocs/WSL

Source code behind the Windows Subsystem for Linux documentation. - MicrosoftDocs/WSL

chrism

Been around a while too

chrism

I shall give that a stab

chrism
wslpath command does not return the windows path for home directory · Issue #3146 · Microsoft/WSL

$ wslpath -w ~ wslpath: /path/to/home: Result not representable Is this intended behavior? I expect following result: $ wslpath -w ~ C:\Users\mkt\AppData\Local\Packages\CanonicalGroupLimited.Ubuntu…

Erik Osterman

Hah! Damn… looked like a good option

Erik Osterman
Erik Osterman

Just windows being windows. Getting in the way.

2019-02-27

Erik Osterman

has anyone seen something to enforce arbitrary limits on the number of specific kinds of resources which match a given tag?

Erik Osterman

basically a policy enforcement engine for aws which would let us say untagged instances are automatically destroyed

Erik Osterman

if the count of instances with tag Developer=foobar, is greater than X, then alert

Erik Osterman

if the count of instances with tag Developer=baz is greater than Y, then kill until less than Z

Erik Osterman
RiotGames/cloud-inquisitor

Enforce ownership and data security within AWS. Contribute to RiotGames/cloud-inquisitor development by creating an account on GitHub.

2019-02-21

Erik Osterman
AWS Firecracker: 10 things every tech pro should know

AWS Firecracker is tiny, efficient, fast, and might redefine the virtual machine. Here’s what you need to know about this AWS product.

I have been getting a lot of We currently do not have sufficient {instance-type} capacity in the Availability Zone you requested {zone}. messages in EC2. Is this common? Does AWS address these capacity issues?

Erik Osterman

this is where RIs help

Erik Osterman

…as they provide capacity reservations

Erik Osterman

@ can’t say we’ve been seeing it lately, but it highly depends on (a) the region you’re operating in (b) the type of instance

Erik Osterman

for example, we’ve seen this in the past when AWS has had a regional zone failure in a region and everyone auto scales out to the other zones

So I need to purchase RIs for the specific AZ?

Erik Osterman

yes, the “capacity” reservations are tied to the AZ, but the cost savings span all AZs

Erik Osterman

though AWS has been revamping this, so maybe it’s easier now?

At least that’s something

@Erik Osterman Do you know if there is a way to attach the RIs to an ASG? I purchased a couple, but they seem to immediately be used up on existing running instances.. which makes sense in retrospect

Erik Osterman

RIs are not a directly addressable resource from the EC2 perspective

Erik Osterman

it’s a billing instrument

Yeah, but when it comes to capacity

I want PROD instances to take priority over non-PROD, if that makes sense

Erik Osterman

but they are not running in the same account, right? so that won’t happen

Erik Osterman

i am not sure how to prioritize RI capacity reservations within an account

It’s a legacy pre-SweetOps account

1

I am locked for some customers due to not being able to move around NAT elastic IPs, and whitelisting changes are a pain

#HarshRealityOps ?

Erik Osterman

yea, there are in the end always some things outside of our control

Erik Osterman

the NAT IP thing comes up regularly

2019-02-19

chrism

When you waste time writing code around aws config … then look at the pricing.

joshmyers

@chrism Wrote https://github.com/alphagov/pay-aws-compliance a while back to help with auditing of AWS resources (and some others) - run as a scheduled Lambda - cheap!

alphagov/pay-aws-compliance

Contribute to alphagov/pay-aws-compliance development by creating an account on GitHub.

1
chrism
toniblyx/prowler

AWS Security Best Practices Assessment, Auditing, Hardening and Forensics Readiness Tool. It follows guidelines of the CIS Amazon Web Services Foundations Benchmark and DOZENS of additional checks …

3
chrism

which is pretty nice

chrism

I’ll add the alpha gov one to my list

joshmyers

Prowler looks pretty full featured, but bash!

chrism

Well we run it via teamcity within docker and output the report as an artifact

1
chrism
12:21:32 PM

plops it in a report tab; though I need to look at the background colour of the html at some point.

joshmyers

Nice

joshmyers

Can hook up Lambdas to SNS to send email reports etc too

chrism

cuts down on the drudgery

chrism

It’s a nice tool

joshmyers

Looks good

chrism

Drowning in things analysing aws; though Im yet to get anything of value from GuardDuty beyond 3rd parties appreciating it as a tick box

joshmyers

Never looked at GuardDuty

joshmyers

It’s impossible to keep up with all the AWS services hah

chrism

they certainly dont miss a trick; just wish they’d have a chat with other areas of aws before building them

chrism

annoying levels of disconnect between products; you set up your multi-region cloudtrail… and they bring out guard duty that has to be enabled in each region. Oh, and you can hook it up to another account so you can read the guard duty events of prod/testing etc. from your root account…. but you have to do it per region, per account, via invites

chrism

… one of those moments you terraform the creation and realise you’re going to have to go hand-ball all the invite acceptance

chrism

They almost got it right, but then didnt

chrism

Pretty cheap though compared to most aws things so it has that going for it
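On the invite hand-balling: the AWS provider does have aws_guardduty_member and aws_guardduty_invite_accepter, so the acceptance can at least be terraformed per region. A sketch, assuming provider aliases for the two accounts and var.* placeholders:

resource "aws_guardduty_detector" "master" {
  provider = "aws.master"
  enable   = true
}

resource "aws_guardduty_detector" "member" {
  provider = "aws.member"
  enable   = true
}

# invite the member account from the master side
resource "aws_guardduty_member" "member" {
  provider    = "aws.master"
  detector_id = "${aws_guardduty_detector.master.id}"
  account_id  = "${var.member_account_id}"
  email       = "${var.member_email}"
  invite      = true
}

# accept the invite from the member side (still one per region)
resource "aws_guardduty_invite_accepter" "member" {
  provider          = "aws.member"
  detector_id       = "${aws_guardduty_detector.member.id}"
  master_account_id = "${var.master_account_id}"
}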

chrism
12:29:42 PM
chrism

meh, think i’ll just configure the hell out of existing scanners. AWS config’s one of those things that looks like it’ll drive me up the f’ing wall rather than save me time

chrism

I was looking at https://github.com/cloudposse/terraform-aws-cloudwatch-flow-logs but it doesnt seem to fit with the ref-architecture way of splitting audit off / storing to s3. its just flowlog>kinesis>CW

cloudposse/terraform-aws-cloudwatch-flow-logs

Terraform module for enabling flow logs for vpc and subnets. - cloudposse/terraform-aws-cloudwatch-flow-logs

chrism

its kinda either or; I may be overthinking this

joshmyers

Not sure on that one tbh, but have you looked at flowlogs or thought about what you wanna do with them?

joshmyers

Have used mainly as a checkbox exercise before

joshmyers

Creating/managing alerts/dashboards with them is quite expensive in terms of mgmt cost

chrism

Yeah tbh the idea seems utterly pointless for the most part

chrism

tickboxes for everyone

1
chrism

Let’s put it this way: if you’re having to dig into TBs of flow logs to find evidence of something, you’ve missed a bigger step elsewhere

joshmyers

How else are you going to find out how much data was exfiltrated from your environment for realz?!

chrism
12:51:04 PM
chrism

nothing like a terraform-github safari to find how other people dealt with stuff, and realising you’ve been here before

joshmyers

lol

chrism

reaches for the star already starred

joshmyers

seriously though, storing flowlogs in something like $LOGGING_SAAAS_PROVIDER generates so much data, is not cheap to do anything proactive with the data

joshmyers

cloudwatch would be cheaper but limited

chrism

More worrying is when our SIEM tool hooks on, and reprocesses the same data

chrism

You know your siem monitoring is worth the cash when you terraform up 30 machines in vsphere; tear them all down and they alert you a day later

1
chrism

Well worth the email notification at 7am on a saturday

chrism

GuardDuty will also look for compromised EC2 instances talking to malicious entities or services, data exfiltration attempts, and instances that are mining cryptocurrency.

chrism

well at least guard duty covers the ex-filtration side; though id be interested to know how that works if its someones shitty app code being abused to haul data out

chrism

my feeling is it wouldnt notice

1
joshmyers

Bet it won’t pick up exfiltrating to an attacker owned S3 bucket

chrism

“its in aws, everything is awesome”

chrism

best place to pull data to; aws would take an age to lock you out and the transfers that fast you’ve less chance of the owner locking you out before you get it

joshmyers

Need to watch out for that too if using S3 endpoints and aren’t locking down your buckets it bypasses any egress proxies you may have in place

joshmyers

Found that out during a pentest :D

chrism

we only really have public stuff in buckets; outside of the whole snowplow usage, and that’s all anonymised noise

chrism

keeping data in SAAS, aka flick switch for encryption, flick switch for firewall, flick switch for logs

1
chrism

after years of terraform its nice at times to be able to keep shit pretty secure and not have to spend days tweaking bollocks to get there

chrism

pretty secure, as nothing is totally secure

chrism

I need to look at s3 bucket security again as they love to change things

chrism

The UI change was nice, more explicit options to not allow buckets to become public by accident

chrism

Don’t wholly understand why they’re terrified of making it default-on

chrism

the IAM stuff around S3 is pretty powerful; ridiculously so compared to Azures

joshmyers

yeah but too many folks left things open which leads to breaches

chrism

I mean why they don’t default it to most secure and make people open it up. They’ve made the UI better but it wont stop people doing dumb shit

chrism

I mean, I know it won’t; I’m still sat skimming TLS updates for bucket names

joshmyers

ah, hadn’t spent too much time poking in the UI recently

chrism

our original account is old as f’; so where no terraform lives, one has to spelunk in aws ui.

chrism

Compared to azure portal though everythings a dream

joshmyers

hah, so I’ve heard

chrism

I imagine its where the Windows 8 UI creator ended up as punishment

1
chrism

terraform and azure sounded like a dream come true then I used it in anger for a month lots of anger lots and lots

chrism

3-machine cluster, 22 minutes… still not complete. TIMES THE HELL OUT

chrism

wasn’t very impressive

2019-02-18

Maciek Strömich

do you observe RDS hostname resolution issues in us-east-1?

2019-02-14

Abel Luck

The cloudfront docs on which types of certs are valid are very confusing. It seems you can use an ECC cert between cloudfront <—> origin but only an RSA cert between viewer <—> cloudfront, but i can’t find this explicitly stated anywhere

Erik Osterman

hrm…. not something we’ve run in to

Erik Osterman

(that said, we have some modules for cloudfront and acm)

Erik Osterman
05:17:41 AM

@Erik Osterman set the channel purpose: Discussion related to Amazon Web Services (AWS) Archive: https://archive.sweetops.com/aws/

2019-02-13

Maciek Strömich

Hey, did anyone upgrade dc1 to dc2 redshift clusters using Cloudformation? I know there are 2 paths using snapshot-restore and elastic resizing but I don’t want to cause a drift from CF state doing it manually and I wonder if CF does handle the upgrade in a data-loss-less way

Maciek Strömich

FYI, if anyone is interested: upon a dc1 to dc2 redshift upgrade via cloudformation, data is migrated. The only thing to remember is that during the migration the cluster is available only in read-only mode

Erik Osterman
AWS Limit Monitor – AWS Answers

Automatically monitor your AWS service usage and receive notifications as you approach limits.

Nikola Velkovski

Morning people, can anyone help me with how ECS decides to place tasks when multiple ordered_placement_strategy blocks are used? Given the following example: I’ve got 4 tasks (dockers) and 2 machines in 2 different AZs

  # strategies are applied in order: first spread tasks evenly
  # across availability zones...
  ordered_placement_strategy {
    type  = "spread"
    field = "attribute:ecs.availability-zone"
  }

  # ...then, within each AZ, binpack on memory
  ordered_placement_strategy {
    field = "memory"
    type  = "binpack"
  }

Will ECS stop bringing tasks up because the first placement block says to spread across AZs, or will something entirely different happen? Anyone got any ideas?

Are they 4 tasks of 1 service? could you explain a bit more

Nikola Velkovski

SO

  • 1 service
  • 4 tasks
  • 2 instances

And it places 2 and stops?

Or you are just asking?

If asking, then it should put 2 Tasks per Instance

Nikola Velkovski

damn this slack doesn’t notify me when I get replies from a thread…

Nikola Velkovski

Ok, so the situation is even weirder because we are using awsvpc mode and it has restrictions on ENIs…

Nikola Velkovski

meaning that the spread based on AZ is not needed

Nikola Velkovski

because t2.small can have only 2 tasks per machine ..