#aws (2020-05)

aws Discussion related to Amazon Web Services (AWS)

Archive: https://archive.sweetops.com/aws/

2020-05-01

vFondevilla avatar
vFondevilla

Yes, they said that the 1.15 slowness was exceptional, as they were refining processes for faster support of future Kubernetes versions

1
github140 avatar
github140

Hi, I’d like to provide a file in S3 to users in an easy way, meaning no additional client software and no access-key overhead. Therefore I thought about loading it into SharePoint Online, as that is the corporate platform and includes proper authentication. Unfortunately I haven’t found a connector yet. Has anybody had a comparable challenge, or solutions or thoughts to share?

Matt avatar

RDS question: I’m seeing this when attempting to create a slot in Postgres:

ERROR:  must be superuser or replication role to use replication slots
Matt Gowie avatar
Matt Gowie

If I remember correctly, RDS doesn’t actually give real superuser privileges. It’s a big drawback. Let me look up a link for you.

Matt Gowie avatar
Matt Gowie
Why can't I create a superuser in AWS Postgresql instance?

I have an AWS EC2 instance connecting to an RDS instance (Postgresql). When I created the RDS instance, I told it the DB root’s username was: my_user1 and the password was password1. Now I’m attem…

Matt avatar

lol, yeah

Matt avatar

I ended up needing/creating a logical replication slot

Matt Gowie avatar
Matt Gowie

Cool — Glad you figured it out. Similar thing tripped me up a couple months back and I was pretty surprised by that one.

Matt avatar

It’s funny, I’ve been using AWS for years. . . so many rabbit holes

Matt Gowie avatar
Matt Gowie

Yeah — Learn something new all the time haha.

Matt avatar

However. . . I am authenticated as the superuser

msharma24 avatar
msharma24

AWS has launched new Cert badges https://www.youracclaim.com/earner/earned

Acclaim

Acclaim is an enterprise-class Open Badge platform with one goal: connect individuals with better jobs. We partner with academic institutions, credentialing organizations and professional associations to translate learning outcomes into web-enabled credentials that are seamlessly validated, managed and shared through Acclaim.

2020-05-03

omerfsen avatar
omerfsen

guys, what do you recommend for holding Kubernetes Secrets on GitHub? Has anyone used Bitnami’s SealedSecrets?

maarten avatar
maarten

Why do they need to be on GitHub?

omerfsen avatar
omerfsen

where do you store them

omerfsen avatar
omerfsen

i need best practices

omerfsen avatar
omerfsen

SSM or Vault?

maarten avatar
maarten

• You can store them in k8s secrets directly.

• You can store them in your pipeline directly and have k8s secrets populated from there (see the sketch after this list)

• You can use any other secret manager like vault & ssm together with the pipeline to populate the secrets

• You can use ssm or vault directly from the container or container helper
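
A minimal sketch of the pipeline-populated option above, assuming the value already lives in SSM Parameter Store (the /app/db_password path and secret names here are hypothetical):

# Pull the value out of SSM and create/update the k8s Secret from the pipeline.
DB_PASSWORD=$(aws ssm get-parameter --name /app/db_password \
  --with-decryption --query Parameter.Value --output text)
kubectl create secret generic app-secrets \
  --from-literal=DB_PASSWORD="$DB_PASSWORD" \
  --dry-run=client -o yaml | kubectl apply -f -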

omerfsen avatar
omerfsen

I’m doing the second one, but I need to secure it more and also make it versioned, so I need some sort of SCM capability

maarten avatar
maarten

are you only using it for passwords, or all kinds of config items?

omerfsen avatar
omerfsen

only password

maarten avatar
maarten

Do you really need SCM on a password which you probably have stored somewhere else already?

Raymond Liu avatar
Raymond Liu

Hi, do you use Helm to deploy? The Helm secrets plugin can help encrypt secret information, and you can commit the encrypted file into Git. Decryption is invoked automatically when you deploy via Helm

omerfsen avatar
omerfsen

@maarten for GitOps stuff

Raymond Liu avatar
Raymond Liu
zendesk/helm-secrets

A helm plugin that help manage secrets with Git workflow and store them anywhere - zendesk/helm-secrets

omerfsen avatar
omerfsen
The GitOps FAQattachment image

​This is a living document that is meant to address the most frequently asked questions about GitOps. What you see today will evolve as we all learn more together. Contact us if you have any comments or questions you’d like to see answered on this page.

omerfsen avatar
omerfsen

Sealed-secrets solves the following problem: “I can manage all of my Kubernetes configs in Git, except Secrets.” Sealed Secrets is a Kubernetes Custom Resource Definition Controller that lets you store sensitive information (aka secrets) in Git.

Raymond Liu avatar
Raymond Liu

I never use Sealed-secrets, but I believe the concept is very similar as other secrets management in version control

Raymond Liu avatar
Raymond Liu

However, the key point is how to store your private key.

1
Raymond Liu avatar
Raymond Liu

In AWS, you can store private key in KMS

1
omerfsen avatar
omerfsen

Currently I use KMS to encrypt my passwords etc. and put the output string in the terraform.tfvars file

omerfsen avatar
omerfsen

I want something better

omerfsen avatar
omerfsen

I randomly generate a password, encrypt it via KMS, put it in terraform.tfvars (as the encrypted KMS output), then re-read and decrypt it to create Kubernetes Secrets (which are not encrypted, only base64 encoded)

omerfsen avatar
omerfsen

Maybe I can talk with the developers so they can change the code to use injected variables encrypted with AWS KMS instead of using them in plaintext

maarten avatar
maarten

A pattern/workflow I used is this: have Terraform create a random password, use that password to set the DB password, and store it in SSM for later use by the application or deployment.
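
A rough CLI equivalent of that workflow (maarten describes it with Terraform; this sketch swaps in the AWS CLI, and the identifiers are made up):

# Generate a random password, set it on the RDS instance,
# and stash it in SSM for the application or deployment to read later.
PASS=$(openssl rand -base64 24)
aws rds modify-db-instance --db-instance-identifier mydb \
  --master-user-password "$PASS" --apply-immediately
aws ssm put-parameter --name /myapp/db_password \
  --type SecureString --value "$PASS" --overwrite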

roth.andy avatar
roth.andy

+1 for Sealed Secrets. Great tool that is completely cloud agnostic

1
omerfsen avatar
omerfsen

Implemented SealedSecrets; it is very easy to install/maintain. AWS also recently published a blog post about it: https://aws.amazon.com/blogs/opensource/managing-secrets-deployment-in-kubernetes-using-sealed-secrets/

Managing secrets deployment in Kubernetes using Sealed Secrets | Amazon Web Servicesattachment image

Kubernetes is an open source system for automating the deployment, scaling, and management of containerized applications. It is especially suitable for building and deploying cloud-native applications on a massive scale, leveraging the elasticity of the cloud. Amazon Elastic Kubernetes Service (Amazon EKS) is a managed service for running a production-grade, highly available Kubernetes cluster on […]
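
For anyone curious, the basic SealedSecrets workflow looks roughly like this (a sketch; the names are hypothetical, and kubeseal must be pointed at the controller running in your cluster):

# Build a plain Secret locally, encrypt it with the controller's public key,
# and commit only the sealed version to Git.
kubectl create secret generic db-creds \
  --from-literal=password='s3cr3t' \
  --dry-run=client -o yaml > secret.yaml
kubeseal --format yaml < secret.yaml > sealed-secret.yaml
rm secret.yaml                        # never commit the plaintext Secret
kubectl apply -f sealed-secret.yaml   # the controller unseals it into a Secret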

Asrar avatar

Hi all, I am trying to figure out how to get CloudFront working with my Elastic Beanstalk app. I used this Terraform module https://github.com/cloudposse/terraform-aws-elastic-beanstalk-environment to set up my Beanstalk app. I went for a NAT instance instead of a NAT gateway for cost reasons, and I have 2 instances running. My domain is managed as part of G Suite; while I have permission to add a CNAME entry or similar, I can’t do much more. I tried to manually set up CloudFront and selected the load balancer resource from the dropdown as the origin domain. Apart from that I left most of the page at the defaults, except whitelisting one cookie. When I try to hit the URL generated for CloudFront I keep getting a 502 error: wasn’t able to connect to origin. My SSL certificates for my domain are managed by AWS ACM, but my DNS is managed by G Suite. From what I understand I will eventually have to update my DNS to point to CloudFront instead of the load balancer as it does now, but before I do that I want to make sure the CloudFront URL is correctly configured

cloudposse/terraform-aws-elastic-beanstalk-environment

Terraform module to provision an AWS Elastic Beanstalk Environment - cloudposse/terraform-aws-elastic-beanstalk-environment

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

is your app running OK from the Beanstalk (not CloudFront) URL (e.g. `myapp.us-east-1.elasticbeanstalk.com`)?

Asrar avatar

I managed to get it working. I had to create an ACM cert in Virginia (us-east-1) in order for it to allow me to use my own domain. My load balancer was rejecting the HTTPS request, I think because the referrer was not correct, so the HTTPS negotiation was failing.

Asrar avatar

Now I just need to figure out how to terraform it

2020-05-04

2020-05-05

kenny avatar

is it possible to specify a specific data center using CloudFormation (for example, I want to spin up subnets in the same data center as another account’s subnets)? Since us-east-1a might not be the same physical location for me as for someone else, how can this be done via CloudFormation?
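
One detail that helps here: AZ names like us-east-1a are shuffled per account, but AZ IDs (e.g. use1-az1) refer to the same physical location in every account, and you can look up the mapping:

# Map this account's AZ names to the account-independent AZ IDs.
aws ec2 describe-availability-zones --region us-east-1 \
  --query 'AvailabilityZones[].{Name:ZoneName,Id:ZoneId}' --output table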

RB avatar

noticed some of our devs were using an ELB just for a health check, so the ELB itself has 0 requests. I understand that target groups have a health check and that ECS can leverage the HEALTHCHECK argument in Dockerfiles.

how do people here do health checks for applications without an ELB? do you use a target group? or ECS / Fargate with a healthcheck arg in the container?

if a target group, can it be used without an ALB?

also wondering if anyone has used Route53’s health checks in lieu of the above?

Zach avatar

I take it these are applications that don’t serve web traffic then? They’re some sort of data/job processors?

RB avatar

yes for the most part if I understand those services correctly

Zach avatar

having an ELB do nothing but perform healthchecks is a bit expensive for what it’s doing; granted it’s like $20/month with no traffic, but still (and a bit absurd)

RB avatar

agreed!

Zach avatar

Probably best to have a conversation with them and figure what they’re trying to do and why …

Matt Gowie avatar
Matt Gowie

@RB I use the healthcheck configuration in ECS task definitions on most projects and wire it up to a script that either checks the PID of the primary process or if it’s a web server then executes a curl against that process.

ECS docs: https://docs.aws.amazon.com/AmazonECS/latest/developerguide/task_definition_parameters.html#container_definition_healthcheck

That’s if you need customization beyond Docker’s HEALTHCHECK. I personally haven’t used the Docker healthcheck before.

Task Definition Parameters - Amazon Elastic Container Service

Task definitions are split into separate parts: the task family, the IAM task role, the network mode, container definitions, volumes, task placement constraints, and launch types. The family and container definitions are required in a task definition, while task role, network mode, volumes, task placement constraints, and launch type are optional.

2
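
A sketch of what that healthCheck block looks like in a container definition (the image, command, and family here are placeholders):

# containers.json -- container definition with an ECS-level health check
cat > containers.json <<'EOF'
[
  {
    "name": "web",
    "image": "nginx:1.19",
    "essential": true,
    "memory": 256,
    "healthCheck": {
      "command": ["CMD-SHELL", "curl -f http://localhost/ || exit 1"],
      "interval": 30,
      "timeout": 5,
      "retries": 3,
      "startPeriod": 10
    }
  }
]
EOF
aws ecs register-task-definition --family web \
  --container-definitions file://containers.json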
RB avatar

isn’t it the same thing?
The container health check command and associated configuration parameters for the container. This parameter maps to HealthCheck in the Create a container section of the Docker Remote API and the HEALTHCHECK parameter of docker run

Matt Gowie avatar
Matt Gowie

@RB Ah, it is the same. The Task Def just provides a wrapper around those same configuration arguments. I hadn’t looked deeply enough at the Dockerfile HEALTHCHECK ref or the Docker run ref. Didn’t know it provided that level of granularity, so I thought AWS was providing added configuration as a benefit.

Dmitry Kireev avatar
Dmitry Kireev

TIL

2020-05-06

loren avatar

Great summary of aws networking… Breathe it in, we’re all network engineers in the cloud… https://blog.ipspace.net/2020/05/aws-networking-101.html

AWS Networking 101

There was an obvious invisible elephant in the virtual Cloud Field Day 7 (CFD7v) event I attended in late April 2020. Most everyone was talking about AWS, how their stuff runs on AWS, how it integrates with AWS, or how it will help others leapfrog AWS (yeah, sure…). Although you REALLY SHOULD watch my AWS Networking webinar (or something equivalent) to understand what problems vendors like VMWare or Pensando are facing or solving, I’m pretty sure a lot of people think they can get away with CliffsNotes version of it, so here they are ;)

3
RB avatar
RB
08:48:03 PM

anyone know of a terraform module for OpenGrok or a similar code search app that can be run in AWS?

2020-05-07

btai avatar

aws global accelerator is the bees knees

1
Zach avatar

We tried it out for a few days and then immediately moved everything to it

1
btai avatar

@Zach would you say it’s pretty cheap?

Zach avatar

yah, if you aren’t pushing boatloads of data out

Zach avatar

it’s negligible

Zach avatar

we used it so that we could have a static IP on our entry ALB, and to move them private at the same time

btai avatar

we’re getting charged roughly $400/mo for ALB bandwidth

btai avatar

I’m excited about moving the ALB to private, and we’ll benefit from the static IP as well

Zach avatar

Ah, you are in a different tier of traffic than we are then haha

btai avatar

I assume global accelerator is a subset of that?

btai avatar

so we’ll probably pay maybe $600/mo

Zach avatar

oh, yah, it’s on top of the ALB costs.

btai avatar

@Zach do u also use a CDN?

vFondevilla avatar
vFondevilla

I’m waiting on some IP whitelisting for using it. But loving the idea.

Milosb avatar

Hi, I am a little bit confused regarding AWS Secrets Manager. If I have, let’s say, one AWS secret with 4 key/value pairs, is that considered one secret or 4 secrets?

btai avatar

one!

sheldonh avatar
sheldonh

You can do a single-value secret or you can store multiple. For instance I did an RDP users secret and set up multiple users on local systems, while my RDS logic uses one for admin. It is a little clunky getting the values out, as the key ends up being the username, from what I remember. Basically it was like: $secret.MyName = $secret.MyName.Value or something like that. I found it confusing at first, and later I tried to avoid embedding multiple values and instead stuck with single-value secrets when possible
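
For reference, pulling one key out of a multi-key secret with the CLI looks like this (the secret name and key are hypothetical; billing-wise it is still a single secret):

# SecretString comes back as one JSON blob when the secret holds
# multiple key/value pairs; extract a single key with jq.
aws secretsmanager get-secret-value --secret-id prod/rdp-users \
  --query SecretString --output text | jq -r '.alice'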

Milosb avatar

My understanding is that it should be one secret, but who knows

2020-05-08

RB avatar

is there a way to do a rolling deploy in ECS without updating the task definition ?

jose.amengual avatar
jose.amengual

not if you want to tag the images

jose.amengual avatar
jose.amengual

if you are doing “latest” then you could

RB avatar

yes, let’s assume the docker image stays the same

RB avatar

so the image doesn’t need to change in the task definition

jose.amengual avatar
jose.amengual

then you can

RB avatar

how?

jose.amengual avatar
jose.amengual

you set min to 50% and max to like 200%

jose.amengual avatar
jose.amengual

you make sure you have enough resources to run more than one task

jose.amengual avatar
jose.amengual

but this is not blue green

jose.amengual avatar
jose.amengual

so you will have old/new containers running at the same time

jose.amengual avatar
jose.amengual

while all get updated

RB avatar

I was just having an issue finding this in the CLI

jose.amengual avatar
jose.amengual

one sec

RB avatar

i think i found it: https://stackoverflow.com/a/50497611/2965993

AWS ECS restart Service with the same task definition and image with no downtime

I am trying to restart an AWS service (basically stop and start all tasks within the service) without making any changes to the task definition. The reason for this is because the image has the la…

RB avatar

I’m dumb for not finding that earlier

RB avatar

does that command align with yours, @jose.amengual?

jose.amengual avatar
jose.amengual

we do this:

jose.amengual avatar
jose.amengual
aws ecs update-service --cluster ${clusterName} --service ${serviceName} --force-new-deployment
RB avatar

brilliant, thank you very much!

jose.amengual avatar
jose.amengual

lol it’s the same command

jose.amengual avatar
jose.amengual

the key is on the deployment config

jose.amengual avatar
jose.amengual

not the command
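
Putting the two pieces together, a sketch with placeholder cluster/service names:

# Set the rolling-deploy window once, then force a new deployment of the
# same task definition; old and new tasks overlap while the service rolls.
aws ecs update-service --cluster my-cluster --service my-service \
  --deployment-configuration minimumHealthyPercent=50,maximumPercent=200
aws ecs update-service --cluster my-cluster --service my-service \
  --force-new-deployment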

RB avatar

looks like this can be even easier using a task set, but currently this is not in terraform yet

Support ECS TaskSet · Issue #8124 · terraform-providers/terraform-provider-aws

Community Note Please vote on this issue by adding a reaction to the original issue to help the community and maintainers prioritize this request Please do not leave "+1" or "me to…

jose.amengual avatar
jose.amengual

correct

2020-05-09

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)
EC2 Price Reduction – For EC2 Instance Saving Plans and Standard Reserved Instances | Amazon Web Servicesattachment image

It is my great pleasure to tell you about a price reduction for Amazon Elastic Compute Cloud (EC2) customers who plan to use Standard Reserved Instances or EC2 Instance Saving Plans. The price changes are already in effect, and so anyone buying new RIs or a new EC2 Instance Saving Plan will be able to […]

2020-05-10

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)
5 AWS Services You Should Avoid!attachment image

Get ready for some personal and definitely opinionated opinions!

Matt Gowie avatar
Matt Gowie

He makes some decent points based on his own experience, but it seems in a number of those cases he should have RTFM’d and done some service comparison before jumping head-first into those services.

The Lambda one is pretty funny; it sounds like the worst nightmare of them all if that’s the implementation path their team went down.

1
maarten avatar
maarten

Can’t take the list seriously as Elastic Beanstalk is not even mentioned

1

2020-05-11

mfridh avatar

aws-okta is on hiatus? Is someone picking that up? … or did you all move on to more hipster solutions? I think aws-okta at least has very good usability. Plus, I did this to get rid of some annoying obstacles: https://github.com/frimik/aws-okta-tmux

frimik/aws-okta-tmux

Wrapper that makes aws-okta cred-process communicate with the user via tmux panes. - frimik/aws-okta-tmux

Zach avatar

Did not seem like anyone was going to pick up aws-okta maintenance, so we moved to gimme-aws-creds (maintained by Nike)

loren avatar

i’ve really been impressed by aws-okta-processor, https://github.com/godaddy/aws-okta-processor/

godaddy/aws-okta-processor

Okta credential processor for AWS CLI. Contribute to godaddy/aws-okta-processor development by creating an account on GitHub.

1
RB avatar

how do these methods work for delivering aws creds ?

Zach avatar

you configure Okta as a SAML provider to AWS, and these tools allow you to assume a role in the account. It then passes back a temporary credential pair

loren avatar

and if they do it right, they refresh the aws temporary cred automatically (until your Okta session expires, then it re-prompts)

Zach avatar

^ we haven’t gotten that part to work correctly

loren avatar

try aws-okta-processor

Zach avatar

I’ve only just managed to get everyone switched to gimme-aws … I might get tarred and feathered if I make anyone swap again

1
loren avatar

it’s the only tool i’ve used that gets caching of both aws and okta sessions right, and works with mfa, and works with credential_process, and manages stdout/stderr well enough to work with pretty much any tool built for any sdk

loren avatar

plus the maintainers have so far been super responsive and open to prs

loren avatar

well, it’s your box, authenticate however you like. if they keep using gimme-aws-creds that’s fine, you can enjoy the goodness of aws-okta-processor

jose.amengual avatar
jose.amengual
Rotating AWS Secrets Manager Secrets for One User with a Single Password - AWS Secrets Manager

Automatically rotate your secrets by configuring Lambda functions invoked automatically on a schedule that you define. You can use this scenario for authentication systems that allow only changing the password. For a list of the different available scenarios, see .

jose.amengual avatar
jose.amengual

I wonder how effective it is? There seems to be quite a bit of config + Lambda just to enable this feature; when I compare this to HashiCorp Vault it seems more convoluted

maarten avatar
maarten

@jose.amengual what will you be using it for?

maarten avatar
maarten

The AWS counterpart of Vault would be SSM by the way, not the secret manager. For Aurora you can use this: https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/UsingWithRDS.IAMDBAuth.html

IAM Database Authentication - Amazon Aurora

Authenticate to your DB instance or cluster using IAM database authentication.

jose.amengual avatar
jose.amengual
Rotating Secrets for Supported Amazon RDS Databases - AWS Secrets Manager

Automatically rotate your Amazon RDS database secrets by using Lambda functions provided by Secrets Manager invoked automatically on a defined schedule.

jose.amengual avatar
jose.amengual

I want to use the automatic secret rotation feature for programmatic access

maarten avatar
maarten

~I think IAM Auth is perfect as alternative there, unless you expect 200 new connections per second.~

jose.amengual avatar
jose.amengual

we are going to use IAM auth for users

jose.amengual avatar
jose.amengual

but for apps we want to rotate the secret

jose.amengual avatar
jose.amengual

I don’t know if IAM auth is recommended for apps

maarten avatar
maarten

I’m thinking it might be a tad slower with connection creation, so I’m not sure. I take it back.

jose.amengual avatar
jose.amengual
We recommend the following when using the MySQL engine:

    Use IAM database authentication as a mechanism for temporary, personal access to databases.

    Use IAM database authentication only for workloads that can be easily retried.

    Don't use IAM database authentication if your application requires more than 200 new connections per second.
jose.amengual avatar
jose.amengual

MySQL Limitations for IAM Database Authentication

jose.amengual avatar
jose.amengual

so for users it’s fine

jose.amengual avatar
jose.amengual

but apps maybe not
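
For reference, the IAM DB auth flow being weighed here looks roughly like this (hostname and user are placeholders; the token stands in for a password and expires after 15 minutes):

# Generate a short-lived auth token and use it as the MySQL password.
TOKEN=$(aws rds generate-db-auth-token \
  --hostname mydb.cluster-abc123.us-east-1.rds.amazonaws.com \
  --port 3306 --username app_user)
mysql --host=mydb.cluster-abc123.us-east-1.rds.amazonaws.com --port=3306 \
  --user=app_user --password="$TOKEN" --enable-cleartext-plugin \
  --ssl-ca=rds-combined-ca-bundle.pem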

raghu avatar

Is there an IAM policy example that denies a whole subnet and just allows one IP from that subnet?

Matt Gowie avatar
Matt Gowie

@raghu What’s the goal you’re trying to accomplish?

Typically IAM policies do not involve network whitelisting / blacklisting. You can restrict certain actions to only be allowed if they’re coming from a certain IP using the SourceIp Condition, but that is pretty uncommon.

If you’re trying to do network-related whitelisting / blacklisting for requests incoming to your application, then there are typically 3 places you should look (in order of preference):

  1. Security Group
  2. VPC NACL
  3. WAF Rules
AWS: Denies Access to AWS Based on the Source IP - AWS Identity and Access Management

Use this IAM policy to deny access to AWS based on the source IP.

raghu avatar

Oops, I meant in a bucket policy

raghu avatar

Absolutely, but I have to add one more condition on top of that. Thank you @Matt Gowie

1

2020-05-12

omerfsen avatar
omerfsen

Hello, it is a bit K8s/AWS EKS related so I don’t know where to post. Anyway, is there a way to make a Pod accept ingress (inbound) traffic only from the ALB Ingress? Since the ALB Ingress will change the source IP address, I can still get traffic to this pod, but I want to simply deny access from other Pods living in the same namespace.

chris avatar

I am interested if you are able to do this… I have been wondering the same thing. I am not setting things up yet for a new project, but was wondering the same things as I was laying out the architecture

omerfsen avatar
omerfsen

Still no luck

omerfsen avatar
omerfsen

The only thing that comes to my mind is separating namespaces

omerfsen avatar
omerfsen

And installing 1 ingress for each ns

omerfsen avatar
omerfsen

And separating ns communication with a NetworkPolicy

chris avatar

Hopefully someone has a simpler answer.

omerfsen avatar
omerfsen

Also, that means a separate external-dns in each ns, as external-dns can only watch 1 ns per deployment

omerfsen avatar
omerfsen

@chris so far nothing

omerfsen avatar
omerfsen

The only answer is istio

chris avatar

Thanks for the update

RB avatar

interesting that one cannot remove the Server header from CloudFront… but at least it no longer shows AmazonS3

https://stackoverflow.com/questions/56710538/cloudfront-and-lambdaedge-remove-response-header

Cloudfront and Lambda@Edge: Remove response header

I am trying to remove some headers from a Cloudfront response using Lambda@Edge on the ViewerResponse event. The origin is an S3 bucket. I have been successful to change the header like this: exp…

2020-05-13

curious deviant avatar
curious deviant

Hello, I am a bit confused about the usage of --queue-url and --endpoint-url when a VPC interface endpoint is created. One of the three private DNS names is exactly the same as the content of the --queue-url, and if I do a dig it indeed resolves to a private IP. What is the purpose of the other two DNS names with vpce-* in them, and when should they be used? One seems region-specific while the other seems AZ-specific.

curious deviant avatar
curious deviant
Interface VPC endpoints (AWS PrivateLink) - Amazon Virtual Private Cloud

An interface VPC endpoint (interface endpoint) enables you to connect to services powered by AWS PrivateLink. These services include some AWS services, services hosted by other AWS customers and Partners in their own VPCs (referred to as endpoint services

Joey avatar

Hey guys, total AWS noob here

Joey avatar

I have a subdomain attached to a Route53 privately hosted zone

Joey avatar

It’s supposed to load a webpage, and I want to be able to give access to certain people. Preferably with a username / password, though just whitelisting individual IP addresses would be doable

Joey avatar

What strategy should I use to go about this?

Darren Cunningham avatar
Darren Cunningham

it depends on the target of your subdomain – are you hosting your content in S3, on an EC2, in Fargate?

Joey avatar

@Darren Cunningham thanks for responding! I’m using EC2

Joey avatar

I was able to get my subdomain to work on a public hosted zone, but I want to restrict access to it. So I thought that moving to a private hosted zone was the move, and idk where to go from here

Darren Cunningham avatar
Darren Cunningham

private hosted zone just means that your zone would only resolve within the VPC – that’s not what you want

nileshsharma.0311 avatar
nileshsharma.0311

@Darren Cunningham use a security group , restrict access to port 443 or 80 , whatever port the webserver is running on

nileshsharma.0311 avatar
nileshsharma.0311

For username and password, you might use nginx as a proxy; whitelist the IPs of the users in the security group

Matt Gowie avatar
Matt Gowie

@Joey — Sounds like a combo of putting Basic Auth in front of your app + updating the Security Group inbound rules for your Application’s SG would be what you want.
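
The security group half of that is a one-liner per allowed address (the group ID and IP are placeholders); basic auth would then live in the app or in an nginx proxy in front of it:

# Allow HTTPS in from a single whitelisted address only.
aws ec2 authorize-security-group-ingress --group-id sg-0123456789abcdef0 \
  --protocol tcp --port 443 --cidr 203.0.113.4/32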

Joey avatar

Is it possible to do this without updating the app?

Joey avatar

Would the security group be enough?

Darren Cunningham avatar
Darren Cunningham

basic auth would require updating the app

security group is “enough” if you’re ok with it

Joey avatar

Also, how do I find the security group? Is it tied to the VPC?

Joey avatar

yeah I’m okay with just having the security group to allow it

Matt Gowie avatar
Matt Gowie

Your security group is tied to your EC2 instance.

Darren Cunningham avatar
Darren Cunningham

it would be easier to find by going to the EC2 instance and seeing which SGs are allocated specifically to it

Darren Cunningham avatar
Darren Cunningham

if you go to VPC you’re going to find ALL of them in your VPC/Account

Matt Gowie avatar
Matt Gowie

And @Darren Cunningham is correct — you don’t want a private hosted zone. That means your DNS entry for your site will only be visible inside your VPC, which means the public won’t be able to access it. You want a public hosted zone.

Joey avatar

@Matt Gowie okay, but is there a way for me to just choose who accesses the VPC?

Matt Gowie avatar
Matt Gowie

The VPC as a whole? You can do that via your VPC’s NACL (Network Access Control Lists) but you likely just want to do that via Security Groups.

Joey avatar

Yeah right now the NACL says “All Traffic” but I still can’t load the domain on my computer

Joey avatar

so maybe it is best that I just make it a public hosted zone with a security group, thanks

Matt Gowie avatar
Matt Gowie

Yeah, go with a public hosted zone and check your security group’s inbound rules. Likely your NACL is not the issue. The default rules are typically user friendly.

Joey avatar

When I do this, would I be able to have the security groups recognize a username/password? Or do I just simply whitelist specific IP addresses?

Matt Gowie avatar
Matt Gowie

Security Groups purely deal with the network, so they cannot be configured to help with username/pass. Username / password needs to happen at layer 7, so on a proxy or at the application layer.

Joey avatar

@Matt Gowie so I’ve been trying a lot of combinations of security groups the past 2 hours.

This works:

Type - All Traffic
Protocol - All
Port range - All
Destination - 0.0.0.0/0

This doesn’t work:

Type - All Traffic
Protocol - All
Port range - All
Destination - <my ip address>/32

& I don’t know why

Joey avatar

(for outbound rules)

Joey avatar

I’m ready to give up and just create security in the app. Do you have any idea why inserting my personal IP address doesn’t seem to work though?

Matt Gowie avatar
Matt Gowie

The second one should work. How are you finding your IP?

Joey avatar

I select “My IP” from the dropdown

Matt Gowie avatar
Matt Gowie

Ah. Interesting. And your request times out?

Joey avatar

granted, this is an ELB security group created from Kubernetes

Joey avatar

if that at all makes a difference

Matt Gowie avatar
Matt Gowie

Ah if you’re in K8s land then I’m unsure.

Joey avatar

gotcha I’ll post on that channel

Joey avatar

thanks for the help!

raghu avatar

Hi guys, is there any better auto-remediation tool you are using for AWS security, like remediating ports that are open globally etc.? We are currently using Prisma Cloud, but it’s currently just detecting and sending us notifications; auto-remediations are not ready yet. I am planning to write my own scripts in the meantime to remediate them, but my scripts call the boto3 API and need to be called every 30 mins or however I define it. I was thinking I could log the actual real-time data and auto-remediate whenever it detects an alert. Any input on how to do it?

David Scott avatar
David Scott

What about AWS Config? It supports auto-remediation.

1

2020-05-14

Zach avatar

Anyone have experience with RDS IAM connection throttling?

1
sheldonh avatar
sheldonh

Pray tell. That sounds like an interesting story

Zach avatar
Zach
01:24:58 AM

¯\_(ツ)_/¯

sheldonh avatar
sheldonh

We don’t need any of that newfangled pooled connections

Zach avatar

We were looking into RDS password rotation and the IAM auth looked appealing; I had heard good things about it, but was cautioned “be careful not to get throttled” because, supposedly, they will deny connections for 10-15 minutes

Zach avatar

read the docs, tested, warned my teams “make sure you don’t blow through the connection rate”

Zach avatar

Now we have a couple of cases where we suspect we are being throttled, and I have a support ticket open … but AWS is incredibly vague about this

Zach avatar

the postgres logs, once we enabled IAM Auth, are filled with all sorts of nonsense that’s hard to decipher. There’s no single error that says “stop connecting so fast” so we don’t know what we’re looking for

sheldonh avatar
sheldonh

Is there any RDS metric you can pull that might shed light?

Zach avatar

Nope

Zach avatar

The docs on it just give you some very rough guidelines. It works great though - until you hit the wall. At least we think we’re hitting the wall, we can’t tell for sure.

vFondevilla avatar
vFondevilla

omg, I was thinking about enabling that in the near future. This interests me.

Darren Cunningham avatar
Darren Cunningham

same, please update the thread when you figure out what’s going on

Zach avatar

I strongly suspect it was one application in particular causing the issue, because it was not using pooled connections, and not even using persistent connections. I’m just waiting on AWS support to tell me that yes, those errors in the PG logs are “IAM throttling”

Darren Cunningham avatar
Darren Cunningham

it’s silly that there’s not a metric that you could be monitoring

Zach avatar

I agree … and I will be commenting about that to our support rep. I suspect they’re obfuscating it for “security”

omerfsen avatar
omerfsen

There is a postgresql query to show active connections

omerfsen avatar
omerfsen
Right query to get the current number of connections in a PostgreSQL DB

Which of the following two is more accurate? select numbackends from pg_stat_database; select count(*) from pg_stat_activity;

Zach avatar

Yah I can see the current connections no problem - but those are the ones already established. RDS exposes that as a metric directly.

Zach avatar

The issue that I cannot see how many connections are being attempted through RDS IAM - which has a hidden max rate of connections per second

Zach avatar

I could be well under the max connections the DB will allow, but if I try to create those too fast, IAM will throttle the entire RDS instance. That is bad enough for a single application, but if you’re being economical and have several apps with databases on a single instance, it’s really bad

Pierre Humberdroz avatar
Pierre Humberdroz

also interested in this

omerfsen avatar
omerfsen

@Zach you can maybe use VPC Flow Logs for that. You can query TCP SYN packets coming to RDS at a 5-minute or 1-minute interval.

omerfsen avatar
omerfsen
VPC Flow Logs - Amazon Virtual Private Cloud

Create a flow log to capture information about IP traffic going to and from your network interfaces.

omerfsen avatar
omerfsen
3 vpc-abcdefab012345678 subnet-aaaaaaaa012345678 i-01234567890123456 eni-1235b8ca123456789 123456789010 IPv4 52.213.180.42 10.0.0.62 43418 5001 52.213.180.42 10.0.0.62 6 100701 70 1566848875 1566848933 ACCEPT 2 OK 
omerfsen avatar
omerfsen

“ACCEPT 2 OK”: the 2 means SYN here.

omerfsen avatar
omerfsen

SELECT SUM(numpackets) AS packetcount, destinationaddress FROM vpc_flow_logs WHERE destinationport = 5432 AND protocol = 6 AND "tcp-flags" = 2 AND date > current_date - interval '5' minute GROUP BY destinationaddress ORDER BY packetcount DESC

You may need to create the table as in https://github.com/awsdocs/amazon-athena-user-guide/blob/master/doc_source/vpc-flow-logs.md, including tcp-flags

awsdocs/amazon-athena-user-guide

The open source version of the Amazon Athena documentation. To submit feedback & requests for changes, submit issues in this repository, or make proposed changes & submit a pull request. - …

2020-05-15

2020-05-17

2020-05-18

Maciek Strömich avatar
Maciek Strömich

hey, has anyone experienced AWS Fargate being unable to resolve hostnames? After migrating I’ve noticed that for some number of requests Django is not able to resolve the RDS hostname, which is ultra worrying.

Padarn avatar

Hi all. I’m trying to understand how to replace the use of security groups when using transit gateway

Padarn avatar

I’m used to using VPC peering, where peered security groups can be referenced, but I’m not clear on how this is meant to work if using TGW

jose.amengual avatar
jose.amengual

in TGW you still need subnets and IPs AFAIK

jose.amengual avatar
jose.amengual

the only case when this is not true is when you use Shared VPCs ( which we are using)

jose.amengual avatar
jose.amengual

then you use SGids

Padarn avatar

Got it. I’ve set it up with CIDR.. does the trick, just a little less secure

Padarn avatar

I wonder how much you need to know about Amazon’s internals to understand why they decided to do it this way

jose.amengual avatar
jose.amengual

well I guess their mistakes keep me employed

Padarn avatar

and keep the AWS wheels turning

jose.amengual avatar
jose.amengual

lol

Padarn avatar

my company has just decided to hard pivot towards Azure

Padarn avatar

no good reason, but lots of work to do

2020-05-19

RB avatar

any reason why a route53 record would be pointed at an elastic IP of an EC2 instance instead of creating a load balancer and pointing the r53 record at it?

maarten avatar
maarten

It’s much cheaper.

1
RB avatar

if you have an instance with an EIP associated to a route53 record and the instance is killed, how do you make sure the route53 record is also removed so no one gets hold of the EIP and takes over the subdomain?

maarten avatar
maarten

You need to see the EIP as reserved for the AWS account, no-one else can use it unless you explicitly release the EIP.

Matt Gowie avatar
Matt Gowie

The lifecycle of an EIP is not dependent on the instance it is associated to, AFAIK. You retain that EIP even if the associated instance dies.

Matt Gowie avatar
Matt Gowie

Oh yes — You gotta reserve it though

RB avatar

we release our EIPs

RB avatar

sounds like we gotta write up a script for this

maarten avatar
maarten

@RB Just don’t release your EIP’s, enough IPV4 for everyone

this1
1
RB avatar

that’s true. hmm. i thought we had to release the eips due to cost.

RB avatar

from https://aws.amazon.com/ec2/pricing/on-demand/, i suppose the cost is pretty low

EC2 Instance Pricing – Amazon Web Services (AWS)

Amazon EC2 pricing is based on instance types and the region in which your instances are running. There is no minimum fee and you only pay for what you use.

maarten avatar
maarten

Yes, I mean, if you have 10 unused EIP’s the whole time, you might want to clean up

maarten avatar
maarten

otherwise it’s not the hassle

roth.andy avatar
roth.andy

reserved EIPs are not very expensive. Much cheaper than a LoadBalancer. If you only have 1 instance that is sitting behind the R53 record, then pointing at the EIP is a reasonable approach

2020-05-20

James Woolfenden avatar
James Woolfenden

question: say you wanted to create a static site in AWS and you’re not allowed to use an S3 bucket; besides EC2, what are your options? i tried crying, which didn’t get me anywhere

Matt Gowie avatar
Matt Gowie

@James Woolfenden AWS Amplify.

Matt Gowie avatar
Matt Gowie

Really great option. Comes with CI / CD, ENV var support, building pipelines, Git integration, etc etc etc. It’s the best.

James Woolfenden avatar
James Woolfenden

yeah that could do it, it was supposed to be typescript serverless cognito with app lambdas and api gateway

James Woolfenden avatar
James Woolfenden

type of thing

randomy avatar
randomy

doesn’t amplify use s3 under the hood?

randomy avatar
randomy

(if not, where does it store html files?)

Matt Gowie avatar
Matt Gowie


yeah that could do it, it was supposed to be typescript serverless congnito with app lambdas and api gateway
Haha this doesn’t sound static.

Matt Gowie avatar
Matt Gowie

@randomy Under the hood I believe it is using S3. But it isn’t using S3 directly troll

1
randomy avatar
randomy

that’s cheating

randomy avatar
randomy

you could use cloudfront + lambda@edge to serve static files. you’ll probably cry more though.

randomy avatar
randomy


not allowed to use an s3 bucket
this is crazy

1
Matt Gowie avatar
Matt Gowie

Yeah — I feel like trying to host static sites without S3 in some way is going to be painful. Maybe the right question to ask is: Where is the requirement against / opposition to S3 coming from?

3
Matt Gowie avatar
Matt Gowie

You could use DynamoDB to store the contents of each file in a separate key, then create a Docker container and host it on ECS. Then for each page request you pull the corresponding DynamoDB HTML content and serve that.

You could do that

1
Matt Gowie avatar
Matt Gowie

The above is obviously just a poor joke. Not trying to poke fun too much.

roth.andy avatar
roth.andy

You could chuck the site in a container and run it on Fargate

Mike F. avatar
Mike F.

Sheesh. If I had to… probably API Gateway with Lambda@Edge with static files.. not in S3? It’s all gonna be more than a bit hacky though

James Woolfenden avatar
James Woolfenden

amplify might work; customer suggests using azure

roth.andy avatar
roth.andy
Static website hosting in Azure Storage

Azure Storage static website hosting, providing a cost-effective, scalable solution for hosting modern web applications.

James Woolfenden avatar
James Woolfenden

also fargate

James Woolfenden avatar
James Woolfenden

thanks @roth.andy i was hoping not to go multi cloud on one small website

James Woolfenden avatar
James Woolfenden

what would i know…

randomy avatar
randomy

github pages?

James Woolfenden avatar
James Woolfenden

would be easy enough, aye

keen avatar


Where is the requirement against / opposition to S3 coming from?
well, if this webpage were the landing page for a Facebook app, for example, S3 hosting doesn’t work, because even for a nice flat static page it has to answer a POST request on /…..

keen avatar

(no idea if that still applies, I haven’t tried to burn that bridge since the last facebook game I worked on deployed in ‘16..)

James Woolfenden avatar
James Woolfenden

god no, this is the enterprise. Security says no to S3 buckets, yes to Azure Blobs.

Matt Gowie avatar
Matt Gowie

That doesn’t make any sense.

this1
randomy avatar
randomy

Fire your security team

randomy avatar
randomy

(Just kidding of course, but that seems very strange and I can’t imagine why a company would be OK with AWS but not S3)

this2
James Woolfenden avatar
James Woolfenden

@randomy quite. some eejit made a blanket policy that S3 buckets can’t be public, and now that policy is baked into every account.

Matt Gowie avatar
Matt Gowie

Ah that connects the dots. Still not a smart policy, but makes sense.

Amplify would get around that for you. The bucket and everything underneath Amplify are hidden from everyone, so I don’t believe you would run into that account restriction AFAIK.

Kevin Doveton avatar
Kevin Doveton

I’d just put cloudfront in front of a private s3 bucket

2
sheldonh avatar
sheldonh

Netlify!

sheldonh avatar
sheldonh

I will say I’m really impressed with Netlify just for getting up and running. They can take you from zero all the way up to a running, full static website solution in minutes. If you’re using a static site generator, like I am with Hugo for blogging, you can even add Forestry into the mix and have a nice web editor that is all CI/CD. To do most of what Netlify does yourself would be an exorbitant amount of effort, in my opinion. You can add serverless, CI/CD, and more for free; you only need to pay at the more commercial-grade tier.

sheldonh avatar
sheldonh

And you can redirect your GitHub Pages to Netlify with a simple file in the repo too. Definitely take a look at it. They even have deploy templates with the full stack ready.

James Woolfenden avatar
James Woolfenden

@Kevin Doveton you can’t match a private bucket with cloudfront

Kevin Doveton avatar
Kevin Doveton
Restricting Access to Amazon S3 Content by Using an Origin Access Identity - Amazon CloudFront

Restrict access to your Amazon S3 content with an Amazon CloudFront origin access identity (OAI).

James Woolfenden avatar
James Woolfenden

I’ll have to stomach a static site in Azure; it’s not ideal but it’s “ok” with security

James Woolfenden avatar
James Woolfenden

i can’t actually make the bucket do that

James Woolfenden avatar
James Woolfenden

i know in effect it will make it private enough; besides, this is supposed to be a website, so the content is actually public

James Woolfenden avatar
James Woolfenden

thanks for all your input, hoping for someone to see sense today

sheldonh avatar
sheldonh

Still worth checking out Netlify if possible. It makes static site stuff a breeze. It’s kinda like jumping into a first-class product for that. The only reason I wouldn’t use it is if I had to do some internal-to-the-box static site or something.

Kevin Chan avatar
Kevin Chan

Does anyone here have some good links for ECS in Terraform? It seems like terraform-ecs-alb-service-task is deleting the task definitions. I’m coming from the CloudFormation/troposphere world, where I updated the task definition and it persisted, and then through automation I updated the service to point at the latest task definition.

Aleksandr Fofanov avatar
Aleksandr Fofanov

terraform-ecs-alb-service-task is a good module. There are also good modules from Airship - https://airship.tf/getting_started/airship.html Regarding deletion of task definitions: it doesn’t delete them. You can’t change a task definition itself; if you want to make changes, you create a new revision of the task definition with the changes applied and then update your ECS service to this new revision. So terraform’s aws_ecs_task_definition resource does exactly that: it creates a new revision and deactivates the previous one if you make changes. In the plan you’ll see that it will destroy (deactivate) the existing task def resource (revision) and create a new one (revision).

Airship Modules | Airship Modules

Flexible Terraform templates help setting up your Docker Orchestration platform, resources 100% supported by Amazon

Aleksandr Fofanov avatar
Aleksandr Fofanov

There is also a flag ignore_changes_task_definition in the terraform-aws-ecs-alb-service-task module that allows you to ignore external changes to the task def applied to the service, in case you modify the task definition externally (e.g. changing the image tag in CI/CD pipelines) and don’t want terraform to revert those changes on each apply

Kevin Chan avatar
Kevin Chan

Maybe delete was the wrong word. But from my understanding, inactive task definitions are not allowed to be referenced by new services, so it’s almost as useless in my mind. Keeping a history of active task definitions allowed me to “roll back” bad deployments quickly, but updating a container version and updating the task again works as well. I guess I’m just a bit confused about how to use terraform for ECS.

Aleksandr Fofanov avatar
Aleksandr Fofanov

How do you deploy new images to ECS? From CI/CD pipelines or manually? Do you use any CLI tool for deployment?

Kevin Chan avatar
Kevin Chan

In the past I queried the CFT, updated the version number using sed, and applied the new change set.

Aleksandr Fofanov avatar
Aleksandr Fofanov

You could use https://github.com/fabfuel/ecs-deploy for example to deploy from CI/CD pipelines. There is an option to prevent task def de-registration when you do a deployment if that works for you.

fabfuel/ecs-deploy

Powerful CLI tool to simplify Amazon ECS deployments, rollbacks & scaling - fabfuel/ecs-deploy

Kevin Chan avatar
Kevin Chan

Cool, will check it out. Was hoping there was a pure TF solution, but poking around Google doesn’t turn up the solution I’m used to from the CFT world.

Kevin Chan avatar
Kevin Chan

So you just use terraform to provision the infra and make changes to app versions using tools?

Aleksandr Fofanov avatar
Aleksandr Fofanov

Yes. This is just one way of doing deployments to ECS.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Yes, pretty much. At least that’s what we mostly see. Agree that a pure terraform approach would be nice, but the experience is more pleasant with some of the other tools for deployment.

1
Kevin Chan avatar
Kevin Chan

Thanks I’ll look in to the tools

jose.amengual avatar
jose.amengual

terraform is not really a deployment tool

jose.amengual avatar
jose.amengual

for code/containers

jose.amengual avatar
jose.amengual

in my opinion

2020-05-21

Igor avatar

We’re currently just using the awscli with CircleCI to push ECS deployments, but it seems very limited: no notifications out of the box, no support for rollbacks. And then of course it messes with Terraform. I don’t want to ignore changes to task definitions, because we sometimes change the environment variables/secrets associated with the task. I wish there was a better way.

Matt Gowie avatar
Matt Gowie

@Igor I think my suggestion to you would be to steer clear of managing env vars / secrets in the Task Definition itself. It’s painful and causes issues like this.

I suggest storing env vars + secrets in SSM Parameter Store and then using Chamber in your Dockerfile to pull those in at runtime. That combo is really nice.

https://aws.amazon.com/blogs/mt/the-right-way-to-store-secrets-using-parameter-store/

segmentio/chamber

CLI for managing secrets. Contribute to segmentio/chamber development by creating an account on GitHub.

The Right Way to Store Secrets using Parameter Store | Amazon Web Servicesattachment image

This guest post was written by Evan Johnson, who works in the Security team at Segment. The way companies manage application secrets is critical. Even today, the most high profile security companies can suffer breaches from improper secrets management practices. Having internet facing credentials is like leaving your house key under a doormat that millions […]
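
A minimal sketch of that chamber pattern (the service name and command are hypothetical; chamber reads the SSM parameters under the service’s path and exports them as env vars before exec’ing the app):

# Dockerfile side, multi-stage: copy the chamber binary into the image, e.g.
#   COPY --from=segment/chamber:2 /chamber /bin/chamber
# Entrypoint side: run the app with the service's SSM params in its environment.
exec chamber exec my-service -- node server.js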

Igor avatar

I like the idea of using chamber to manage SSM Param Store, but I don’t like the idea of embedding chamber into the image.

Matt Gowie avatar
Matt Gowie

It’s actually ultra lightweight since it’s just a Go binary, and with Docker multi-stage builds it’s super easy to pull in without many changes.

It also has the huge benefit that it works with the ECS task role to pull the data from SSM / decrypt with KMS.

Igor avatar

ECS does the same with secrets, but adding/removing/renaming secrets does require a new task to be created

Igor avatar

Has anyone had luck with using CodeDeploy?

Kevin Chan avatar
Kevin Chan

playing with that now. https://docs.aws.amazon.com/AmazonECS/latest/developerguide/create-blue-green.html It beats completely overwriting my task definition and not having a trail in the family, even though inactive tasks count against you on the quotas - https://docs.aws.amazon.com/AmazonECS/latest/developerguide/service-quotas.html

Tutorial: Creating a Service Using a Blue/Green Deployment - Amazon Elastic Container Service

Amazon ECS has integrated blue/green deployments into the Create Service wizard on the Amazon ECS console. For more information, see .

Amazon ECS Service Quotas - Amazon Elastic Container Service

The following table provides the default service quotas, also referred to as limits, for Amazon ECS for an AWS account which can be changed. For more information on the service limits for other AWS services that you can use with Amazon ECS, such as Elastic Load Balancing and Auto Scaling, see

Kevin Chan avatar
Kevin Chan

Most of my deploys with ECS have been: update the task def, tell the service to use the latest, monitor the performance (which took time), and roll back if needed. Having blue/green via CodeDeploy now seems amazing.

Igor avatar

I remember seeing a detailed article of the issues someone ran into with codedeploy blue/green

Igor avatar

That scared me away from it at the time

Igor avatar
What I learned using AWS ECS Fargate Blue/Green Deployment with CodeDeploy

If you’re considering using Blue/Green deployments for ECS services and want to learn about the CodeDeploy tradeoffs, read on.

Milosb avatar

I used it in the past for EC2 instances and it worked pretty well. The only thing I didn’t like was the missing option to specify script arguments in appspec.yaml, so I ended up making wrappers around deploy scripts that were already fully functional by themselves.

Milosb avatar

and the last agent release is from Dec 6, 2018

RB avatar

how do you all automatically give ssh creds safely to everyone ?

bradym avatar

I use ansible to create their accounts and grab their public keys from github. Then they can login using the same keys they use for pushing code. Just add .keys to a user’s github profile URL and you get their public keys.

10001
1
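
That trick works outside Ansible too, since it is just an HTTP endpoint (username and local user are placeholders):

# GitHub serves a user's public SSH keys as plain text at /<username>.keys.
curl -fsSL https://github.com/some-user.keys >> /home/some-user/.ssh/authorized_keys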
bradym avatar

Definitely not aws-centric, but it’s easy.

RB avatar

ah cool. we don’t use ansible, we use puppet and are trying to migrate off it. we were considering doing something similar with ssm, or using ec2 instance connect.

bradym avatar

If I was starting now I’d probably look at ssm / ec2 instance connect first, but we’re pretty happy with ansible. Also, at this point we don’t have much that’s not in k8s so our ansible usage is pretty limited these days.

bradym avatar

I should also mention, gitlab also gives you user public keys if you add .keys to their profile page.

Matt Gowie avatar
Matt Gowie

Migrated away from bastions / ssh for client projects and only use ssm-agent / Session Manager. Got tired of managing client SSH keys. Session manager + gossm is a really nice combo.

https://github.com/masterpointio/terraform-aws-ssm-agent

masterpointio/terraform-aws-ssm-agent

Terraform module to create an autoscaled SSM Agent instance. - masterpointio/terraform-aws-ssm-agent

1
Matt Gowie avatar
Matt Gowie
gjbae1212/gossm

Interactive CLI tool that you can connect to ec2 using commands same as start-session, ssh in AWS SSM Session Manager - gjbae1212/gossm

1
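
Under the hood gossm drives the same Session Manager API as the raw CLI (the instance ID is a placeholder; the session-manager-plugin must be installed locally):

# Open an interactive shell on the instance with no SSH and no open inbound ports.
aws ssm start-session --target i-0123456789abcdef0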
Mike F. avatar
Mike F.

What @Matt Gowie showed is a really slick implementation. I like it!

this1
RB avatar

using that setup @Matt Gowie what does the ssm agent accomplish ?

RB avatar

seems interesting but unsure how it all works

Matt Gowie avatar
Matt Gowie

It’s a single instance in your ~account~ VPC that is dedicated to being your Session Manager instance. I have a couple of Fargate-only projects, so that small instance acts as the “bastion” into the private network since I don’t have other instances.

Matt Gowie avatar
Matt Gowie

That SSM Agent instance is also setup for session logging to S3 / CloudWatch, so sessions are auditable.

Matt Gowie avatar
Matt Gowie
Time to ditch your Bastion hostattachment image

Time to ditch your Bastion host and change over to using AWS SSM Session Manager

1
msharma24 avatar
msharma24
SSH Key Manager

Userify is a lightweight SSH key manager for the cloud.

Zach avatar

Speaking of gossm: we were moving everyone over to that, anticipating disabling SSH access once done. But now a bunch of people have complained that their terminals are all sorts of f’d up when connected to the remote hosts. And I’ve noticed it too: you get weird behavior if you try to cycle through your shell history, or even when using page up/down in vi or less while looking at a file. Has anyone else seen that, and is there a solution? We can’t tell if the issue is SSM or gossm

Zach avatar

(we just disabled logging in dev and the behavior went away so maybe this is AWS’ fault)

Matt Gowie avatar
Matt Gowie

Huh — I haven’t seen that. I’ll keep an eye out… that doesn’t sound great.

Zach avatar

It may be our config, we’re opening a support ticket to see if we missed something

1
Matt Gowie avatar
Matt Gowie

Post back here if you find out more… interested if I’m going to have clients run into that issue.

Kevin Chan avatar
Kevin Chan
Set up EC2 Instance Connect - Amazon Elastic Compute Cloud

Amazon Linux 2 2.0.20190618 or later and Ubuntu 20.04 or later comes preconfigured with EC2 Instance Connect. For other supported Linux distributions, you must set up Instance Connect for every instance that will support using Instance Connect. This is a one-time requirement for each instance.

RB avatar

i was wondering about that. what are your thoughts on it ?

Kevin Chan avatar
Kevin Chan

Its pretty good so far. I don’t have to deal with passing aroudn ssh

Zach avatar

@Matt Gowie my colleague found that the issue goes away if you resize the terminal window, so he wrote a little script that we’re adding to the default bash profile on the AMIs we build:

resize() {
  old=$(stty -g)                  # save current terminal settings
  stty -echo                      # suppress echo of the control response
  printf '\033[18t'               # ask the terminal to report its size
  IFS=';' read -d t _ rows cols _ # parse the "rows;cols t" reply
  stty "$old"                     # restore the saved settings
  stty cols "$cols" rows "$rows"  # tell the tty layer the real size
}
resize
Matt Gowie avatar
Matt Gowie

@Zach — Ah interesting… I may add that as a note to that small TF module I’ve got to give others a heads up. Thanks for providing your resolution.

Zach avatar

Further follow-up: AWS Support was able to replicate the issue, and it seems to be a bug between the SSM session logging and the SSM KMS encryption options. When both are turned on, terminal sessions get mangled when using anything that can scroll the shell. They’re looking into whether there’s a bug on their end. For now the resize function above is an OK workaround

1

2020-05-22

2020-05-23

2020-05-25

Karoline Pauls avatar
Karoline Pauls

Am I right that Secrets Manager is somewhat limited to showing only 2 secret versions at once? Is it possible to view the values of all past secret versions?
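(For reference: the console only surfaces the staged versions, AWSCURRENT and AWSPREVIOUS, but the CLI can list older versions and fetch them by version ID while they still exist; unlabeled versions are marked deprecated and eventually purged. Secret name below is hypothetical:)

# list every version ID, including deprecated ones
aws secretsmanager list-secret-version-ids --secret-id my-secret --include-deprecated

# fetch the value of a specific old version from the list above
aws secretsmanager get-secret-value --secret-id my-secret --version-id <version-id-from-the-list>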

2020-05-26

Victor Grenu avatar
Victor Grenu

Hi folks, lately I’ve updated to v0.3 a pet project that pushes notifications when you’ve left instances running on the most expensive AWS services (EC2, RDS, Glue Dev Endpoints, SageMaker Notebook Instances, Redshift Clusters).

I hope you can give it a try and send me some feedback about features, bugs, or architecture.

It can be a side-car to AWS’s Instance Scheduler, and it’s easier than Cloud Custodian - purely informative and dead simple.

Let me know if it makes sense, or if you have any idea to improve this.

Instance Watcher 

Introduction

AWS Instance Watcher will send you, once a day, a recap notification with the list of running instances across all AWS regions for a given AWS account. Useful for non-prod, lab/training, sandbox, or personal AWS accounts, to get a kind reminder of what you’ve left running.

Currently, it covers the following AWS services:

• EC2 Instances

• RDS Instances

• SageMaker Notebook Instances

• Glue Development Endpoints

• Redshift Clusters

Notifications could be:

• Slack Message

• Microsoft Teams Message

• Email

https://github.com/z0ph/instance-watcher

Features

• Customizable Cron Schedule

• Whitelisting capabilities

• Month to Date (MTD) Spending

• Slack Notifications (Optional)

• Microsoft Teams Notifications (Optional)

• Emails Notifications (Optional)

• Serverless Architecture

• Automated deployment using IaC

z0ph/instance-watcher

Get notified for Instances left running across all AWS regions for specific AWS Account - z0ph/instance-watcher

Matt Gowie avatar
Matt Gowie

Hey @Victor Grenu — This is cool stuff! I built something similar for test / experiment accounts that nukes the account using aws-nuke: https://github.com/masterpointio/terraform-aws-nuke-bomber

I’m sure the combo of the two could really ensure the user has a cheap AWS account

masterpointio/terraform-aws-nuke-bomber

A Terraform module to create a bomber which nukes your cloud environment on a schedule - masterpointio/terraform-aws-nuke-bomber

1
Victor Grenu avatar
Victor Grenu

Great, sounds interesting, will take a look.

Karoline Pauls avatar
Karoline Pauls
filter @logStream like /doesnt-exist/
| stats count(*) by @logStream

This CloudWatch Insights query walks through all log streams in the selected log group, even though it could exclude log streams by name…

is it really that inefficient? There must be a way.

2020-05-28

RB avatar

how does one create a vpc with both private and public subnets ? our vpc has a public cidr but i cannot add a private cidr to it in order to add private subnets

Zach avatar

Your VPC has a single CIDR range. You carve out blocks within that for the subnet ranges

RB avatar

oh i thought you could add multiple cidr ranges to a vpc

Matt Gowie avatar
Matt Gowie

@RB Since you know Terraform — I’d check out the CP VPC + Dynamic Subnets modules as a good example.

https://github.com/cloudposse/terraform-aws-vpc https://github.com/cloudposse/terraform-aws-dynamic-subnets

Zach avatar

Uhh yah I think that’s a ‘thing’ but it’s for a particular use-case

RB avatar

thanks gowiem, ill check those out

RB avatar

zach, i was looking at this yesterday https://docs.aws.amazon.com/vpc/latest/userguide/VPC_Scenario2.html but was seeing errors when adding the cidrs

RB avatar

ill try the terraform approach. it might be easier

RB avatar

seems like the terraform example also only allows for a single cidr block per vpc

Zach avatar

Right, that page is illustrating what I described though. Their example is a VPC with CIDR 10.0.0.0/16; the public subnet is 10.0.0.0/24 and the private subnet is 10.0.1.0/24. The subnets are formed within the VPC CIDR

1
RB avatar

so it might be easier to create a new vpc for private subnets ?

Zach avatar

I think you are misunderstanding how the subnets work

Zach avatar

Note that in the diagrams from the AWS docs that the subnets are drawn within the vpc

Zach avatar

What you’re talking about with multiple CIDR blocks on the VPC itself is more related to this - https://aws.amazon.com/about-aws/whats-new/2017/08/amazon-virtual-private-cloud-vpc-now-allows-customers-to-expand-their-existing-vpcs/

vFondevilla avatar
vFondevilla

You should be using a private CIDR in the VPC, the only distinction between a public subnet and a private subnet is the gateway. You can use multiple CIDRs in the same vpc but it’s only for certain use-cases.

jose.amengual avatar
jose.amengual

stay away from adding multiple CIDR to a VPC it is a nightmare to maintain and debug and it is in some ways not the “proper” way to do networking

RB avatar

ah ok, thanks everyone. it sounds like i can keep the same vpc cidr, create a new subnet with a smaller cidr mask, and to make it private just change (or remove) the route to the internet gateway
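(A minimal CLI sketch of that, assuming the 10.0.0.0/16 VPC from the example above; all resource IDs are hypothetical:)

# two subnets carved out of the VPC's 10.0.0.0/16
aws ec2 create-subnet --vpc-id vpc-0abc --cidr-block 10.0.0.0/24   # public
aws ec2 create-subnet --vpc-id vpc-0abc --cidr-block 10.0.1.0/24   # private

# public route table: default route to the internet gateway
aws ec2 create-route --route-table-id rtb-0pub --destination-cidr-block 0.0.0.0/0 --gateway-id igw-0abc

# private route table: default route to a NAT gateway (or no default route at all)
aws ec2 create-route --route-table-id rtb-0priv --destination-cidr-block 0.0.0.0/0 --nat-gateway-id nat-0abc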

jose.amengual avatar
jose.amengual

by not the “proper” way I mean hacky, short-cutting, etc.

jose.amengual avatar
jose.amengual

the cloudposse module for dynamic subnets uses the cidrsubnet function to do the subnetting

cidrsubnet - Functions - Configuration Language - Terraform by HashiCorp

The cidrsubnet function calculates a subnet address within a given IP network address prefix.

party_parrot1
jose.amengual avatar
jose.amengual

it does it automatically for you ( +- some inputs)
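(For a concrete sense of what cidrsubnet does, carving /24s out of the example 10.0.0.0/16 looks like this in terraform console:)

$ terraform console
> cidrsubnet("10.0.0.0/16", 8, 0)
"10.0.0.0/24"
> cidrsubnet("10.0.0.0/16", 8, 1)
"10.0.1.0/24"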

loren avatar

we do multiple cidrs on a vpc now. haven’t had any problems yet… it was surprisingly easy. we create the vpc first with a cidr sized to just the private subnets, routed to a central natgw via the transit gateway. then, if the account needs public ingress, we attach another cidr to the vpc, add an igw, carve new subnets out of the new cidr, and create a route table for the new subnets that sets the default gw to the igw

Eric Berg avatar
Eric Berg

I’m having problems setting up a logging account and assigning buckets for S3 logging to buckets in a central account.

I’m setting up logging for our multiple AWS accounts, which are in AWS Organizations. We use a separate account for each client’s infrastructure (all accounts are basically the same…SaaS). I’m trying to set up a logging account and send all logs there, including CloudTrail, load balancer, and bucket logging.

Right now, I’m working on the bucket logging, and it appears from this page (https://docs.aws.amazon.com/AmazonS3/latest/dev/ServerLogs.html) that the source and target buckets must be in the same account, which completely negates any thoughts of sharing log buckets across accounts in the organization.

I have a bucket policy that does grant access to the other accounts in the organization, with this:

{
  "Sid": "AllowPutObject",
  "Effect": "Allow",
  "Principal": "*",
  "Action": [
    "s3:PutObject",
    "s3:GetObjectAcl"
  ],
  "Resource": [
    "arn:aws:s3:::${local.log_bucket_name_east_2}",
    "arn:aws:s3:::${local.log_bucket_name_east_2}/*"
  ],
  "Condition": {
    "StringEquals": {
      "aws:PrincipalOrgID": [
        "o-7cw0yb5uv3"
      ]
    }
  }
},

but, though I can write to that bucket from any of the other accounts in the org, when I try to assign logging to those buckets (which are in the separate logging account), I get this error:

Error: Error putting S3 logging: InvalidTargetBucketForLogging: The owner for the bucket to be logged and the target bucket must be the same.
        status code: 400, request id: FEDF21FAA89472BD, host id: v+LjsDLH+EpNXzcfjxVYiNDLH1HOhY5NpHFbPwGTpketFpeEyUVQxp+7DCCNE9ZDy/vdNUd8+Pg=
Amazon S3 server access logging - Amazon Simple Storage Service

Discusses enabling Amazon S3 server access logging to track requests for access to your S3 buckets.

Carlos R. avatar
Carlos R.

I had the same issue. AFAIK, it is a limitation of that feature. Basically, you can only define a target bucket that is in the same region and account.

If you want to centralize the logs, you have to implement a custom solution.

Amazon S3 server access logging - Amazon Simple Storage Service

Discusses enabling Amazon S3 server access logging to track requests for access to your S3 buckets.
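(One common workaround - a sketch only, not necessarily what Carlos meant, and all bucket names are hypothetical: point server access logging at a local log bucket in each account, then use S3 replication to copy those logs to the central bucket in the logging account.)

# in each source account: log to a bucket in the same account
aws s3api put-bucket-logging --bucket app-bucket \
  --bucket-logging-status '{"LoggingEnabled":{"TargetBucket":"app-account-logs","TargetPrefix":"app-bucket/"}}'

# then replicate the local log bucket to the logging account
# (requires versioning on both buckets; replication.json needs an IAM Role
# and a Destination.Bucket of arn:aws:s3:::central-logs in the logging account)
aws s3api put-bucket-replication --bucket app-account-logs \
  --replication-configuration file://replication.json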

Eric Berg avatar
Eric Berg

So, can I set up a logging account to receive s3 access logs?

msharma24 avatar
msharma24

Hi Everyone, looking for some EMR ETL advice. Use case: when a CSV file (approx 1GB) arrives in S3, launch an EMR cluster, which will run Spark script steps to transform the data and write it into a DynamoDB table. A maximum of 10 files will arrive, at a set time window each day.

I have been thinking of using an EventBridge rule -> Step Function -> launch EMR cluster with Spot Fleet task nodes. Looking for advice on whether this is a good architecture?
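(The trigger half of that could look roughly like this; names and ARNs are hypothetical, and matching PutObject this way assumes CloudTrail data events are enabled for the bucket:)

# rule that fires on PutObject to the ETL bucket
aws events put-rule --name csv-arrived --event-pattern '{
  "source": ["aws.s3"],
  "detail-type": ["AWS API Call via CloudTrail"],
  "detail": {
    "eventName": ["PutObject"],
    "requestParameters": {"bucketName": ["my-etl-bucket"]}
  }
}'

# target the Step Functions state machine that launches the EMR cluster
aws events put-targets --rule csv-arrived --targets \
  'Id=sfn,Arn=arn:aws:states:us-east-1:111111111111:stateMachine:emr-etl,RoleArn=arn:aws:iam::111111111111:role/events-invoke-sfn'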

vFondevilla avatar
vFondevilla

quick question because I can’t find it in the docs… do sticky sessions on an NLB use a cookie, or are they based on source IP?

Haroon Rasheed avatar
Haroon Rasheed

I would like to have an AWS EKS setup, using Terraform, in which we have 2 VPCs: one VPC where the EKS control plane is deployed, and another VPC where my worker nodes run. Do we have a Terraform suite for this in cloudposse or any other repo?

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

We haven’t attempted this configuration before. I was talking to someone at a bank that ran something similar. It was the first I’d heard of the pattern. @Andriy Knysh (Cloud Posse) do you think the EKS modules we have would work this way? I don’t see why not - but the e2e automation might be difficult. It might require multiple terraform runs to avoid races.

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

this is not possible with managed Node Groups - neither in the EKS console nor in a terraform module (e.g. https://github.com/cloudposse/terraform-aws-eks-node-group) - EKS places the node groups into the same VPC

cloudposse/terraform-aws-eks-node-group

Terraform module to provision an EKS Node Group. Contribute to cloudposse/terraform-aws-eks-node-group development by creating an account on GitHub.

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

might be possible with unmanaged workers (e.g. https://github.com/cloudposse/terraform-aws-eks-workers) if you allow communication b/w those two VPCs. Both the control plane and the workers need to communicate with each other, so you have to make sure the two VPCs can communicate and that the security groups of the control plane and the worker nodes have the corresponding rules (e.g. allow from the CIDR block of the other VPC)

Haroon Rasheed avatar
Haroon Rasheed

They are also using unmanaged workers. I don’t know how to achieve this using Terraform - I was relying on the cloudposse terraform modules for the EKS deployment, but for this scenario I’m not able to find any help online, or here in cloudposse. If you could guide me or share some leads on how to achieve it, that would be really great.

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

I don’t know all the details involved, but I believe you have to at least do these two things (more things could be involved, not sure):

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)
Amazon VPC-to-Amazon VPC Connectivity Options - Amazon Virtual Private Cloud Connectivity Options

Use these design patterns when you want to integrate multiple Amazon VPCs into a larger virtual network.

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)
  2. Cluster and workers security groups: you have to use CIDR blocks from the other VPCs to allow communication b/w the control plane and the workers
Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)
cloudposse/terraform-aws-eks-workers

Terraform module to provision an AWS AutoScaling Group, IAM Role, and Security Group for EKS Workers - cloudposse/terraform-aws-eks-workers

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)
cloudposse/terraform-aws-eks-cluster

Terraform module for provisioning an EKS cluster. Contribute to cloudposse/terraform-aws-eks-cluster development by creating an account on GitHub.

Haroon Rasheed avatar
Haroon Rasheed

Thanks @Andriy Knysh (Cloud Posse) for the links..let me go over them and try to set up something with 2 VPCs.

Chris Fowles avatar
Chris Fowles

I don’t have an answer as to how, but I am very curious as to “why?”

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

the “why” is a good question. regardless of whether you put it into one or two VPCs, you have to allow communication b/w them anyway. I don’t see any benefit of doing two VPCs besides a more complicated setup

Haroon Rasheed avatar
Haroon Rasheed

Because my customer’s EKS setup is like that: customer business workloads (pods) run on worker nodes in one VPC and the control plane runs in a different VPC. I need to replicate that for our testing.

2020-05-29

Haroon Rasheed avatar
Haroon Rasheed

I would like to create EKS unmanaged worker node 1 in VPC1 and worker node 2 in VPC2, with the control plane running in either VPC1 or VPC2. Is it possible to set up something like this?

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

it’s possible. as we already discussed, you have to do this (a rough CLI sketch follows the list):

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)
  1. Create two VPCs and add VPC peering between them
Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)
  2. Add the CIDR block of VPC1 to the ingress rules of the SG of VPC2
Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)
  3. Add the CIDR block of VPC2 to the ingress rules of the SG of VPC1
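(A minimal sketch of those three steps, assuming VPC1 is 10.0.0.0/16, VPC2 is 10.1.0.0/16, and all IDs are hypothetical:)

# 1. peer the two VPCs and route each CIDR through the peering connection
aws ec2 create-vpc-peering-connection --vpc-id vpc-111 --peer-vpc-id vpc-222
aws ec2 accept-vpc-peering-connection --vpc-peering-connection-id pcx-0abc
aws ec2 create-route --route-table-id rtb-0vpc1 --destination-cidr-block 10.1.0.0/16 --vpc-peering-connection-id pcx-0abc
aws ec2 create-route --route-table-id rtb-0vpc2 --destination-cidr-block 10.0.0.0/16 --vpc-peering-connection-id pcx-0abc

# 2. and 3. allow the other VPC's CIDR in each side's security group
aws ec2 authorize-security-group-ingress --group-id sg-0vpc2 --protocol all --cidr 10.0.0.0/16
aws ec2 authorize-security-group-ingress --group-id sg-0vpc1 --protocol all --cidr 10.1.0.0/16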
Haroon Rasheed avatar
Haroon Rasheed

yep, looks like a similar request, but here I was talking about worker nodes in different VPCs. The implementation looks the same based on your inputs. Also, just as an update: I managed to bring up the cluster with Node 1 in VPC1 and Node 2 in VPC2 with VPC peering enabled. It works fine. The only thing still not working is that I’m not able to get a bash prompt (via kubectl from my local machine) in a pod running on Node 2 in VPC2. Pod-to-pod ping across nodes works fine; only the bash prompt fails to connect. Will work through that.

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

did you open the entire CIDR blocks as ingresses?

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

and all ports?

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

could be that some ports are not allowed
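(One likely culprit, as an aside: kubectl exec/logs traffic flows from the API server to the kubelet on TCP 10250, so that port specifically has to be reachable from the control-plane side across the peering. Hypothetical IDs/CIDR:)

aws ec2 authorize-security-group-ingress --group-id sg-0node2 \
  --protocol tcp --port 10250 --cidr 10.0.0.0/16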

Haroon Rasheed avatar
Haroon Rasheed

Yep, opened all ports/all traffic with the VPC CIDR block on both the Node 2 and cluster security groups. Still no luck.

2020-05-30

2020-05-31

Pierre Humberdroz avatar
Pierre Humberdroz

how are people managing AWS CLI 2FA? I find it very hard to use the CLI right now with 2FA enforced

Matt Gowie avatar
Matt Gowie
99designs/aws-vault

A vault for securely storing and accessing AWS credentials in development environments - 99designs/aws-vault

4
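(For the archive: the basic aws-vault flow looks like this; the profile name is hypothetical, and the MFA prompt comes from setting mfa_serial on the profile in ~/.aws/config.)

# store the long-lived keys once, in the OS keychain
aws-vault add work

# every command then runs with short-lived STS credentials;
# aws-vault prompts for the MFA token when the session is created
aws-vault exec work -- aws sts get-caller-identity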
roth.andy avatar
roth.andy

aws-vault

1
Pierre Humberdroz avatar
Pierre Humberdroz

thanks I will check it out.

Jonathan Le avatar
Jonathan Le

it’s great
