#aws (2019-10)

aws Discussion related to Amazon Web Services (AWS). Archive: https://archive.sweetops.com/aws/

2019-10-31

Matt Gowie avatar
Matt Gowie

Hey folks, any suggestions for an alternative to Beanstalk environment variables for managing environment variables? I still want my application to consume environment variables for config, but using Terraform to manage those variables seems like it’s going to be painful.

Is AWS Parameter Store solid? Is there an alternative that folks would suggest?

PePe avatar

Chamber + SSM Parameter Store

PePe avatar

and you can expose them directly in the Taskdef

PePe avatar

Cloud Posse has modules for Chamber and SSM with examples
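
As a concrete illustration of PePe’s point about exposing SSM parameters in the task definition: ECS container definitions can reference SSM parameters via a secrets list, and ECS injects them as ENV vars at container start. A minimal sketch (the names, account ID, and ARN below are made-up placeholders):

```python
import json

# Sketch of an ECS container definition that maps SSM parameters to ENV vars.
# Every name/ARN below is a hypothetical placeholder.
container_definition = {
    "name": "app",
    "image": "123456789012.dkr.ecr.us-east-1.amazonaws.com/app:latest",
    # Each entry becomes an ENV var; ECS resolves the value from SSM at start.
    "secrets": [
        {
            "name": "DB_PASSWORD",
            "valueFrom": "arn:aws:ssm:us-east-1:123456789012:parameter/myapp/prod/db_password",
        }
    ],
}

print(json.dumps([container_definition], indent=2))
```

Note the task’s execution role needs ssm:GetParameters (and kms:Decrypt for SecureStrings) on those parameters.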

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

also, there are a few ways of doing that using elastic beanstalk:

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)
  1. Store all the ENV vars in SSM (using chamber for example as @PePe pointed out), then during terraform apply read them from the SSM and put them into https://github.com/cloudposse/terraform-aws-elastic-beanstalk-environment/blob/master/main.tf#L778
cloudposse/terraform-aws-elastic-beanstalk-environment

Terraform module to provision an AWS Elastic Beanstalk Environment - cloudposse/terraform-aws-elastic-beanstalk-environment

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

In this case, EB will provide those ENV vars to your app automatically

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)
  2. Store the ENV vars in SSM and let the app read them from SSM using an AWS SDK (if you want to go this route)

Matt Gowie avatar
Matt Gowie

Huh, I’ll look into those. I’ve seen Chamber around before, but haven’t dug in too deeply. I like the idea of reading them in at deploy time and then passing them to EB from TF. Good stuff. Thanks gents.

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

also, in TF, when you create some resources like RDS, you can write all required variables (e.g. host, user, password) to SSM directly after you create the resources, e.g. https://github.com/cloudposse/terraform-root-modules/blob/master/aws/grafana-backing-services/aurora-mysql.tf#L194

cloudposse/terraform-root-modules

Example Terraform service catalog of “root module” blueprints for provisioning reference architectures - cloudposse/terraform-root-modules

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

in this case, you will have all secrets automatically saved in SSM, and then in EB you read them from SSM and populate ENV vars

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

no user involved, no moving those secrets around

Matt Gowie avatar
Matt Gowie

@Andriy Knysh (Cloud Posse) Do you know of an example of reading in values from Parameter Store? I realize I can use the aws_ssm_parameter data source, but I’m wondering how I can read in all SSM parameters under a particular path (/$PROJECT/$ENV/*) without having to pass all of the parameter names in as a var.

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

@Matt Gowie when you read and use the params in terraform, aws_ssm_parameter is the way to go

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

if you want to read many params and use them in your app as ENV vars, you can use something like this https://github.com/cloudposse/geodesic/blob/master/rootfs/etc/init.d/atlantis.sh#L45

cloudposse/geodesic

Geodesic is a cloud automation shell. It's the fastest way to get up and running with a rock solid, production grade cloud platform built on top of strictly Open Source tools. ★ this repo! h…

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

read all params from a service (namespace) and export them into ENV vars
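
The same idea as the linked shell script, sketched in Python: fetch everything under a path and turn the last path segment into an ENV var name. The path and naming convention are assumptions; the boto3 calls are shown but commented out since they need AWS credentials.

```python
# Sketch: map SSM parameters under a path to ENV-var name/value pairs,
# using the last path segment (upper-cased) as the variable name.
def params_to_env(parameters):
    """parameters: the 'Parameters' list returned by ssm.get_parameters_by_path()."""
    return {p["Name"].rsplit("/", 1)[-1].upper(): p["Value"] for p in parameters}

# Real usage (requires AWS credentials), paging through all results:
#   import boto3
#   ssm = boto3.client("ssm")
#   env = {}
#   for page in ssm.get_paginator("get_parameters_by_path").paginate(
#           Path="/myproject/prod", Recursive=True, WithDecryption=True):
#       env.update(params_to_env(page["Parameters"]))

# Local demonstration with a canned response:
sample = [{"Name": "/myproject/prod/db_host", "Value": "db.example.com"}]
print(params_to_env(sample))  # {'DB_HOST': 'db.example.com'}
```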

Matt Gowie avatar
Matt Gowie

Maybe that’s not possible though.

Matt Gowie avatar
Matt Gowie

Wouldn’t be too painful if it wasn’t really…

2019-10-30

Cameron Boulton avatar
Cameron Boulton

Interesting, thanks for sharing

2019-10-29

Juan Cruz Diaz avatar
Juan Cruz Diaz

This Thursday 10/31 the 8th installment of DinoCloud’s webinar series, “Desplegando entornos altamente disponibles en la nube” (“Deploying highly available environments in the cloud”), takes place! In this edition Nicolás Sánchez (COO) and Juan Diaz (Cloud Architect) will discuss best practices when planning disaster recovery. We will also raffle off USD 2,000 live!

Register for free here: https://www.eventbrite.com.ar/e/desplegando-entornos-altamente-disponibles-en-la-nube-tickets-78054520171

Desplegando entornos altamente disponibles en la nube attachment image

Deploying highly available environments in the cloud. Experiences and best practices when planning disaster recovery.

Andy avatar

GM everyone! Has anyone used the Beats Central Management feature with Elastic Cloud (official Elastic hosted cloud)? I am getting an add_cloud_metadata/add_cloud_metadata.go:87 add_cloud_metadata: hosting provider type not detected. error message.

2019-10-28

ruan.arcega avatar
ruan.arcega

Anyone using the AWS MSK resource (Apache Kafka) with data (topics, Kafka and ZooKeeper state) stored in a bucket?

Cameron Boulton avatar
Cameron Boulton

@kskewes FWIW I’m doing /17 non-clashing per environment per region with 8 (4 public and 4 private) /19 subnets per region which allows for 4 AZs per region.

kskewes avatar
kskewes

Thanks Cameron! So with a subnet per AZ you can avoid the ASG rebalancing. Put mixed workload types (k8s/vm/etc) into the same subnet, whether public/private?

Cameron Boulton avatar
Cameron Boulton

Right. Generally we always default to using the Private subnets for security unless we have public access requirements; i.e. internet/public load balancers are in the Public subnets.

Cameron Boulton avatar
Cameron Boulton

K8s, EC2/VMs, RDS, etc. are all in Private subnets

Cameron Boulton avatar
Cameron Boulton

And ASG, RDS, etc. are configured to use all the subnet/AZs available in that region for greater availability

Cameron Boulton avatar
Cameron Boulton

I know you said you are likely optimizing for inter-AZ traffic costs, but we do the opposite: optimize for AZ failure/high-availability

kskewes avatar
kskewes

Thanks again. I think for RDS etc. we would have all AZ subnets in. For VM and k8s though, an ASG per AZ subnet; i.e., split each workload into its own subnet.

kskewes avatar
kskewes

Outcome would be same (each ASG can scale) but we’d have more management overhead and clearer AZ boundaries.

Cameron Boulton avatar
Cameron Boulton

Right

kskewes avatar
kskewes

Cheers :)

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

yes, I agree with @Cameron Boulton. We follow a similar approach in our reference architectures.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)
cloudposse/reference-architectures

Get up and running quickly with one of our reference architecture using our fully automated cold-start process. - cloudposse/reference-architectures

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

You’ll see our examples mostly use the terraform-aws-dynamic-subnets module

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

(we have 3 modules for subnet strategies)


Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

the terraform-aws-named-subnets module came from a customer requirement a few years ago, but isn’t what we generally use.

kskewes avatar
kskewes

Thanks heaps Erik. It’d be nice if we could use security groups in k8s NetworkPolicy objects.

kskewes avatar
kskewes

I think the fact I’m concerned about doing IP based policies is the big difference.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

For Kubernetes, using a service mesh is the way to achieve greater network security

kskewes avatar
kskewes

yeah, we are a wee way from that

kskewes avatar
kskewes

Even Cilium CNI for L7 HTTP path filtering

Cameron Boulton avatar
Cameron Boulton

Not sure what your capacity planning looks like, but that allows for 8,192 addresses per subnet or 32,768 per VPC
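
As a sanity check on those numbers with Python’s ipaddress module (the CIDRs are arbitrary):

```python
import ipaddress

# A /19 subnet holds 2**(32-19) = 8,192 addresses.
print(ipaddress.ip_network("10.0.0.0/19").num_addresses)  # 8192

# A /17 environment block holds 2**(32-17) = 32,768 addresses.
env_block = ipaddress.ip_network("10.0.0.0/17")
print(env_block.num_addresses)  # 32768

# Note a /17 subdivides into exactly four /19s:
print(len(list(env_block.subnets(new_prefix=19))))  # 4
```

(Worth noting when planning: eight /19s in total span a full /16.)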

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)
Amazon is saying nothing about the DDoS attack that took down AWS, but others are attachment image

Looks like some security staff were asleep at the switch

Cameron Boulton avatar
Cameron Boulton

Always at the mercy of your weakest link and in process automation, that weakest link is humans


Karoline Pauls avatar
Karoline Pauls

The Register is basically a tech tabloid

Karoline Pauls avatar
Karoline Pauls

anyone have a less sensational source?

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Catchpoint maybe?

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)
AWS Users Blindsided by DDoS Attacks | Digital Experience Monitoring | Catchpoint

Cybersecurity has always been the Achilles heel of the digital world. Digital security protocols, firewalls, and advanced authentication methods have improved and tightened internet security, but even with all these measures in place, cyberattacks are inevitable. You can only mitigate the impact and prevent any major compromise before it turns…

2019-10-27

kskewes avatar
kskewes

Hey everyone, I’m working through our IP addressing on a new AWS migration and it would be great to hear some thoughts on how to structure public/private subnets, please. Curious how others manage this. I’m pretty fresh to AWS at a large scale.

Our setup:

  1. We have a mix of Kubernetes, VM and Managed Service (DB/Queuing/etc) workloads. There is communication between them.
  2. We are multi-AZ and are planning to minimise inter AZ traffic. ie: apps target other apps in same AZ via DNS.
  3. We are in multiple regions (identical setup, just different VPCs with non-clashing addressing, not peered)

AWS subnetting option 1 (my proposal):

  • VPC /16
  • AZ /18 (one per AZ plus one /18 spare)
  • Split each AZ into /20s per workload (assume even distribution, though that isn’t important for this chat). e.g.: /20 for public k8s/load balancers, /20 private k8s (nodes/pods), /20 VMs, /20 managed services. Reality is different sizing but it’s still plenty of addresses.
  • Pros: AZ boundary very clear because the /18 defines it - for troubleshooting etc. I suspect we will have separate (k8s and VM) node pools per AZ. We have had an odd, hacky IP whitelist in our current cloud due to lack of auth between apps.
  • Cons: management overhead?

AWS subnetting option 2 (Partner proposed):

  • VPC /16
  • Public /17
  • Private /17
  • Pros: Don’t have to think about (micromanage) IPs. Based on EC2 metadata, target the az# service, using some selector mechanism in the private load balancer/security group (we would do this above anyway).
  • Cons: The traffic path will be non-obvious from the endpoint name when troubleshooting and will require an AZ membership lookup (potentially retrospectively)? K8s node pool rebalancing (I’ve heard this is a thing)? Does this apply to ASGs?

In both cases the Cloud Posse named-subnets module looks like a great fit and we would pass the subnet IDs in as required.
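
Option 1’s carve-up can be sketched and sanity-checked with Python’s ipaddress module (the CIDRs here are illustrative):

```python
import ipaddress

# VPC /16, one /18 per AZ (three used, one spare), /20 per workload class.
vpc = ipaddress.ip_network("10.0.0.0/16")

az_blocks = list(vpc.subnets(new_prefix=18))   # four /18s fit in a /16
workloads = ["public-k8s", "private-k8s", "vms", "managed-services"]

# Split the first AZ's /18 into four /20s, one per workload class.
az_a = dict(zip(workloads, az_blocks[0].subnets(new_prefix=20)))
for name, net in az_a.items():
    print(f"{name:17} {net} ({net.num_addresses} addresses)")
```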

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

@kskewes will get back to you tomorrow

kskewes avatar
kskewes

thanks heaps.

2019-10-24

Hasen Ahmad avatar
Hasen Ahmad

Hi all, I’m getting this error using the ECR module: aws_ecr_repository_policy.default: InvalidParameterException: Invalid parameter at 'PolicyText' failed to satisfy constraint: 'Invalid repository policy provided'. Is there a fix? Do I need to provide the principals_full_access field?

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

you don’t need to provide it. But if you do, make sure those principal ARNs are correct and exist

Hasen Ahmad avatar
Hasen Ahmad

thanks

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)
AWS Chatbot - Amazon Web Services

AWS Chatbot is an interactive agent that makes it easy to monitor and interact with your AWS resources from your team chat room. Learn more about the key benefits and how it works.

Andrew Jeffree avatar
Andrew Jeffree

Yeah I checked that out a month or so ago when they first released it.

Andrew Jeffree avatar
Andrew Jeffree

It’s promising but doesn’t do what I need for now

2019-10-22

Karoline Pauls avatar
Karoline Pauls

There’s a limit of 10 policies per role. Am I doing something wrong if I hit this limit?

Karoline Pauls avatar
Karoline Pauls

I suppose I can concatenate policy statements and build one large JSON instead of defining one policy per statement, each with its own JSON…

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

we have a policy document aggregator to aggregate multiple policies into one https://github.com/cloudposse/terraform-aws-iam-policy-document-aggregator

cloudposse/terraform-aws-iam-policy-document-aggregator

Terraform module to aggregate multiple IAM policy documents into single policy document. - cloudposse/terraform-aws-iam-policy-document-aggregator
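
What the aggregator (and Karoline’s concatenation) boils down to is merging the Statement arrays into a single document; a minimal Python sketch of the idea:

```python
import json

def aggregate_policies(*documents):
    """Merge the Statement arrays of several IAM policy documents into one.
    (The combined document is still subject to IAM's policy size limits.)"""
    merged = []
    for doc in documents:
        merged.extend(doc.get("Statement", []))
    return {"Version": "2012-10-17", "Statement": merged}

a = {"Version": "2012-10-17",
     "Statement": [{"Effect": "Allow", "Action": "s3:GetObject", "Resource": "*"}]}
b = {"Version": "2012-10-17",
     "Statement": [{"Effect": "Allow", "Action": "sqs:SendMessage", "Resource": "*"}]}

print(json.dumps(aggregate_policies(a, b), indent=2))
```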

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

@Karoline Pauls ^

Karoline Pauls avatar
Karoline Pauls

I used locals, for, and join

Maciek Strömich avatar
Maciek Strömich

did you consider using groups?

Maciek Strömich avatar
Maciek Strömich

with 10 groups per IAM user and 10 policies attached to each group, you can increase it to 100 policies per user

Maciek Strömich avatar
Maciek Strömich

or you can create inline policies as long as you don’t exceed 10,240 chars in total aggregate policy size

Maciek Strömich avatar
Maciek Strömich

so to answer your question: it depends

Maciek Strömich avatar
Maciek Strömich

AWS will never tell you that you’re doing something wrong. It may be suboptimal from their perspective but it may be perfectly fine from yours

oscar avatar
oscar

What would be the go-to way for allowing k8s ExternalDNS to change records in a different account from that of the cluster?

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

We don’t need that

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Basically you should have branded domains in one account

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Those then get cnamed or aliased to the infra domains

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

This should almost never have to happen except for cold start

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

We do this because we don’t want an errant deployment in staging for example to hijack the branded domain

Maciek Strömich avatar
Maciek Strömich

cross-account IAM access to Route53, or DNS zone delegation to the cluster account?

2019-10-21

Adrian avatar
Adrian

you can also stream your logs to Elasticsearch in AWS

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)
AWS Cross-AZ Data Transfer Costs More Than AWS Says - Last Week in AWS attachment image

When it comes to the exact cost cross-AZ data transfer, the AWS documentation is a bit ambiguous. So I ran a little experiment to figure out how much it costs to move data between availability zones in the same region.


2019-10-20

Daniel Minella avatar
Daniel Minella

Hello everyone, how do you manage ECS container logs? For example, right now when a container dies I have to connect to the EC2 instance that hosts the ECS service and execute docker logs xxxx. Which stack or strategy do you use to handle that?

Steven avatar
Steven

You can send the logs to CloudWatch on ECS/EC2 & Fargate. On EC2, there are more logging options
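
The usual ECS-to-CloudWatch wiring is the awslogs log driver in the container definition; a sketch (log group and region are hypothetical):

```python
import json

# Sketch: route a container's stdout/stderr to CloudWatch Logs via the
# awslogs driver. Group name and region below are placeholders.
container_definition = {
    "name": "app",
    "logConfiguration": {
        "logDriver": "awslogs",
        "options": {
            "awslogs-group": "/ecs/my-service",
            "awslogs-region": "us-east-1",
            "awslogs-stream-prefix": "app",
        },
    },
}

print(json.dumps(container_definition, indent=2))
```

With that in place the log streams outlive the container, so no more docker logs on the host after a container dies.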

Vlad Ionescu avatar
Vlad Ionescu

Usually they go to CloudWatch and then either to 1) an external log system, 2) a Lambda function that triggers some action if something happened, or 3) nowhere because they are fully ignored (observability in the application, tracing with exhaustive context sent to other systems, so a bit similar to option 1)

Vlad Ionescu avatar
Vlad Ionescu

Depends on the app and the company and the usecase

2019-10-18

2019-10-17

IvanM avatar
IvanM

Guys, I need a bit of help with an AWS networking issue. We have RDS instances running in private subnets in a VPC. From our office network we want to always be able to ssh into these instances (without a client VPN). How should we do that? I guess we need a site-to-site VPN connection from our local network to the VPC. However, how do we enable traffic only to the RDS instances? I do not want all internet traffic to go via the VPC/VPN, so the local network should still have its internet connection as-is; only a direct connection to the RDS instances in the VPC should be possible.

Ognen Mitev avatar
Ognen Mitev

As far as I know you cannot directly SSH to an RDS instance, e.g. ssh [email protected]

You can use:

  1. Bastion host and from there - mysql -h mysql-instance1.123456789012.us-east-1.rds.amazonaws.com -P 3306 -u mymasteruser -p
  2. Public access to the RDS filtered to your IPs… But you will need a MySQL client (Workbench or SQLPro…)
  3. Maybe someone else knows anything else

I would do it via case 1. There are options for how to set up the bastion host and so on…

Taras avatar
Taras

+1 for case 1. There is no technical possibility (implementation) to SSH into RDS servers/instances. You can only connect to the DB using a DB client like mysql etc.

IvanM avatar
IvanM

sorry, my bad. Yeah, what I meant is to be able to connect to RDS using a client. The issue is that the RDS is in a private subnet and not accessible via the internet.

Ognen Mitev avatar
Ognen Mitev

Bastion aka Jump Host will do the thing

Sugananth T avatar
Sugananth T

I had a similar situation: we needed to connect to a server in a different VPC and then connect to an Aurora MySQL RDS DB with the MySQL connector, i.e. like this - mysql -h <mysql-instance-name> -P 3306 -u mymasteruser -p <password>

Sugananth T avatar
Sugananth T

But connecting to an RDS-managed DB server via PuTTY or some other SSH or SFTP client is not possible as far as I know

Maciek Strömich avatar
Maciek Strömich

what’s the easiest way to clean up an AWS account prior to account deletion?

oscar avatar
oscar

AWS nuke

oscar avatar
oscar
gruntwork-io/cloud-nuke

A tool for cleaning up your cloud accounts by nuking (deleting) all resources within it - gruntwork-io/cloud-nuke

rebuy-de/aws-nuke

Nuke a whole AWS account and delete all its resources. - rebuy-de/aws-nuke

Maciek Strömich avatar
Maciek Strömich

last time I checked, nuke didn’t support all of the services

oscar avatar
oscar
1Strategy/automated-aws-multi-account-cleanup

Automatically clean-up multiple AWS Accounts on a schedule - 1Strategy/automated-aws-multi-account-cleanup

Maciek Strömich avatar
Maciek Strömich

but maybe I need to recheck

oscar avatar
oscar

Haven’t personally used it, just know of it. Can’t comment.

Maciek Strömich avatar
Maciek Strömich

sure. I will try it on one of the test accounts that will be deleted

Maciek Strömich avatar
Maciek Strömich

thanks for the tip

Maciek Strömich avatar
Maciek Strömich
Now Available – Amazon Relational Database Service (RDS) on VMware | Amazon Web Services attachment image

Last year I told you that we were working to give you RDS on VMware, with the goal of bringing many of the benefits of Amazon RDS to your on-premises virtualized environments. These benefits include the ability to provision new on-premises databases in minutes, make backups, and restore to a point in time. You get automated management of your […]

kskewes avatar
kskewes

I see AWS recommends VPCs of /16 or smaller. Given a /16 is split into further subnets (at least by AZ but potentially further, e.g. different app ASGs, k8s, etc.), I’m curious where the /16 recommendation comes from. Any ideas? Hard rule to follow, or safe to ignore? https://docs.aws.amazon.com/vpc/latest/userguide/VPC_Subnets.html#vpc-sizing-ipv4 EDIT: The allowed block size is between a /28 netmask and /16 netmask.

Alex Siegman avatar
Alex Siegman

So, the /16 is a technical limitation imposed by AWS on VPC size. Where the recommendation comes from, I don’t know, but giving your VPC the largest space possible allows you the most flexibility when it comes to making subnets and such.

Alex Siegman avatar
Alex Siegman

Especially if you’re using EKS; it EATS IP address space fast, as every pod is given an IP

kskewes avatar
kskewes

Yeah, thanks Alex, Erik. Have been able to sort out our IPAM for AWS & EKS now.

2019-10-16

roth.andy avatar
roth.andy
AWS achieves FedRAMP JAB High and Moderate Provisional Authorization across 18 services in the AWS US East/West and AWS GovCloud (US) Regions | Amazon Web Services attachment image

It’s my pleasure to announce that we’ve expanded the number of AWS services that customers can use to run sensitive and highly regulated workloads in the federal government space. This expansion of our FedRAMP program marks a 28.6% increase in our number of FedRAMP authorizations. Today, we’ve achieved FedRAMP authorizations for 6 services in our […]

2019-10-15

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)
Migration Complete – Amazon’s Consumer Business Just Turned off its Final Oracle Database | Amazon Web Services attachment image

Over my 17 years at Amazon, I have seen that my colleagues on the engineering team are never content to leave good-enough alone. They routinely re-evaluate every internal system to make sure that it is as scalable, efficient, performant, and secure as possible. When they find an avenue for improvement, they will use what they […]

Alex Siegman avatar
Alex Siegman

imagine being the head project manager on this massive multi-year multi-team migration and closing that last ticket as this is posted. that’s gotta feel good.


2019-10-14

Milos Backonja avatar
Milos Backonja

Thanks @oscar. I was able to configure this using Route53 Resolver

oscar avatar
oscar

Nice one

Milos Backonja avatar
Milos Backonja

definitely awesome stuff. It will allow me to connect on-prem solutions with AWS later on, and to use an on-prem private DNS server

2019-10-12

oscar avatar
oscar

Route 53 resolver I believe


2019-10-11

mmarseglia avatar
mmarseglia

@Phuc I sort of do this with Elastic Beanstalk. I use the module https://github.com/cloudposse/terraform-aws-ecr.git and pass the Elastic Beanstalk roles that get created.

cloudposse/terraform-aws-ecr

Terraform Module to manage Docker Container Registries on AWS ECR - cloudposse/terraform-aws-ecr

Milos Backonja avatar
Milos Backonja

Hi, if I use a transit gateway with multiple VPCs attached, each VPC uses its own private DNS zone in Route53. Traffic is working between the VPCs, but is there a way to somehow delegate DNS resolving between VPCs?

2019-10-10

Fernando Torresan avatar
Fernando Torresan

I had been facing a problem when I tried to provision a CloudFront distribution (estimated time 18 min); to get aws-vault to work properly I needed to set the flag --assume-role-ttl=1h, like:

aws-vault exec <profile-name> --assume-role-ttl=1h

Matt Gowie avatar
Matt Gowie

Hey folks – IAM Policy questions: What’s the standard operating procedure for dev teams on AWS and requiring MFA? I’ve created a policy to require MFA for all actions so users need to assign an MFA on first login and then on subsequent logins they need to provide MFA before they can do anything in the console, which is what I want. My problem with this is that I can’t distinguish between requiring MFA for console usage vs CLI usage. I’d like to empower devs to push to ECR or use certain CLI functionality without having them put their MFA in every morning.

I have a way to add IAM actions the user is allowed to do via the following policy statement:

{
    "Sid": "DenyAllExceptListedIfNoMFA",
    "Effect": "Deny",
    "NotAction": [
        // Bunch of management actions the user is allowed to do.
    ],
    "Resource": "*",
    "Condition": {
        "BoolIfExists": {
            "aws:MultiFactorAuthPresent": "false"
        }
    }
}

Should I just push all my CLI allowed actions into that NotAction and manage it that way? Or is there a better way?

Alex Siegman avatar
Alex Siegman

I recommend having the ability to change password and manage own MFA available by default, and everything else locked behind having MFA present. Providing all access through assumed roles, that means you only have to lock down role assumption, and the only thing an IAM user is allowed to do is manage their MFA and login

Alex Siegman avatar
Alex Siegman

That said, the second half of your statement (allowing certain actions) could easily just be handled with explicit allows, since everything is an implicit deny. To my brain “allowing if” is simpler than “denying unless…”
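
To make the two shapes concrete (the action names below are placeholders, not a recommendation):

```python
# "Deny unless": everything NOT listed is blocked whenever MFA is absent.
deny_unless_mfa = {
    "Sid": "DenyAllExceptListedIfNoMFA",
    "Effect": "Deny",
    "NotAction": ["ecr:GetAuthorizationToken", "ecr:BatchGetImage"],
    "Resource": "*",
    "Condition": {"BoolIfExists": {"aws:MultiFactorAuthPresent": "false"}},
}

# "Allow if": lean on IAM's implicit deny and grant each MFA-gated action
# with an Allow that carries the MFA condition.
allow_if_mfa = {
    "Sid": "AllowListedActionsWithMFA",
    "Effect": "Allow",
    "Action": ["iam:ChangePassword"],
    "Resource": "*",
    "Condition": {"Bool": {"aws:MultiFactorAuthPresent": "true"}},
}
```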

Alex Siegman avatar
Alex Siegman

I don’t think there is a way in IAM to distinguish between API and console access, so you’d have to be okay with it being available in both places without MFA. I mean, maybe you could do something with aws:UserAgent, but those are spoofable

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Also, aws-vault in server mode can help reduce the frequency of entering MFA codes

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Is every 12 hours really such a bad thing? :-)

Matt Gowie avatar
Matt Gowie

Got it — Thanks gents. Think I need to do some reading on role assumptions + aws-vault, but overall I think I’ll move forward with supplying explicit allows for things I don’t want to hinder this dev team with and I’ll try to just keep that list short.

Matt Gowie avatar
Matt Gowie


Is every 12 hours really such a bad thing? :-)
Haha I personally don’t think so… but since I’m consulting for a dev agency who is more cavalier about security I just don’t want to rub them the wrong way.

Phuc avatar

Hi guys, is there anyone familiar with IAM roles and instance profiles? I have a case like this: I would like to create an instance profile with a suitable policy to allow access to an ECR repo (including downloading images from ECR). Then I attach that instance profile to a launch configuration to spin up an instance. The reason I mention the ECR policy is that I would like to set up the aws-credential-helper on the instance to use with Docker (cred-helper) when it launches, so that when the instance wants to pull an image from ECR, it won’t need AWS credentials on the host itself. I would like to put all of that into Terraform format as well. Any help would be appreciated so much.
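
For the ECR-pull side of Phuc’s question, the instance role typically needs the standard ECR read actions; a sketch of the policy document (the wildcard Resource is illustrative; the non-token actions can be scoped to specific repository ARNs):

```python
import json

# Sketch: IAM policy for an EC2 instance role that pulls images from ECR.
# ecr:GetAuthorizationToken does not support resource-level scoping, so it
# gets "*"; the layer/image actions could be limited to repository ARNs.
ecr_pull_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {"Effect": "Allow", "Action": ["ecr:GetAuthorizationToken"], "Resource": "*"},
        {
            "Effect": "Allow",
            "Action": [
                "ecr:BatchCheckLayerAvailability",
                "ecr:GetDownloadUrlForLayer",
                "ecr:BatchGetImage",
            ],
            "Resource": "*",
        },
    ],
}

print(json.dumps(ecr_pull_policy, indent=2))
```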

2019-10-09

imiltchman avatar
imiltchman

Does anyone know the default duration of the session when using aws-vault?

PePe avatar

1h I think

PePe avatar

but that is not on aws-vault side

PePe avatar

it’s on the AWS side

PePe avatar

and you can change that with a policy

oscar avatar
oscar
99designs/aws-vault

A vault for securely storing and accessing AWS credentials in development environments - 99designs/aws-vault

2019-10-01

Yusley Orea avatar
Yusley Orea

Anyone have any recommendations for a tool for disaster recovery on AWS? especially for Aurora, DynamoDB and EBS.

PePe avatar

A tool? As in, something that could do what?

Yusley Orea avatar
Yusley Orea

centralized cross-region backup for services, or at least for RDS. I looked at https://aws.amazon.com/backup but it’s region-dependent.

AWS Backup | Centralized Cloud Backup

AWS Backup is a fully managed backup service that makes it easy to centralize and automate the back up of data across AWS services in the cloud as well as on-premises using the AWS Storage Gateway.

PePe avatar

I would just use a global Aurora cluster and back up the main region

PePe avatar

or you can have a read-replica cluster in other regions/accounts or the same account

PePe avatar

so instead of having snapshots you have replicas that can be promoted in very little time
