#aws (2019-10)
Discussion related to Amazon Web Services (AWS)
Archive: https://archive.sweetops.com/aws/
2019-10-01
Anyone have any recommendations for a disaster recovery tool on AWS? Especially for Aurora, DynamoDB and EBS.
A tool? As in something that could do what?
Centralized cross-region backup for services, or at least for RDS. I looked at https://aws.amazon.com/backup but it's region-dependent.
AWS Backup is a fully managed backup service that makes it easy to centralize and automate the back up of data across AWS services in the cloud as well as on-premises using the AWS Storage Gateway.
I will just use a global Aurora cluster and back up the main region
Or you can have a read replica cluster in other regions/accounts (or the same account)
so instead of snapshots you have replicas that can be promoted in very little time
2019-10-09
Does anyone know the default duration of the session when using aws-vault?
1h I think
but that is not on the aws-vault side
it's on the AWS side
and you can change it on the role (its maximum session duration)
A vault for securely storing and accessing AWS credentials in development environments - 99designs/aws-vault
2019-10-10
I ran into a problem when provisioning a CloudFront distribution (estimated time ~18 min); to get aws-vault to work properly I needed to set the --assume-role-ttl=1h flag, like:
aws-vault exec <profile-name> --assume-role-ttl=1h
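If the 1h default comes from the assumed role's maximum session duration, it can be raised on the role itself; a minimal Terraform sketch, assuming a hypothetical role named "admin" and a placeholder trusted account ID:

data "aws_iam_policy_document" "assume" {
  statement {
    actions = ["sts:AssumeRole"]
    principals {
      type        = "AWS"
      identifiers = ["arn:aws:iam::111111111111:root"] # hypothetical trusted account
    }
  }
}

resource "aws_iam_role" "admin" {
  name                 = "admin" # hypothetical role name
  assume_role_policy   = data.aws_iam_policy_document.assume.json
  max_session_duration = 14400   # seconds; lets aws-vault request --assume-role-ttl up to 4h
}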
Hey folks – IAM Policy questions: What’s the standard operating procedure for dev teams on AWS and requiring MFA? I’ve created a policy to require MFA for all actions so users need to assign an MFA on first login and then on subsequent logins they need to provide MFA before they can do anything in the console, which is what I want. My problem with this is that I can’t distinguish between requiring MFA for console usage vs CLI usage. I’d like to empower devs to push to ECR or use certain CLI functionality without having them put their MFA in every morning.
I have a way to add IAM actions the user is allowed to do via the following policy statement:
{
  "Sid": "DenyAllExceptListedIfNoMFA",
  "Effect": "Deny",
  "NotAction": [
    // Bunch of management actions the user is allowed to do.
  ],
  "Resource": "*",
  "Condition": {
    "BoolIfExists": {
      "aws:MultiFactorAuthPresent": "false"
    }
  }
}
Should I just push all my CLI-allowed actions into that NotAction and manage it that way? Or is there a better way?
I recommend having the ability to change their password and manage their own MFA available by default, and everything else locked behind MFA being present. If you provide all access through assumed roles, that means you only have to lock down role assumption, and the only things an IAM user is allowed to do directly are manage their MFA and log in.
That said, the second half of your statement (allowing certain actions via MFA) could easily just be added as allows, since everything is an implicit deny. To my brain, “allowing if” is simpler than “denying unless…”
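As a sketch of that “allow if MFA” shape in Terraform (the target role ARN and policy name are illustrative, not from this thread):

data "aws_iam_policy_document" "assume_with_mfa" {
  statement {
    sid       = "AllowAssumeRoleOnlyWithMFA"
    effect    = "Allow"
    actions   = ["sts:AssumeRole"]
    resources = ["arn:aws:iam::111111111111:role/admin"] # hypothetical target role

    condition {
      test     = "Bool"
      variable = "aws:MultiFactorAuthPresent"
      values   = ["true"]
    }
  }
}

resource "aws_iam_policy" "assume_with_mfa" {
  name   = "assume-admin-with-mfa"
  policy = data.aws_iam_policy_document.assume_with_mfa.json
}

Attach that policy to a group, keep the IAM users themselves limited to password/MFA self-management, and everything else lives behind the assumed role.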
I don’t think there is a way in IAM to distinguish between API and console access, so you’d have to be okay with it being available in both places without MFA. I mean maybe you could do something with aws:UserAgent
but those are spoofable
Also aws-vault in server mode can help reduce the frequency of entering MFA code
Is every 12 hours really such a bad thing? :-)
Got it — Thanks gents. Think I need to do some reading on role assumptions + aws-vault, but overall I think I’ll move forward with supplying explicit allows for things I don’t want to hinder this dev team with and I’ll try to just keep that list short.
Is every 12 hours really such a bad thing? :-)
Haha I personally don’t think so… but since I’m consulting for a dev agency who is more cavalier about security I just don’t want to rub them the wrong way.
Hi guys, is there anyone familiar with IAM roles and instance profiles? I have a case like this: I would like to create an instance profile with a suitable policy that allows access to an ECR repo (including downloading images from ECR). Then I attach that instance profile to a launch configuration to spin up an instance. The reason I mention the ECR policy is that I would like to set up the AWS credential helper on the instance to use with Docker (cred-helper) when it launches, so that when the instance wants to pull an image from ECR it won't need AWS credentials on the host itself. I would also like to put all of that into a Terraform module. Any help would be appreciated so much.
2019-10-11
@Phuc I sort of do this with Elastic Beanstalk. I use the module https://github.com/cloudposse/terraform-aws-ecr.git and pass in the Elastic Beanstalk roles that get created.
Terraform Module to manage Docker Container Registries on AWS ECR - cloudposse/terraform-aws-ecr
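For the instance-profile side of @Phuc's question, a rough Terraform sketch (names are hypothetical; it uses the AWS-managed AmazonEC2ContainerRegistryReadOnly policy for pull-only access):

variable "ami_id" {} # hypothetical AMI with Docker and the ECR credential helper baked in

data "aws_iam_policy_document" "ec2_assume" {
  statement {
    actions = ["sts:AssumeRole"]
    principals {
      type        = "Service"
      identifiers = ["ec2.amazonaws.com"]
    }
  }
}

resource "aws_iam_role" "ecr_pull" {
  name               = "ecr-pull" # hypothetical name
  assume_role_policy = data.aws_iam_policy_document.ec2_assume.json
}

resource "aws_iam_role_policy_attachment" "ecr_read" {
  role       = aws_iam_role.ecr_pull.name
  policy_arn = "arn:aws:iam::aws:policy/AmazonEC2ContainerRegistryReadOnly"
}

resource "aws_iam_instance_profile" "ecr_pull" {
  name = "ecr-pull"
  role = aws_iam_role.ecr_pull.name
}

resource "aws_launch_configuration" "app" {
  name_prefix          = "app-"
  image_id             = var.ami_id
  instance_type        = "t3.small"
  iam_instance_profile = aws_iam_instance_profile.ecr_pull.name
}

With the instance profile in place, the ECR credential helper on the instance picks up credentials from the instance metadata, so no static AWS keys are needed on the host.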
Hi, if I use a transit gateway with multiple VPCs attached, and each VPC uses its own private DNS zone in Route53: traffic works between the VPCs, but is there a way to somehow delegate DNS resolution between them?
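One option for VPC-to-VPC resolution is to associate each private hosted zone with the other VPCs so their resolvers can answer for it; a minimal sketch assuming everything lives in the same account (zone and VPC IDs are illustrative):

# Make the private zone that belongs to VPC A resolvable from VPC B
resource "aws_route53_zone_association" "vpc_b" {
  zone_id = "Z0000000000000"        # hypothetical private zone ID
  vpc_id  = "vpc-0bbbbbbbbbbbbbbbb" # hypothetical second VPC
}

Cross-account associations need an extra authorization step on the zone-owning side, and on-prem resolution is usually handled with Route 53 Resolver endpoints instead.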
2019-10-12
2019-10-14
Nice one
definitely awesome stuff. It will allow me to connect on-prem solutions with AWS later on, and to use an on-prem private DNS server
2019-10-15
Over my 17 years at Amazon, I have seen that my colleagues on the engineering team are never content to leave good-enough alone. They routinely re-evaluate every internal system to make sure that it is as scalable, efficient, performant, and secure as possible. When they find an avenue for improvement, they will use what they […]
imagine being the head project manager on this massive multi-year multi-team migration and closing that last ticket as this is posted. that’s gotta feel good.
2019-10-16
It’s my pleasure to announce that we’ve expanded the number of AWS services that customers can use to run sensitive and highly regulated workloads in the federal government space. This expansion of our FedRAMP program marks a 28.6% increase in our number of FedRAMP authorizations. Today, we’ve achieved FedRAMP authorizations for 6 services in our […]
2019-10-17
Guys, I need a bit of help with AWS networking. Issue: we have RDS instances running in private subnets in a VPC. From our office network we want to be able to always ssh into these instances (without a client VPN). How should we do that? I guess we need a site-to-site VPN connection from our local network to the VPC. However, how do we enable traffic only to the RDS instances? I do not want all internet traffic to go via the VPC/VPN, so the local network should keep its internet connection as-is; there should only be a direct connection to the RDS instances in the VPC.
As far as I know you cannot directly SSH to an RDS instance, e.g. ssh user@rdsinstance…
You can use:
- Bastion host, and from there: mysql -h mysql-instance1.123456789012.us-east-1.rds.amazonaws.com -P 3306 -u mymasteruser -p
- Public access to the RDS instance filtered to your IPs… but you will need a MySQL client (Workbench or SQLPro…)
- Maybe someone else knows anything else
I would do it via case 1. There are options for how to do the bastion host and so on…
+1 for case 1. There is no technical possibility (implementation) to SSH into RDS servers/instances; you can only connect to the DB using a DB client like mysql, etc.
Sorry, my bad. Yeah, what I meant is to be able to connect to RDS using a client. The issue is that the RDS instance is in a private subnet and not accessible via the internet.
I had a similar situation: we needed to connect to a server in a different VPC and then connect to an Aurora MySQL RDS DB with the MySQL client, i.e. like this: mysql -h <mysql-instance-name> -P 3306 -u mymasteruser -p<password>
But connecting to an RDS-managed DB server via PuTTY or some other SSH or SFTP client is not possible, as far as I know.
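If a site-to-site VPN (or a bastion) is in place, limiting what the office can reach usually comes down to the database security group; a hedged sketch, with the office CIDR and security group ID as placeholders:

resource "aws_security_group_rule" "office_to_rds" {
  type              = "ingress"
  from_port         = 3306
  to_port           = 3306
  protocol          = "tcp"
  cidr_blocks       = ["10.50.0.0/16"]       # hypothetical office network CIDR
  security_group_id = "sg-0123456789abcdef0" # hypothetical RDS security group
  description       = "MySQL from the office over the site-to-site VPN"
}

Because only the RDS security group allows the office CIDR, the rest of the VPC stays unreachable and the office keeps its normal internet path.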
what’s the easiest way to clean up an aws account prior to account deletion?
AWS nuke
A tool for cleaning up your cloud accounts by nuking (deleting) all resources within it - gruntwork-io/cloud-nuke
Nuke a whole AWS account and delete all its resources. - rebuy-de/aws-nuke
Last time I checked, nuke didn't support all of the services
Automatically clean-up multiple AWS Accounts on a schedule - 1Strategy/automated-aws-multi-account-cleanup
but maybe i need to recheck
Not personally used it, just know of it. Can't comment.
Sure, I will try it on one of the test accounts which will be deleted.
Thanks for the tip.
Last year I told you that we were working to give you , with the goal of bringing many of the benefits of to your on-premises virtualized environments. These benefits include the ability to provision new on-premises databases in minutes, make backups, and restore to a point in time. You get automated management of your […]
I see AWS recommends VPCs of /16 or smaller. Given a /16 is split into further subnets (at least by AZ but potentially further, e.g. different app ASGs, k8s, etc.), I'm curious where the /16 recommendation comes from. Any ideas?
Hard follow or ignore?
https://docs.aws.amazon.com/vpc/latest/userguide/VPC_Subnets.html#vpc-sizing-ipv4
EDIT: The allowed block size is between a /28 netmask and /16 netmask.
So, the /16 is a technical limitation imposed by AWS on VPC size. Where the recommendation comes from, I don’t know, but giving your VPC the largest space possible allows you the most flexibility when it comes to making subnets and such.
Especially if you’re using EKS, it EATS IP address space fast as every pod is given an IP
Yeah, thanks Alex, Erik. Have been able to sort out our IPAM for AWS & EKS now.
2019-10-18
2019-10-20
Hello everyone, how do you manage ECS container logs? For example, right now when a container dies I have to connect to the EC2 instance that hosts the ECS service and run docker logs xxxx. Which stack or strategy do you use to handle that?
You can send the logs to CloudWatch on ECS/EC2 and Fargate. On EC2 there are more logging options.
Usually they go to CloudWatch and then either to 1) an external log system, 2) a Lambda function that triggers some action when something happens, or 3) nowhere, because they are fully ignored (observability lives in the application, with tracing and exhaustive context sent to other systems, so a bit similar to option 1).
Depends on the app and the company and the usecase
2019-10-21
You can also stream your logs to Elasticsearch in AWS
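A rough sketch of the CloudWatch option mentioned above, using the awslogs log driver in a task definition (image, log group and region are illustrative):

resource "aws_cloudwatch_log_group" "app" {
  name              = "/ecs/app" # hypothetical log group
  retention_in_days = 30
}

resource "aws_ecs_task_definition" "app" {
  family = "app"

  container_definitions = jsonencode([
    {
      name      = "app"
      image     = "123456789012.dkr.ecr.us-east-1.amazonaws.com/app:latest" # hypothetical image
      essential = true
      memory    = 256
      logConfiguration = {
        logDriver = "awslogs"
        options = {
          "awslogs-group"         = aws_cloudwatch_log_group.app.name
          "awslogs-region"        = "us-east-1"
          "awslogs-stream-prefix" = "app"
        }
      }
    }
  ])
}

Once the logs are in CloudWatch they survive the container, and can be forwarded to Elasticsearch or another system from there.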
When it comes to the exact cost of cross-AZ data transfer, the AWS documentation is a bit ambiguous. So I ran a little experiment to figure out how much it costs to move data between availability zones in the same region.
2019-10-22
There's a limit of 10 policies per role. Am I doing something wrong if I hit this limit?
I suppose I can concatenate policy statements and build one large JSON instead of defining one policy per statement, each with its own json…
we have a policy document aggregator to aggregate multiple policies into one https://github.com/cloudposse/terraform-aws-iam-policy-document-aggregator
Terraform module to aggregate multiple IAM policy documents into single policy document. - cloudposse/terraform-aws-iam-policy-document-aggregator
@Karoline Pauls ^
I used locals, for expressions, and join()
Did you consider using groups?
With 10 groups per IAM user and 10 managed policies attached to each group, you can increase it to 100 policies per user
or you can create inline policies as long as you don’t exceed 10,240 chars in total aggregate policy size
So to answer your question: it depends.
AWS will never say that you're doing something wrong. It may be suboptimal from their perspective, but it may be perfectly fine from yours.
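As a sketch of the “one large JSON” approach mentioned above, several statements can live in a single policy document that is attached once (the statements, bucket, and role name are illustrative):

data "aws_iam_policy_document" "combined" {
  statement {
    sid       = "AllowEcrPull"
    actions   = ["ecr:GetDownloadUrlForLayer", "ecr:BatchGetImage"]
    resources = ["*"]
  }

  statement {
    sid       = "AllowS3Read"
    actions   = ["s3:GetObject"]
    resources = ["arn:aws:s3:::my-bucket/*"] # hypothetical bucket
  }
}

resource "aws_iam_policy" "combined" {
  name   = "combined"
  policy = data.aws_iam_policy_document.combined.json
}

resource "aws_iam_role_policy_attachment" "combined" {
  role       = "my-role" # hypothetical role name
  policy_arn = aws_iam_policy.combined.arn
}

This counts as a single attached policy against the 10-policy limit, as long as the resulting document stays under the managed policy size limit.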
What would be the go-to way of allowing k8s ExternalDNS to change records in a different account than that of the cluster?
We don’t need that
Basically you should have branded domains in one account
Those then get cnamed or aliased to the infra domains
This should almost never have to happen except for cold start
We do this because we don’t want an errant deployment in staging for example to hijack the branded domain
Cross-account IAM access to Route53, or DNS zone delegation to the cluster account?
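For the cross-account IAM route, a hedged sketch of the role that would live in the DNS account (account IDs and the zone ID are placeholders):

# Trust: let the cluster account assume this role
data "aws_iam_policy_document" "trust" {
  statement {
    actions = ["sts:AssumeRole"]
    principals {
      type        = "AWS"
      identifiers = ["arn:aws:iam::222222222222:root"] # hypothetical cluster account
    }
  }
}

# Permissions: record changes limited to one hosted zone
data "aws_iam_policy_document" "route53" {
  statement {
    actions   = ["route53:ChangeResourceRecordSets"]
    resources = ["arn:aws:route53:::hostedzone/Z0000000000000"] # hypothetical zone
  }

  statement {
    actions   = ["route53:ListHostedZones", "route53:ListResourceRecordSets"]
    resources = ["*"]
  }
}

resource "aws_iam_role" "external_dns" {
  name               = "external-dns-cross-account"
  assume_role_policy = data.aws_iam_policy_document.trust.json
}

resource "aws_iam_role_policy" "external_dns" {
  name   = "route53-changes"
  role   = aws_iam_role.external_dns.id
  policy = data.aws_iam_policy_document.route53.json
}

ExternalDNS would then be configured to assume that role (if your version supports an AWS assume-role option); otherwise zone delegation into the cluster account avoids the cross-account hop entirely.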
2019-10-24
Hi all, I'm getting this error using the ECR module: aws_ecr_repository_policy.default: InvalidParameterException: Invalid parameter at 'PolicyText' failed to satisfy constraint: 'Invalid repository policy provided'
Is there a fix? Do I need to provide the principals_full_access field?
you don’t need to provide it. But if you do, make sure those principal ARNs are correct and exist
thanks
AWS Chatbot is an interactive agent that makes it easy to monitor and interact with your AWS resources from your team chat room. Learn more about the key benefits and how it works.
Yeah I checked that out a month or so ago when they first released it.
It’s promising but doesn’t do what I need for now
2019-10-27
Hey everyone, I am working through our IP addressing for a new AWS migration and it would be great to hear some thoughts on how to structure public/private subnets, please. Curious how others manage this. I'm pretty fresh to AWS at a large scale.
Our setup:
- We have a mix of Kubernetes, VM, and managed service (DB/queuing/etc.) workloads. There is communication between them.
- We are multi-AZ and are planning to minimise inter-AZ traffic, i.e. apps target other apps in the same AZ via DNS.
- We are in multiple regions (identical setup, just different VPCs with non-clashing addressing, not peered)
AWS subnetting option 1 (my proposal):
- VPC /16
- AZ /18 (one per AZ plus one /18 spare)
- Split each AZ into a /20 per workload (assume even distribution, but that's not important for this chat), e.g. /20 for public k8s/load balancers, /20 private k8s (nodes/pods), /20 VMs, /20 managed services. Reality is different sizing, but it's still plenty of addresses.
- Pros: AZ boundary is very clear because the /18 defines it, which helps troubleshooting etc. I suspect we will have separate (k8s and VM) node pools per AZ. We have had odd hacky IP whitelists in our current cloud due to lack of auth between apps.
- Cons: Overhead of managing it?
AWS subnetting option 2 (Partner proposed):
- VPC /16
- Public /17
- Private /17
- Pros: Don't have to think about (micro-manage) IPs. Based on EC2 metadata, target the per-AZ service; use some selector mechanism in the private load balancer/security group (we would do this in option 1 anyway).
- Cons: The traffic path will be non-obvious from the endpoint name when troubleshooting and will require an AZ membership lookup (potentially retrospectively)? K8s node pool rebalancing (I've heard this is a thing)? Does this apply to ASGs?
In both cases the Cloudposse named-subnets module looks like a great fit and we would pass the subnet IDs in as required.
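A small sketch of how option 1 could be carved up with cidrsubnet (the CIDR, AZs, and workload names are illustrative):

locals {
  vpc_cidr = "10.0.0.0/16" # illustrative VPC range
  azs      = ["us-east-1a", "us-east-1b", "us-east-1c"]

  # One /18 per AZ out of the /16; the 4th /18 stays spare
  az_cidrs = { for i, az in local.azs : az => cidrsubnet(local.vpc_cidr, 2, i) }

  # Four /20 workload blocks inside each AZ's /18
  workloads = ["public-k8s", "private-k8s", "vms", "managed"]
  subnets = {
    for pair in setproduct(local.azs, range(length(local.workloads))) :
    "${pair[0]}-${local.workloads[pair[1]]}" => cidrsubnet(local.az_cidrs[pair[0]], 2, pair[1])
  }
}

output "subnet_plan" {
  value = local.subnets
}

The resulting map can then be fed into whichever subnet module you settle on.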
@kskewes will get back to you tomorrow
thanks heaps.
2019-10-28
Anyone using the AWS MSK resource (Apache Kafka) with data (topics, Kafka and ZooKeeper state) stored in a bucket?
@kskewes FWIW I’m doing /17 non-clashing per environment per region with 8 (4 public and 4 private) /19 subnets per region which allows for 4 AZs per region.
Thanks Cameron! So with a subnet per AZ you can avoid the ASG rebalancing. Put mixed workload types (k8s/vm/etc) into the same subnet, whether public/private?
Right. Generally we always default to using the private subnets for security unless we have public access requirements: i.e. internet/public load balancers are in the public subnets.
K8s, EC2/VMs, RDS, etc. are all in Private subnets
And ASG, RDS, etc. are configured to use all the subnet/AZs available in that region for greater availability
I know you said you are likely optimizing for inter-AZ traffic costs, but we do the opposite: optimize for AZ failure/high-availability
Thanks again. I think for RDS etc. we would have all AZ subnets in. For VMs and k8s though, an ASG per AZ subnet, i.e. split workloads into their own subnets.
Outcome would be same (each ASG can scale) but we’d have more management overhead and clearer AZ boundaries.
Right
yes, I agree with @Cameron Boulton. We follow a similar approach in our reference architectures.
Get up and running quickly with one of our reference architecture using our fully automated cold-start process. - cloudposse/reference-architectures
You'll see our examples mostly use the terraform-aws-dynamic-subnets module
(we have 3 modules for subnet strategies)
the terraform-aws-named-subnets came from a customer requirement we did a few years ago, but isn’t what we generally use.
Thanks heaps Erik.
Be nice if you could use security groups in k8s NetworkPolicy objects.
I think the fact I’m concerned about doing IP based policies is the big difference.
For kubernetes, using a Mesh is the way to achieve greater network security
yeah, we are a wee way from that
Even Cilium CNI for L7 HTTP path filtering
Not sure what your capacity planning looks like, but that allows for 8,192 addresses per subnet or 32,768 per VPC
Looks like some security staff were asleep at the switch
Always at the mercy of your weakest link and in process automation, that weakest link is humans
Looks like some security staff were asleep at the switch
theregister is basically a tech tabloid
Does anyone have a less sensational source?
Catch point maybe?
Cybersecurity has always been the Achilles heel of the digital world. Digital security protocols, firewalls, and advanced authentication methods have improved and tightened internet security, but even with all these measures in place, cyberattacks are inevitable. You can only mitigate the impact and prevent any major compromise before it turns…
2019-10-29
This Thursday 31/10, the 8th chapter in DinoCloud's webinar series, “Desplegando entornos altamente disponibles en la nube” (Deploying highly available environments in the cloud), will take place! In this edition Nicolás Sánchez (COO) and Juan Diaz (Cloud Architect) will discuss best practices when planning disaster recovery. We will also raffle USD 2,000 live!
Register for free here: https://www.eventbrite.com.ar/e/desplegando-entornos-altamente-disponibles-en-la-nube-tickets-78054520171
Deploying highly available environments in the cloud. Experiences and best practices when planning disaster recovery.
GM everyone!
Has anyone used the Beats Central Management feature with Elastic Cloud (official Elastic-hosted cloud)?
I am getting the error message: add_cloud_metadata/add_cloud_metadata.go:87 add_cloud_metadata: hosting provider type not detected.
2019-10-30
Interesting, thanks for sharing
2019-10-31
Hey folks, any suggestions for an alternative to Beanstalk environment variables for managing environment variables? I still want my application to consume environment variables for config, but using Terraform to manage those variables seems like it's going to be painful.
Is AWS Parameter Store solid? Is there an alternative that folks would suggest?
Chamber + SSM Parameter Store
and you can expose them directly in the Taskdef
cloudposse has modules for chamber and SSM with examples
Also, there are a few ways of doing that using Elastic Beanstalk:
- Store all the ENV vars in SSM (using chamber for example, as @jose.amengual pointed out), then during terraform apply read them from SSM and put them into https://github.com/cloudposse/terraform-aws-elastic-beanstalk-environment/blob/master/main.tf#L778
Terraform module to provision an AWS Elastic Beanstalk Environment - cloudposse/terraform-aws-elastic-beanstalk-environment
In this case, EB will provide those ENV vars to your app automatically
- Store the ENV vars in SSM and let the app read them from SSM using an AWS SDK (if you want to go this route)
Huh, I’ll look into those. I’ve seen Chamber around before, but haven’t dug in too deeply. I like the idea of reading in at deploy time and then passing to EBS from TF. Good stuff. Thanks gents.
also, in TF, when you create some resources like RDS, you can write all required variables (e.g. host, user, password) to SSM directly after you create the resources, e.g. https://github.com/cloudposse/terraform-root-modules/blob/master/aws/grafana-backing-services/aurora-mysql.tf#L194
Example Terraform service catalog of “root module” blueprints for provisioning reference architectures - cloudposse/terraform-root-modules
in this case, you will have all secrets automatically saved in SSM, and then in EB you read them from SSM and populate ENV vars
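A tiny sketch of that write-to-SSM step, using chamber-style /<service>/<key> paths (the service name and variables are hypothetical):

variable "db_endpoint" {} # e.g. an output from your RDS/Aurora module
variable "db_password" {}

resource "aws_ssm_parameter" "db_host" {
  name  = "/myapp/db_host" # chamber sees service "myapp", key "db_host"
  type  = "String"
  value = var.db_endpoint
}

resource "aws_ssm_parameter" "db_password" {
  name  = "/myapp/db_password"
  type  = "SecureString"
  value = var.db_password
}

chamber (or the EB settings approach above) can then surface those values as ENV vars for the app.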
@Andriy Knysh (Cloud Posse) Do you know of an example of reading in values from Parameter Store? Realize I can use the “aws_ssm_parameter” data source, but I'm wondering how I can read in all SSM parameters for a particular path /$PROJECT/$ENV/* without having to pass all the names of the parameters I'm storing in as a var.
@Matt Gowie when you read and use the params in Terraform, aws_ssm_parameter is the way to go
if you want to read many params and use them in your app as ENV vars, you can use something like this https://github.com/cloudposse/geodesic/blob/master/rootfs/etc/init.d/atlantis.sh#L45
Geodesic is a cloud automation shell. It's the fastest way to get up and running with a rock solid, production grade cloud platform built on top of strictly Open Source tools. ★ this repo! h…
read all params from a service (namespace) and export them into ENV vars
Maybe that’s not possible though.
Wouldn’t be too painful if it wasn’t really…
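Reading a whole path from Terraform is possible with a newer AWS provider via the aws_ssm_parameters_by_path data source; a hedged sketch, with the path as a placeholder:

data "aws_ssm_parameters_by_path" "app" {
  path            = "/myproject/dev" # hypothetical /$PROJECT/$ENV prefix
  with_decryption = true
}

# names and values are parallel lists; zipmap turns them into one map
output "app_env" {
  value     = zipmap(data.aws_ssm_parameters_by_path.app.names, data.aws_ssm_parameters_by_path.app.values)
  sensitive = true
}

If your provider version predates that data source, falling back to chamber or a script like the one linked above is the usual workaround.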