#aws (2018-12)
Discussion related to Amazon Web Services (AWS)
Archive: https://archive.sweetops.com/aws/
2018-12-04
anyone have any good alternatives to https://github.com/tmobile/pacbot ? I’ve tried getting pacbot setup numerous times and no luck.
PacBot (Policy as Code Bot) - tmobile/pacbot
2018-12-05
@Gabe not really asset mgmt, but CVE compliance checking - have used https://github.com/future-architect/vuls before which was pretty nice
Vulnerability scanner for Linux/FreeBSD, agentless, written in Go - future-architect/vuls
and re compliance for AWS account stuff - https://github.com/alphagov/pay-aws-compliance
Vuls runs on every node for CVE aggregation and dumps reports to S3. pay-aws-compliance runs as a scheduled Lambda function against an AWS account, pulls those reports, and emails operators if something is not in compliance.
awesome thanks @joshmyers i’ll check them out
It doesn’t look as pretty as pacbot
haha but i’m sure it works better
i’ve tried once a week every week to install pacbot and every time i get an error
2018-12-06
2018-12-11
Recently migrated our Postgres database to RDS Postgres using DMS and while there were some gotchas it was an excellent experience!
@daveyu I’m happy to answer any questions if you have them.
You didn’t by chance happen to migrate from Heroku, did you?
Nope. Self-managed EC2 instance, but I can’t imagine it would be much different (though I haven’t used Heroku Postgres) as DMS just uses the Postgres connection.
Found this. Damn. https://stackoverflow.com/questions/46939176/migration-from-heroku-postgresql-to-aws-rds-using-aws-data-migration-service-dm
I’m trying to use AWS DMS to copy and replicate data from a Heroku PostgreSQL database to an AWS RDS PostgreSQL instance but it isn’t working (more info below). In the DMS log I can see the following
Needs super user
one of the primary reasons for moving off heroku becomes an obstacle
hah, yea
I think this works in Heroku’s favor: vendor lock-in
@daveyu
@daveyu has joined the channel
TIL: don’t use the S3 bucket URL (even though that’s the only one AWS auto-completes) in a CloudFront distribution pointing at an S3 website… be sure to use the S3 website endpoint URL!
It’s easy to know you are doing it wrong… you will be greeted with an Access Denied XML page on your CloudFront distribution URL if you did the former
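For reference, a minimal Terraform sketch of the working setup (the bucket name and cache settings are hypothetical, and it uses the pre-4.x AWS provider syntax where the website block lives on the bucket): the CloudFront origin uses the bucket’s website endpoint as a custom HTTP origin instead of the REST bucket URL.

```hcl
resource "aws_s3_bucket" "site" {
  bucket = "example.com" # placeholder bucket name
  acl    = "public-read"

  website {
    index_document = "index.html"
  }
}

resource "aws_cloudfront_distribution" "site" {
  enabled             = true
  default_root_object = "index.html"

  origin {
    origin_id = "s3-website"
    # Use the *website* endpoint, not the bucket/REST endpoint.
    domain_name = aws_s3_bucket.site.website_endpoint

    # Website endpoints only speak HTTP, so CloudFront must treat this as a custom origin.
    custom_origin_config {
      http_port              = 80
      https_port             = 443
      origin_protocol_policy = "http-only"
      origin_ssl_protocols   = ["TLSv1.2"]
    }
  }

  default_cache_behavior {
    target_origin_id       = "s3-website"
    viewer_protocol_policy = "redirect-to-https"
    allowed_methods        = ["GET", "HEAD"]
    cached_methods         = ["GET", "HEAD"]

    forwarded_values {
      query_string = false
      cookies {
        forward = "none"
      }
    }
  }

  restrictions {
    geo_restriction {
      restriction_type = "none"
    }
  }

  viewer_certificate {
    cloudfront_default_certificate = true
  }
}
```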
2018-12-12
2018-12-13
@Erik Osterman (Cloud Posse) have you set up an EC2 health check feed into slack before?
yes - quite easily done
or wait, i thought you meant the status page for EC2
(rss)
… if that’s the case, just use the /feed command to add the RSS feed (after adding the app)
we do something like this for #releases
No, instance health/status
Looking for any tips before we dive in
so that would be SNS notifications, no?
Yes, and presumably a lambda to post
cloudwatch -> sns -> lambda
ya
I was hoping you would say: absolutely, here’s a proven TF module
Terraform module to provision a lambda function that subscribes to SNS and notifies to Slack. - cloudposse/terraform-aws-sns-lambda-notify-slack
so we’ve used this module to send SNS alarms to slack
so now, you just gotta whip up the cloudwatch part
we have some examples for that. sec.
@sarkis wrote them
Terraform module to create CloudWatch Alarms on ALB Target level metrics. - cloudposse/terraform-aws-alb-target-group-cloudwatch-sns-alarms
EC2 health check
hrm…
yes, one sec
Terraform Module for providing a general EC2 instance provisioned by Ansible - cloudposse/terraform-aws-ec2-instance
this implements one for StatusCheckFailed_Instance
now need to glue the parts together. maybe someone else has written a module though… haven’t looked
(check the registry)
ya, thats my next step
do let me know what you find
it would be sweet to get those notices into slack
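Something like this sketch would cover the missing CloudWatch glue (the topic name, alarm thresholds, and the instance_id variable are assumptions); the notify-slack Lambda from the module above then just subscribes to the topic:

```hcl
variable "instance_id" {}

resource "aws_sns_topic" "ec2_health" {
  name = "ec2-health-alarms"
}

resource "aws_cloudwatch_metric_alarm" "status_check_failed" {
  alarm_name          = "status-check-failed-${var.instance_id}"
  namespace           = "AWS/EC2"
  metric_name         = "StatusCheckFailed_Instance"
  statistic           = "Maximum"
  comparison_operator = "GreaterThanThreshold"
  threshold           = 0
  period              = 60
  evaluation_periods  = 2

  dimensions = {
    InstanceId = var.instance_id
  }

  # Fire into SNS; the sns-lambda-notify-slack Lambda subscribes here and posts to Slack.
  alarm_actions = [aws_sns_topic.ec2_health.arn]
  ok_actions    = [aws_sns_topic.ec2_health.arn]
}
```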
2018-12-19
Enable access to your VPC and on-premises network from anywhere, on any device.
Finally
apparently it’s openvpn behind the scenes
2018-12-20
They promised at this re:Invent to deliver it before the holidays. Looks promising! At least $108 per single user for a full-day connection. Not sure how comparable it is to an existing OpenVPN setup which has to be maintained.
lol, that’s scary expensive
A good OpenVPN module is missing, but they have an OK-ish commercial BYOL offering, and it would suck for them to have something free competing.
we use the pfsense ami in aws that uses the openvpn client
its not a bad setup
and there’s an aws vpn integration piece
If I understand correctly you pay per time you actually use VPN per user, so it should be cheaper on week-ends, for example. Still expensive?
I have not used VPN on AWS for the last 3 years. Last time I used it was in Russia, where LinkedIn was blocked
you pay for the instance and then pay for the software/hr depending on instance size
If you use it together with a few alarms it’s probably totally ok. What you don’t want is that johnny and his team forget about the vpn connections and just add the office manager’s salary to the monthly bill
pfsense’s one costs $288 in total per month, which is $0.34/hour
the caveat to this is that i’m sure you pay for the internet traffic hitting the vpn etc., but that’s where you would want to use a split tunnel
t2.medium: $0.08/hr (software) + $0.046/hr (EC2) = $0.126/hr total
But wait, this means that if the instance is down then the VPN is down as well?
yea so you want to try to set up HA with it
exactly, as typical service you have to manage…
truue
I’ll be waiting for lambda vpn
pfsense also comes with other stuff like a FW and whatnot
not just managing it, but it can become a bottleneck due to the network limit per instance type.
I wishlisted AWS Lambda with GPU support, didn’t happen…
so it doesn’t really sound like a cool thing to have to route all the traffic through one instance.
yea i guess it depends on your budget cause im sure the aws service isnt cheap
where’s the link for the lambda vpn thing
it was a joke
that is what we are trying to compare now: $288 to manage it yourself for unlimited users vs $108 per single user with no management required.
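(Rough back-of-envelope, assuming that $108 is per always-connected user per month: self-managing breaks even at around three such users, since 3 × $108 = $324 > $288/month, before counting the time spent managing it.)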
i guess you could also look at https://www.netgate.com/support/
From configuration assistance to ensuring minimal downtime for mission-critical circuits, we have the expertise to assist you for any size network.
probably talk them down in price
hmmm then the management costs and possible incidents should also be taken into account.
and that is hard to convert into money usually
yes, true, for such a critical piece as a VPN tends to be
2018-12-24
When updating the Launch Configuration on an AutoScaling group, it seems like the only way to complete the deployment is to change the min instances to double the size, wait for the new ones to spin up, and then change the min back to what’s expected. Is there a better way to accomplish this?
Looks like CloudFormation has support for that
Yes, I don’t believe there’s a way to use launch configurations to trigger a rebuild of an ASG.
Change the ASG name so it gets recreated every time a new LC is created… see var.recreate_asg_when_lc_changes
- https://github.com/terraform-aws-modules/terraform-aws-autoscaling/blob/master/main.tf#L34
Terraform module which creates Auto Scaling resources on AWS - terraform-aws-modules/terraform-aws-autoscaling
Clever fix for this
This should be used with wait_for_elb_capacity & create_before_destroy to avoid leaving the target group with no instances, right?
If I remember correctly, wait_for_elb_capacity is optional, while create_before_destroy is already set to true in the module.
I’ll do a test and get back to you, but I suspect without wait_for_elb_capacity, there won’t be any healthy instances on the target group during the cut-over (which may or may not be okay depending on the environment).
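For the record, the name trick plus create_before_destroy and wait_for_elb_capacity boils down to roughly this sketch (the AMI, instance type, subnets, and target group here are placeholders):

```hcl
variable "ami_id" {}
variable "subnet_ids" { type = list(string) }
variable "target_group_arn" {}

resource "aws_launch_configuration" "app" {
  name_prefix   = "app-"
  image_id      = var.ami_id
  instance_type = "t3.medium"

  lifecycle {
    create_before_destroy = true
  }
}

resource "aws_autoscaling_group" "app" {
  # Interpolating the LC name into the ASG name forces a brand-new ASG whenever the LC changes.
  name                 = "app-${aws_launch_configuration.app.name}"
  launch_configuration = aws_launch_configuration.app.name
  min_size             = 2
  max_size             = 4
  vpc_zone_identifier  = var.subnet_ids
  target_group_arns    = [var.target_group_arn]

  # Don't tear down the old ASG until the new one has this many healthy instances behind the LB.
  wait_for_elb_capacity = 2

  lifecycle {
    create_before_destroy = true
  }
}
```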
2018-12-26
Would someone be interested in doing a project together on the ASG Launch Configuration update? Thinking of a module with a Lambda / Step Functions. A full ASG recreate is just too heavy sometimes.
That would be pretty neat @maarten .
Cool, I was thinking there are probably similarities updating EKS workers and ECS nodes. Maybe there is a way to catch both of them.
so if they use the same core module
… that would help
but for kubernetes, technically we’d want to cordon the node first, then drain it - for a smooth rolling update
maybe push the number of running services per instance to cloudwatch, and use that to make decisions for further rolling
What about autoscaling lifecycle hooks? Those should also do the trick imho.
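For context, a lifecycle hook can hold an instance in a wait state at termination until something external (e.g. a Lambda doing the cordon/drain) calls CompleteLifecycleAction — a rough sketch, where the ASG, SNS topic, and IAM role references are placeholders:

```hcl
resource "aws_autoscaling_lifecycle_hook" "drain_on_terminate" {
  name                   = "drain-before-terminate"
  autoscaling_group_name = aws_autoscaling_group.app.name # placeholder ASG
  lifecycle_transition   = "autoscaling:EC2_INSTANCE_TERMINATING"
  default_result         = "CONTINUE"
  heartbeat_timeout      = 300

  # A Lambda subscribed to this topic can drain the node, then call
  # CompleteLifecycleAction so the ASG finishes terminating the instance.
  notification_target_arn = aws_sns_topic.asg_hooks.arn # placeholder topic
  role_arn                 = aws_iam_role.asg_hooks.arn # placeholder role
}
```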
Have you seen our latest terraform external artifact module? Makes it easy to distribute and deploy complex lambdas without baking zips into git
I did, although I never disliked very small zips inside a repo, and I find external URLs confusing for people who aren’t used to the Cloudposse ecosystem, but we can talk about it.
The problem is fundamentally that we cannot rely on a “local” toolchain required for packaging (e.g. npm dependencies)
the artifact module supports local files (e.g. file://) as well as remote.
I guess if the CI system built the zip and committed back to the repo, that could work too
but I don’t want users committing zips to repos b/c it puts the onus on us to verify every zip
Or utilize CodeBuild, but then it gets massive
hrm
yes, that’s true - could use codebuild approach
albeit heavy handed - but at least self-contained
A type of Dockerfile for Lambdas would have been helpful.
yea, seriously
one day…
When singularity is reached.
Very small node scripts which just work with AWS don’t need packages, not sure what you think of that.
yea, when that’s the case - no need to use the external artifacts
but requiring single nodejs scripts with no dependencies is an undue limitation
yes
I wish it was possible to attach artifacts to a commit in github the way it’s possible to attach them to a release
2018-12-27
Novice question. Why do the ephemeral ports on an inbound ACL rule for a private subnet need to have a source of 0.0.0.0/0, when the traffic is coming from a NAT Gateway that should be running within the same VPC on a public subnet? Setting the source to the VPC CIDR doesn’t seem to work.
@Igor is your question related to a specific module or in general ?
general question
Ah ok, are you talking NACL or Security Group btw? For both I don’t have an answer tbh, but would love to replicate your issue.
I do know that an NLB will most likely send the traffic through as if it originated from the source. So for NLB I see the point.
But I was under the impression that one can change the rules the way they want them, without limitations, so I’d like to know what’s going on.
I’m talking about NACLs. I was stuck on an issue with my config, and setting the source to 0.0.0.0/0 on private subnet NACL fixed the issue for me. I referred to https://docs.aws.amazon.com/vpc/latest/userguide/vpc-recommended-nacl-rules.html and sure enough, it recommends this configuration as well
Use the network ACL rules we recommend to provide an additional layer of security for your subnets.
Hardening with NACLs is tricky, and I personally would never start working on them unless someone really requested me to do so. Even when I had AWS consultants over auditing multiple times, NACLs were never part of the audit.
What is the reason you want to use them ?
Security, but I am also trying to make sense of how the AWS NAT Gateway works.
I’m pretty sure things are locked down regardless of ACL rules given that there is no route to the IGW from the private subnet
is the nat gw working for you now?
Yup, got it to work by changing the source of the NACL rule for ephemeral ports.
I don’t think NAT normally translates the address in both directions.
so you would still receive return traffic from the outside IP, and you would need to allow traffic from it in your private subnet
Makes sense
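To spell that out: return traffic coming back in through the NAT gateway still carries the remote host’s public IP as its source by the time it hits the private subnet’s NACL, which is why the inbound ephemeral-ports rule needs 0.0.0.0/0 — roughly this, with a placeholder NACL reference:

```hcl
resource "aws_network_acl_rule" "private_inbound_ephemeral" {
  network_acl_id = aws_network_acl.private.id # placeholder NACL
  rule_number    = 100
  egress         = false
  protocol       = "tcp"
  rule_action    = "allow"
  # Return traffic from the internet (via the NAT gateway) keeps the remote
  # host's IP as the source, so restricting this to the VPC CIDR breaks it.
  cidr_block = "0.0.0.0/0"
  from_port  = 1024
  to_port    = 65535
}
```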