#aws (2019-02)
Discussion related to Amazon Web Services (AWS)
Archive: https://archive.sweetops.com/aws/
2019-02-01
in ECS, with autodiscovery ALB.. is it possible to set the subdomain name to be different from the service name?
2019-02-04
I’ve been trying to figure out how to ‘architect’ the stack for our apps. We have 4 different services. This is a Ruby on Rails stack.
- the main app, public DNS
- auth, public DNS
- background scheduler, private DNS; needs to be accessible from the main app
- another app, private DNS; to be accessible by the main app
Fargate, RDS, ElastiCache.
I guess I need two ALBs: one private and one public.
— questions — Is there any advantage/benefit to throwing Traefik in there somewhere?
- if so, what would the stack look like? (a secondary ECS cluster just for Traefik, or stick Traefik in the same cluster as the apps?)
Is there a tutorial / guide on how to use SSM Parameter Store to get ENVVARS into each task/container on deploy?
@i5okie this sounds like a pretty contained app; I’m not sure what Traefik would add to it, being more of an edge proxy
so just the dual ALB public and private?
Is auth an internal app in the above?
Amazon ECS enables you to inject sensitive data into your containers by storing your sensitive data in either AWS Secrets Manager secrets or AWS Systems Manager Parameter Store parameters and then referencing them in your container definition.
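For reference, a rough Terraform sketch of wiring an SSM parameter into a task definition that way — the names, ARN, and execution role here are placeholders, not from this thread, and the execution role must be allowed to call ssm:GetParameters on the referenced parameters:
```
resource "aws_ecs_task_definition" "app" {
  family                   = "main-app"
  requires_compatibilities = ["FARGATE"]
  network_mode             = "awsvpc"
  cpu                      = "256"
  memory                   = "512"

  # hypothetical role; needs ssm:GetParameters on the parameters below
  execution_role_arn = "${aws_iam_role.ecs_execution.arn}"

  container_definitions = <<DEFINITION
[
  {
    "name": "app",
    "image": "example/app:latest",
    "essential": true,
    "secrets": [
      {
        "name": "DATABASE_URL",
        "valueFrom": "arn:aws:ssm:us-east-1:123456789012:parameter/prod/app/DATABASE_URL"
      }
    ]
  }
]
DEFINITION
}
```
ECS resolves each `valueFrom` at task start and injects it as the env var named in `name`, so nothing sensitive lives in the task definition itself.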
yeah just a microservice type thing
yeah make that an internal alb
thank you
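For reference, a minimal sketch of the dual-ALB split discussed above; names and subnet variables are placeholders:
```
resource "aws_lb" "public" {
  name               = "apps-public"
  internal           = false
  load_balancer_type = "application"
  subnets            = ["${var.public_subnet_ids}"]
}

# internal ALB: only resolvable/routable from inside the VPC,
# for the scheduler and the other private app
resource "aws_lb" "private" {
  name               = "apps-private"
  internal           = true
  load_balancer_type = "application"
  subnets            = ["${var.private_subnet_ids}"]
}
```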
2019-02-05
Anyone using 99designs/aws-vault with Packer? I set the variables to use the AWS env vars:
"aws_access_key": "{{env `AWS_ACCESS_KEY_ID`}}",
"aws_secret_key": "{{env `AWS_SECRET_ACCESS_KEY`}}",
but AWS just spits it back out: Build 'amazon-ebs' errored: error validating regions: AuthFailure: AWS was not able to validate the provided access credentials
Works fine with terraform though
You also need to set 2 more variables
I forget exactly, but they are AWS_SESSION_*
AWS_SECURITY_TOKEN=
AWS_SESSION_TOKEN=
Ahh, I did wonder if I needed the session token. Terraform just kinda works (outside of geodesic)
Thanks; naming it aws-vault is a killer, as all the search results are AWS (HashiCorp) Vault related
i know!
bad name
couldn’t they just have called it aws-assume-role
lol almost as bad as another tech we use called EventStore. Sure… name your product after the actual bloody pattern
one day terraform will produce errors that don’t fill me with rage
the token = AWS_SESSION_TOKEN bit sorted Packer out. Thanks for that. Terraform automagically picks up the env vars. Damn the inconsistency
more confused after poking through other people’s shit on GitHub, as they’re using the same thing.
We are looking at implementing a transit VPC. Most third-party solutions (Cisco, Aviatrix, etc.) are expensive. Has anyone used AWS Transit Gateway? If yes, how was the experience?
We decided not to use it because of the increased cost over using VPC peering
I see
In our case there are multiple VPCs, and giving VPN access to a transit VPC makes more sense
So we are evaluating this option
It simplifies things, but at additional cost for all traffic. So it depends on the use case. If it’s only for user VPN, it could make sense. But we needed that as well as apps going cross-account for shared resources
You may find this useful: https://aws.amazon.com/blogs/networking-and-content-delivery/vpc-sharing-a-new-approach-to-multiple-accounts-and-vpc-management
My first interaction with AWS was immediately after the launch of the Asia Pacific (Sydney) AWS Region, just a bit over 6 years ago. Back then, the AWS Management Console had fewer services, and I quickly found the Amazon Virtual Private Cloud (VPC). In under 10 minutes, I could define a new VPC, with subnets, […]
2019-02-06
Thanks
Hello, I’m looking at VPC sharing as a way to reduce VPN costs, but I already implemented a transit VPC solution using FRR with BGP and Strongswan (IPsec) https://frrouting.org/
How does VPC Sharing fit in with the practice of having separate AWS accounts for each environment? As an example, would it be a bad idea to share a VPC between Staging and Production?
@Igor VPC sharing is not meant to be cross-environment. The idea is to a) centralize management/billing in one account and b) make account boundaries transparent per environment. So for example, if the dev environment has components spread across multiple AWS accounts, then all those components can be part of a single VPC (which earlier required separate VPCs)
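As a rough sketch of the mechanics: VPC sharing goes through AWS Resource Access Manager, where the owner account shares subnets with participant accounts. Resource names and the account ID below are placeholders, and the RAM resources need a recent AWS provider:
```
resource "aws_ram_resource_share" "vpc" {
  name                      = "shared-vpc-subnets"
  allow_external_principals = false # keep sharing inside the AWS Organization
}

# invite the participant account (placeholder id) into the share
resource "aws_ram_principal_association" "dev" {
  resource_share_arn = "${aws_ram_resource_share.vpc.arn}"
  principal          = "111111111111"
}

# share individual subnets; the participant launches resources into them
resource "aws_ram_resource_association" "subnet" {
  resource_share_arn = "${aws_ram_resource_share.vpc.arn}"
  resource_arn       = "${aws_subnet.shared.arn}"
}
```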
@Pablo Costa I am curious how you are creating/updating the transit VPC. Does Strongswan give you any tools to do it, or are you creating all the required routing manually (from the AWS console or Terraform)?
Amazon WorkSpaces Self-Service Portal. Contribute to eeg3/workspaces-portal development by creating an account on GitHub.
This looks pretty cool
2019-02-13
Hey, did anyone upgrade dc1 to dc2 Redshift clusters using CloudFormation? I know there are 2 paths, snapshot-restore and elastic resize, but I don’t want to cause drift from the CF state by doing it manually, and I wonder if CF handles the upgrade without data loss
FYI, if anyone is interested: on a dc1 to dc2 Redshift upgrade via CloudFormation, the data is migrated. The only thing to remember is that during the migration the cluster is available in read-only mode
Automatically monitor your AWS service usage and receive notifications as you approach limits.
Morning people, can anyone help me with how ECS decides to place tasks when multiple ordered_placement_strategy blocks are used?
Given the following example:
I’ve got 4 tasks (Docker containers) and 2 machines in 2 different AZs
ordered_placement_strategy {
  type  = "spread"
  field = "attribute:ecs.availability-zone"
}
ordered_placement_strategy {
  type  = "binpack"
  field = "memory"
}
Will ECS stop bringing tasks up because the first placement block says different AZs, or will something entirely different happen? Anyone got any ideas?
Are they 4 tasks of 1 service? Could you explain a bit more?
So:
- 1 service
- 4 tasks
- 2 instances
And it places 2 and stops?
Or you are just asking?
If asking, then it should put 2 tasks per instance
damn, this Slack doesn’t notify me when I get replies in a thread…
Ok, so the situation is even weirder, because we are using awsvpc mode and it has restrictions on ENIs…
meaning that the spread based on AZ is not needed
because a t2.small can have only 2 tasks per machine…
it supports 3 ENIs: 1 for the machine, 2 for tasks
so sticking with memory only, just in case the app has a memory leak.
and with 2 machines and 4 tasks we get an even spread.
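So the service definition collapses to a single strategy, something like the sketch below (cluster and task names are placeholders):
```
resource "aws_ecs_service" "app" {
  name            = "app"
  cluster         = "${aws_ecs_cluster.main.id}"
  task_definition = "${aws_ecs_task_definition.app.arn}"
  desired_count   = 4

  # pack on memory only; the awsvpc ENI limit (2 tasks per t2.small)
  # ends up spreading 4 tasks evenly across the 2 machines anyway
  ordered_placement_strategy {
    type  = "binpack"
    field = "memory"
  }
}
```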
tbh I do not like awsvpc mode, it kills the docker vibe
Yeah, no idea why you’re using that
2019-02-14
The CloudFront docs on which types of certs are valid are very confusing. It seems you can use an ECC cert between CloudFront <—> origin but only an RSA cert between viewer <—> CloudFront, but I can’t find this explicitly stated anywhere
hrm… not something we’ve run into
(that said, we have some modules for cloudfront and acm)
2019-02-18
do you observe RDS hostname resolution issues in us-east-1?
2019-02-19
When you waste time writing code around AWS Config… then look at the pricing.
@chrism Wrote https://github.com/alphagov/pay-aws-compliance a while back to help with auditing of AWS resources (and some others) - run as a scheduled Lambda - cheap!
Contribute to alphagov/pay-aws-compliance development by creating an account on GitHub.
currently running https://github.com/toniblyx/prowler
AWS Security Best Practices Assessment, Auditing, Hardening and Forensics Readiness Tool. It follows guidelines of the CIS Amazon Web Services Foundations Benchmark and DOZENS of additional checks …
which is pretty nice
I’ll add the alphagov one to my list
Prowler looks pretty full featured, but bash!
plops it in a report tab; though I need to look at the background colour of the html at some point.
Nice
Can hook up Lambdas to SNS to send email reports etc too
cuts down on the drudgery
It’s a nice tool
Looks good
Drowning in things analysing AWS; though I’m yet to get anything of value from GuardDuty beyond 3rd parties appreciating it as a tick box
Never looked at GuardDuty
It’s impossible to keep up with all the AWS services hah
they certainly don’t miss a trick; just wish they’d have a chat with other areas of AWS before building them
annoying levels of disconnect between products; you set up your multi-region CloudTrail… and then they bring out GuardDuty, which has to be enabled in each region. Oh, and you can hook it up to another account so you can read the GuardDuty events of prod/testing etc. from your root account… but you have to do it per region, per account, via invites
… one of those moments you terraform the creation and realise you’re going to have to go hand-ball all the invite acceptance
They almost got it right, but then didn’t
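For what it’s worth, a sketch of terraforming one region of the master/member dance — account wiring, email, and the aws.prod provider alias are placeholders; the invite-accepter resource needs a reasonably recent AWS provider, and the whole thing still has to be repeated per region:
```
resource "aws_guardduty_detector" "master" {
  enable = true
}

resource "aws_guardduty_detector" "member" {
  provider = "aws.prod" # alias for the member account
  enable   = true
}

# send the invite from the master account
resource "aws_guardduty_member" "prod" {
  account_id  = "${aws_guardduty_detector.member.account_id}"
  detector_id = "${aws_guardduty_detector.master.id}"
  email       = "security@example.com"
  invite      = true
}

# …and accept it from the member account
resource "aws_guardduty_invite_accepter" "prod" {
  provider          = "aws.prod"
  detector_id       = "${aws_guardduty_detector.member.id}"
  master_account_id = "${aws_guardduty_detector.master.account_id}"
}
```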
Pretty cheap though compared to most aws things so it has that going for it
meh, think I’ll just configure the hell out of existing scanners. AWS Config’s one of those things that looks like it’ll drive me up the f’ing wall rather than save me time
I was looking at https://github.com/cloudposse/terraform-aws-cloudwatch-flow-logs but it doesn’t seem to fit with the ref-architecture way of splitting audit off / storing to S3. It’s just flow log > Kinesis > CloudWatch
Terraform module for enabling flow logs for vpc and subnets. - cloudposse/terraform-aws-cloudwatch-flow-logs
it’s kinda either/or; I may be overthinking this
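If the goal is flow logs straight to an S3 bucket in the audit account, a minimal sketch without the Kinesis/CloudWatch leg might look like this (bucket name is a placeholder; S3 as a flow log destination needs AWS provider >= 1.34):
```
resource "aws_flow_log" "vpc" {
  vpc_id               = "${aws_vpc.main.id}"
  traffic_type         = "ALL"
  log_destination_type = "s3"
  log_destination      = "arn:aws:s3:::example-audit-flow-logs"
}
```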
Not sure on that one tbh, but have you looked at flow logs, or thought about what you wanna do with them?
Have used mainly as a checkbox exercise before
Creating/managing alerts/dashboards with them is quite expensive in terms of mgmt cost
Yeah, tbh the idea seems utterly pointless for the most part
Let’s put it this way: if you’re having to dig into TBs of flow logs to find evidence of something, you’ve missed a bigger step elsewhere
How else are you going to find out how much data was exfiltrated from your environment for realz?!
nothing like a terraform-github safari to find how other people dealt with stuff, and realising you’ve been here before
lol
reaches for the star already starred
seriously though, storing flow logs in something like $LOGGING_SAAS_PROVIDER generates so much data that it is not cheap to do anything proactive with it
CloudWatch would be cheaper, but limited
More worrying is when our SIEM tool hooks on, and reprocesses the same data
You know your SIEM monitoring is worth the cash when you terraform up 30 machines in vSphere, tear them all down, and they alert you a day later
Well worth the email notification at 7am on a Saturday
GuardDuty will also look for compromised EC2 instances talking to malicious entities or services, data exfiltration attempts, and instances that are mining cryptocurrency.
well, at least GuardDuty covers the exfiltration side; though I’d be interested to know how that works if it’s someone’s shitty app code being abused to haul data out
Bet it won’t pick up exfiltrating to an attacker owned S3 bucket
“its in aws, everything is awesome”
best place to pull data to; AWS would take an age to lock you out, and the transfer’s that fast you’ve less chance of the owner locking you out before you get it
Need to watch out for that too: if you’re using S3 endpoints and aren’t locking down your buckets, it bypasses any egress proxies you may have in place
Found that out during a pentest :D
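A hedged sketch of locking a gateway S3 endpoint down to known buckets, so the endpoint can’t be used as an exfiltration path to arbitrary buckets (bucket names and region are placeholders):
```
resource "aws_vpc_endpoint" "s3" {
  vpc_id       = "${aws_vpc.main.id}"
  service_name = "com.amazonaws.us-east-1.s3"

  # only allow the endpoint to reach buckets we own
  policy = <<POLICY
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": "*",
      "Action": "s3:*",
      "Resource": [
        "arn:aws:s3:::example-app-bucket",
        "arn:aws:s3:::example-app-bucket/*"
      ]
    }
  ]
}
POLICY
}
```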
we only really have public stuff in buckets; outside of the whole Snowplow usage, and that’s all anonymised noise
keeping data in SAAS, aka flick switch for encryption, flick switch for firewall, flick switch for logs
after years of terraform it’s nice at times to be able to keep shit pretty secure and not have to spend days tweaking bollocks to get there
pretty secure, as nothing is totally secure
I need to look at s3 bucket security again as they love to change things
The UI change was nice, more explicit options to not allow buckets to become public by accident
Don’t wholly understand why they’re terrified of making it default-on
the IAM stuff around S3 is pretty powerful; ridiculously so compared to Azure’s
yeah but too many folks left things open which leads to breaches
I mean, why don’t they default it to most secure and make people open it up? They’ve made the UI better, but it won’t stop people doing dumb shit
I mean, I know it won’t; I’m still sat skimming TLS updates for bucket names
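The Terraform equivalent of those new UI options, for anyone wanting to enforce it per bucket, is roughly this (needs AWS provider >= 1.52; the bucket reference is a placeholder):
```
resource "aws_s3_bucket_public_access_block" "this" {
  bucket = "${aws_s3_bucket.this.id}"

  # refuse public ACLs and public bucket policies outright
  block_public_acls       = true
  block_public_policy     = true
  ignore_public_acls      = true
  restrict_public_buckets = true
}
```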
ah, hadn’t spent too much time poking in the UI recently
our original account is old as f’; so where no terraform lives, one has to spelunk in the AWS UI.
Compared to the Azure portal, though, everything’s a dream
hah, so I’ve heard
Terraform and Azure sounded like a dream come true. Then I used it in anger for a month. Lots of anger. Lots and lots.
3-machine cluster: 22 minutes… still not complete. TIMES THE HELL OUT
wasn’t very impressive
2019-02-21
AWS Firecracker is tiny, efficient, fast, and might redefine the virtual machine. Here’s what you need to know about this AWS product.
I have been getting a lot of “We currently do not have sufficient {instance-type} capacity in the Availability Zone you requested {zone}.” messages in EC2. Is this common? Does AWS address these capacity issues?
this is where RIs help
…as they provide capacity reservations
@Igor can’t say we’ve been seeing it lately, but it highly depends on (a) the region you’re operating in (b) the type of instance
for example, we’ve seen this in the past when AWS has had a zone failure in a region and everyone auto-scales out to the other zones
So I need to purchase RIs for the specific AZ?
yes, the “capacity” reservations are tied to the AZ, but the cost savings span all AZs
though AWS has been revamping this, so maybe it’s easier now?
At least that’s something
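The revamp referred to above is presumably On-Demand Capacity Reservations, which split the capacity piece off from the RI billing piece; a sketch with placeholder values (the resource needs AWS provider >= 1.58):
```
# reserves capacity in one AZ, independently of any RI billing discount
resource "aws_ec2_capacity_reservation" "prod" {
  instance_type     = "m5.large"
  instance_platform = "Linux/UNIX"
  availability_zone = "us-east-1a"
  instance_count    = 4
}
```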
@Erik Osterman (Cloud Posse) Do you know if there is a way to attach the RIs to an ASG? I purchased a couple, but they seem to immediately be used up on existing running instances… which makes sense in retrospect
RIs are not a directly addressable resource from the EC2 perspective
it’s a billing instrument
Yeah, but when it comes to capacity
I want PROD instances to take priority over non-PROD, if that makes sense
but they are not running in the same account, right? so that won’t happen
I am not sure how to prioritize RI capacity reservations within an account
I am locked in for some customers due to not being able to move around NAT elastic IPs, and whitelisting changes are a pain
#HarshRealityOps ?
yea, there are in the end always some things outside of our control
the NAT IP thing comes up regularly
2019-02-27
has anyone seen something to enforce arbitrary limits on the number of specific kinds of resources which match a given tag?
basically a policy enforcement engine for AWS which would let us say:
- untagged instances are automatically destroyed
- if the count of instances with tag Developer=foobar is greater than X, then alert
- if the count of instances with tag Developer=baz is greater than Y, then kill until less than Z
something like https://github.com/RiotGames/cloud-inquisitor
Enforce ownership and data security within AWS. Contribute to RiotGames/cloud-inquisitor development by creating an account on GitHub.
2019-02-28
AWS Organizations and AWS SSO setup guide here: https://github.com/osulli/aws-sso-setup
A guide on how to setup AWS Organizations and AWS SSO and an example permissions matrix. - osulli/aws-sso-setup
Do you know if this works with things like aws-vault?
@pecigonzalo I don’t really see why not. The only trouble is that if you create a resource in one account, you have to create policies to share it. So for instance, I can share my S3 bucket storing my state files with all my accounts reasonably easily… but my DynamoDB table that contains the lock hash is not proving so easy to allow access to from other accounts!
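For the bucket half, a minimal sketch of a cross-account state-bucket policy (account ID and bucket name are placeholders); DynamoDB has no resource-based policies here, which is why the lock table generally needs an assumed role in the owning account instead:
```
resource "aws_s3_bucket_policy" "state" {
  bucket = "${aws_s3_bucket.state.id}"

  # grant another account (placeholder id) access to the state objects
  policy = <<POLICY
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": { "AWS": "arn:aws:iam::222222222222:root" },
      "Action": ["s3:GetObject", "s3:PutObject", "s3:ListBucket"],
      "Resource": [
        "arn:aws:s3:::example-terraform-state",
        "arn:aws:s3:::example-terraform-state/*"
      ]
    }
  ]
}
POLICY
}
```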
Well, I’m not sure, since I could not find any docs on getting SSO to work even with the AWS CLI, with assumed roles or something
If you set up WSL via the Store (as they say you should now), geodesic fails to map the path :grimacing: :gun:
It assumes it’s still in local/lxss
when in reality it’s in AppData\Local\Packages\CanonicalGroupLimited.Ubuntu18.04onWindows_79rhkp1fndgsc
“Not working on some WSL environments” — Here is a list of the changes I had to make to get the root.cloud.posse script to run on WSL: In root.*.com: DOCKER_NAME was using the $NAME environment variable. …
Does this help?
Guess this needs to be smarter
Geodesic is a cloud automation shell. It's the fastest way to get up and running with a rock solid, production grade cloud platform built on top of strictly Open Source tools. ★ this repo! h…
joys of changing
in fact, simpler: the if/else just needs reversing; check for Canonical… but I imagine that might have unexpected consequences
or msft need to stop being random af
I’m taking the lazy way out and just blowing the folder away, seeing as MS’s command for removing lxss fully didn’t actually do what anyone of reasonably sound mind would call “FULLY remove”
kinda stuff that makes you think Ubuntu desktop may be the future; then you remember how much that blows
AWS v2 provider’s out
Terraform AWS provider. Contribute to terraform-providers/terraform-provider-aws development by creating an account on GitHub.
aka as good a time as any to add that version = "1.60.0" to your provider
Terraform AWS Provider Version 2 Upgrade Guide
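i.e. something like the following, so terraform init doesn’t pull 2.x under you before you’ve worked through the upgrade guide (region is a placeholder):
```
provider "aws" {
  region  = "us-east-1"
  version = "1.60.0" # pin to the last 1.x until ready for the 2.0 upgrade
}
```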
@chrism any chance we could get your help improving that path extrapolation? I’d be happy to jump on a call
Think it might be worth a ticket on WSL to add some sort of inbuilt winpath var
He asks, and the lord microsoft giveth
already exists
wslpath -wa . in WSL returns the Windows path to the current folder
Source code behind the Windows Subsystem for Linux documentation. - MicrosoftDocs/WSL
Been around a while too
I shall give that a stab
$ wslpath -w ~ → wslpath: /path/to/home: Result not representable. Is this intended behavior? I expect the following result: C:\Users\mkt\AppData\Local\Packages\CanonicalGroupLimited.Ubuntu…
Hah! Damn… looked like a good option
Just windows being windows. Getting in the way.