#aws (2022-05)
Discussion related to Amazon Web Services (AWS)
Archive: https://archive.sweetops.com/aws/
2022-05-01
2022-05-02
Years ago, when I was using ECS on EC2, I used the ASG TERMINATING lifecycle hook to set up a “graceful” termination operation that would drain the EC2 container instance of containers before terminating it. Is this still required with ECS on EC2 in 2022? Or is there more integration between ECS and ASG now?
Yes, this is still required using drain hooks
I haven’t set this up in a while, but these are my notes from then
https://github.com/nitrocode/awesome-aws-lifecycle-hooks
things could have changed since, but I haven’t noticed anything. please let me know if there is an easier way than asg lifecycle hooks
Awesome aws autoscaling lifecycle hooks for ECS, EKS
TLDR: How do you achieve static IPs for a Root Domain hosted behind CloudFront without using Route53 Aliases?
Details: I am working with a client that started with a website running on a single EC2 instance. An Elastic IP (EIP) was associated with the instance. The IP was used to create A records in a third-party DNS for routing the root and the “www” endpoints to the instance.
[root.com](http://root.com), [www.root.com](http://www.root.com) → 3rd-party DNS (A) → EIP → EC2
After much refactoring, the site is now running behind CloudFront and an ALB. The CloudFront endpoint is published as a CNAME for the “www” endpoint and works great. The root, however, is still using the old EIP as an A record because you can’t use CNAMEs with the root.
[www.root.com](http://www.root.com) → 3rd-party DNS (CNAME)→ CloudFront → ALB
[root.com](http://root.com) → 3rd-party DNS (A)→ EIP → EC2 (Redir to www with NGINX)
Of course, the “easiest” (!) way to get the root domain pointed at CloudFront is to create an ALIAS record in Route53. Ha! I say “easiest” because moving the zone from the third-party DNS hosting into Route53 would take far too much effort for this one little redirect. For example, retraining people to use AWS instead of the DNS tool they have been using for years among many, many other potential snares and time sinks.
So I’ve looked at a couple solutions.
The current one works but I don’t want to have to run/manage an NGINX server for redirects. It’s also not highly available; if the server goes offline then redirects will fail. So use an ALB, right?
Since the IPs for ALBs change, but NLBs can have an EIP assigned to them, I tried assigning an EIP to a Network Load Balancer backed by an ALB that listens on ports 80 and 443. The listeners have a rule that redirects the request to “www”. I should add, content doesn’t need to be served from the root domain; it should all come from “www”.
[root.com](http://root.com) → 3rd-party DNS (A)→ EIP -> NLB -> ALB -> Redirect to WWW
This works for the most part, but I feel like an NLB and an ALB just for redirecting a request is overkill. I figure there has to be a better, cheaper solution. (This one is about $30/month, not including traffic, which should be pretty minimal.)
So I looked at AWS Global Accelerator. This provides static IPs that can be pointed at a few different AWS resources; ALBs are there but sadly not CloudFront (AFAICT).
[root.com](http://root.com) → 3rd-party DNS (A)→ Global Accelerator -> ALB (live site!)
In my early exploration of this, it’s only working for HTTP requests… not for HTTPS requests. So if someone enters “https://root.com”, the redirect won’t ever happen. Bummer! This one is about $18/month, not including traffic.
So before I settle on the EIP->NLB->ALB approach, I ask the question: How do you achieve static IPs for a Root Domain hosted behind CloudFront without using Route53 Aliases?
AWS Global Accelerator is a networking service that simplifies traffic management and improves performance by up to 60%.
the same old problem
in the old days we used to have an nginx redirector for this kind of stuff
you can do GA-ALB with a redirect rule and that should work as a redirector
but if you have a very large network, 2 nginx servers with private IPs can do the job (as we did for 100s of millions of requests back in the day)
yeah, as much as i don’t want to manage it, an EIP pointed at an NLB with EC2s running NGINX attached to it is still a simple, elegant solution.
After thinking for a bit I ended up using Global Accelerator (which provides 2 static IPs) as a front end for an ALB with listeners doing the redirect:
[root.com](http://root.com) -> 3rd-party DNS (A) -> Global Accelerator -> ALB -> [www.root.com](http://www.root.com)
after I implemented this, I googled a bit and came across this article. they did the exact same thing I did, almost line-for-line of TF code
Since the release of AWS Elastic Load Balancer (ELB) in 2009, system administrators have struggled with the fundamentals of Internet: zone apex and DNS. If you are not that familiar with Domain Name System, let’s start by looking at the internals of domain names: Fully qualified domain name www.
2022-05-03
Hi all! I’m trying to create self-managed node groups on EKS using the Terraform EKS module and Terragrunt.
I want to add tolerations, taints, and labels to each node group, so I tried to use bootstrap_extra_args = "--node-labels=node.kubernetes.io/lifecycle=spot,node/role=os-client"
and
bootstrap_extra_args = <<-EOT
[settings.kubernetes.node-labels]
ingress = "allowed"
EOT
but neither of them creates the node group with the labels/taints. Does anyone know the right way to do it? Thanks!!
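Not from the thread, but a sketch of where those flags usually end up, assuming the Amazon Linux 2 EKS AMI: on AL2, node labels and taints are kubelet flags, so they travel via --kubelet-extra-args rather than as direct bootstrap arguments, while the [settings.kubernetes.node-labels] TOML form only applies to Bottlerocket AMIs. Cluster name, label, and taint values below are illustrative:

```shell
# Sketch of the user-data bootstrap call on an Amazon Linux 2 EKS AMI.
# Cluster name, labels, and taint are illustrative, not from the thread.
/etc/eks/bootstrap.sh my-cluster \
  --kubelet-extra-args '--node-labels=node.kubernetes.io/lifecycle=spot,node/role=os-client --register-with-taints=node/role=os-client:NoSchedule'
```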
2022-05-05
Having a strange issue with AWS SSM where I am unable to copy/paste into their RDP client - CTRL-V, CTRL-SHIFT-V, and right-clicking don’t seem to work. Has anyone encountered this issue before? For reference, I’m using Pop!_OS 21.10 and the instance is running Windows Server 2022
Any chance the AMI that you are running for Windows is one of the STIG hardened images from AWS?
Effectively RDP clipboard can be disabled by the destination machine
Doesn’t seem to be one of the STIG images, but I hadn’t thought about that possibility
Is it possible to re-enable it manually?
Looking through the running processes, I see rdpclip running - restarting it unfortunately doesn’t work
For reference it’s Microsoft Windows Server 2022 Base
Has anyone seen this before: CannotPullContainerError: ref pull has been retried 5 time(s): failed to copy: httpReadSeeker: failed open: failed to do request: Get https://prod-us-east-1-starport-layer-bucket.s3.us-east-1.amazonaws.com/
Account with no internet, using VPC endpoints
that bucket is an Amazon-managed bucket that stores the ECR images
2022-05-09
How do you authenticate your ci/cd if you have MFA enforcement for all access including the CLI?
MFA is mostly for humans (from 3-factor: who you are, what you have, and what you know)
depending on your CI system you could also use OIDC to authenticate with AWS as an extra layer of security
Thanks, I’ll look into both roles and OIDC!
2022-05-10
Hi - Is anyone making use of MQ triggers for Lambda in a private VPC?
To add further context, I’m getting: PROBLEM: Connection error. Your VPC must be able to connect to Lambda and STS, as well as Secrets Manager if authentication is required. You can provide access by configuring PrivateLink or a NAT Gateway. For how to setup VPC endpoints/NAT gateway, please check <https://aws.amazon.com/blogs/compute/setting-up-aws-lambda-with-an-apache-kafka-cluster-within-a-vpc/>
when configuring the event source. Lambda, STS, and SecretsManager Interface VPC endpoints are created and configured in such a way that each can contact the other.
For anyone who has a similar issue in future… through VPC flow logs I’ve worked out how they work and the appropriate SG rules:
- Lambda creates message pollers and attaches them to an ENI (created by Lambda service)
- The poller ENI is assigned the same security group as the event source, in my case the security group assigned to ActiveMQ
- The poller communicates with Lambda, STS and SecretsManager VPC endpoints (if no access to the internet)
- The poller connects to MQ
The only sensible set of rules (i.e. not 0.0.0.0/0) that I could get it working with were:
- MQ SG Egress TCP/443 —> VPC Endpoints SG
- MQ SG Egress ALL —> MQ SG
- MQ SG Ingress ALL —> MQ SG
For some reason, restricting the self-referencing rules to just the MQ ports would cause failures
2022-05-12
Hello everyone! We’ve been facing an issue with some latency-sensitive services that we deployed to EKS, exposed using the NGINX Ingress Controller. The issue is related to the conntrack table (used by iptables) filling up, at which point it starts dropping packets. The solution to this problem is simply increasing the kernel parameter net.ipv4.netfilter.ip_conntrack_max to a higher value, piece of cake. As we are using the EKS/AWS-maintained AMI for the worker nodes, this value comes predefined with a relatively small value (our services/apps handle several thousands of reqs per sec). We’ve been exploring different ways of properly setting this value, and the most preferred way would be modifying the kube-proxy-config ConfigMap, which contains the conntrack-specific config:
conntrack:
maxPerCore: 32768
min: 131072
tcpCloseWaitTimeout: 1h0m0s
tcpEstablishedTimeout: 24h0m0s
The problem is that kube-proxy is being managed as an EKS add-on, so if we modify this config via, let’s say, Terraform, it will be overridden by the EKS add-on. But we don’t want to self-manage kube-proxy, as that would slow down our fully automatic EKS upgrades that we handle with Terraform.
Any ideas or suggestions?
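One possible workaround, sketched here as an assumption rather than something from the thread: raise the conntrack ceiling from the node’s user data and leave the EKS-managed kube-proxy add-on untouched. kube-proxy generally only raises nf_conntrack_max when the current value is below its computed maxPerCore × cores target, so a higher value set at boot should survive, but verify that against your kube-proxy version. The limit below is illustrative only:

```shell
# Sketch: bump the conntrack ceiling from EC2 user data on the worker nodes
# so the EKS-managed kube-proxy add-on can stay untouched.
# The value is illustrative; size it for your own traffic.
cat <<'EOF' >/etc/sysctl.d/99-conntrack.conf
net.netfilter.nf_conntrack_max = 1048576
EOF
sysctl --system
```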
You could always open an issue for discussion, and a PR on: https://github.com/awslabs/amazon-eks-ami - they’re pretty friendly
Packer configuration for building a custom EKS AMI
Maybe open an issue on the Containers Roadmap to discuss the configuration with the Kube Proxy add-on: https://github.com/aws/containers-roadmap/issues?q=is%3Aissue+conntrack+is%3Aopen
Thanks so much @tim.j.birkett !
Out of interest, what is the instance size you use and see the issue on? I’m also using the add-on and the EKS “optimized” AMI.
Mostly *.xlarge instances
and a few 2xlarge and 4xlarge instances as well
Same. Are you running a lot of applications accessed over the internet on there? Trying to work out if it’s something I should worry about/look into
Yes, it’s an internet facing application with a lot of traffic (~50K RPS)
Sorry @Santiago Campuzano - another quick question: Are you using ALB or NLB to get traffic to the Ingress Controller?
2022-05-13
2022-05-16
For SES servers: if I use the API to send email, can I get the servers’ IP addresses? Or is it just this: https://docs.aws.amazon.com/general/latest/gr/ses.html ?
The following are the service endpoints and service quotas for this service. To connect programmatically to an AWS service, you use an endpoint. In addition to the standard AWS endpoints, some AWS services offer FIPS endpoints in selected Regions. For more information, see
2022-05-17
Hey! I have a question about the MWAA module. When I set it up with an S3 bucket, it seems it gets upset that the bucket doesn’t have a requirements.txt file in it. It seems like a chicken-and-egg problem though, since the bucket is created alongside the MWAA module, so it’s naturally empty at first..
error updating MWAA Environment (dev-jake): ValidationException: Unable to access version <blah> of dev-jake-dags/airflow/dags/requirements.txt
Is there a way to either have the S3 module create an empty requirements.txt file when it creates the bucket, or have the MWAA module accept that the bucket is empty to start with?
it looks like this is a deliberate design decision to keep everything in a single module. The way to use this is:
- Run Terraform, it will create the bucket and then fail when creating the MWAA environment
- Add your requirements.txt to the bucket
- Re-run Terraform, it will work
If you don’t like this flow, build the bucket yourself and upload requirements.txt before using this module, and tell the module not to create the bucket itself.
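A sketch of the middle step (bucket name and key mirror the error message above; adjust for your environment): seed the bucket with an empty requirements.txt between the first failed apply and the re-run:

```shell
# Upload an empty requirements.txt so the MWAA environment can be created.
# Bucket name and key mirror the error above; adjust for your environment.
touch requirements.txt
aws s3 cp requirements.txt s3://dev-jake-dags/airflow/dags/requirements.txt
```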
Thank you!
2022-05-18
Hi folks, for the aws dynamic subnets module, I set public_subnets_enabled to false, but the module still creates a public subnet anyway. The count here should have some conditional check, I guess, like count = var.public_subnets_enabled ? local.subnet_az_count : 0?
BTW, about the documentation: ipv4_cidr_block should be list(string), but in the example the given value is a string.
Yep, you’re right. It does seem like it should be checking for local.public_enabled
@Lei Mao this has been refactored a lot recently.
care to submit a pr? Have you also checked private_subnets_enabled if it has the intended effect?
No problem, I can create a PR for that. For private_subnets_enabled, yes, everything works as expected.
here is the pr.
what
- Check whether to create public subnets when public_subnets_enabled is set to false
why
- Currently, when public_subnets_enabled is set to false, the module still creates public subnets. Following the logic used for private subnets, there should be a check so that public subnets are not created when public_subnets_enabled is false.
references
- Slack thread: https://sweetops.slack.com/archives/CCT1E7JJY/p1652862041154429
@Lei Mao sorry about that, looks like there is a PR already created to fix this https://github.com/cloudposse/terraform-aws-dynamic-subnets/pull/162
np
hmm, it’s a similar PR but different. nvm.
np. BTW, when can the fix be released?
2022-05-19
I want to know how many EBS snapshots I have across my full organization. Can you query this somehow?
only way I know how is to have an IAM Role that has access to assume role in each of the Org Accounts, then iterate through each account/region accordingly. But, there might be some way to do this through the Billing APIs if you have access to that.
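To make that assume-role approach concrete, a rough sketch. The role name OrganizationAccountAccessRole is an assumption (substitute whatever cross-account role your org actually has), and each describe-snapshots call only covers the CLI’s current region:

```shell
# Sketch: iterate accounts in the org, assume a common role in each, and
# count the snapshots owned by that account in the current region.
# Role name is an assumption; use the cross-account role your org has.
for acct in $(aws organizations list-accounts --query 'Accounts[].Id' --output text); do
  creds=$(aws sts assume-role \
    --role-arn "arn:aws:iam::${acct}:role/OrganizationAccountAccessRole" \
    --role-session-name snapshot-count \
    --query 'Credentials.[AccessKeyId,SecretAccessKey,SessionToken]' --output text)
  read -r AWS_ACCESS_KEY_ID AWS_SECRET_ACCESS_KEY AWS_SESSION_TOKEN <<<"$creds"
  export AWS_ACCESS_KEY_ID AWS_SECRET_ACCESS_KEY AWS_SESSION_TOKEN
  n=$(aws ec2 describe-snapshots --owner-ids "$acct" \
    --query 'length(Snapshots)' --output text)
  echo "$acct: $n snapshots"
done
```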
Would AWS Service Quotas help at all? That might be account-specific though
2022-05-20
I am querying aws config and I get this json output:
[ "{\"COUNT(resourceId)\":7}" ]
I am looking for a query to just list the number
any ideas ?
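One way to pull the number out (a sketch): the result is a JSON array whose single element is itself a JSON-encoded string, so it has to be decoded twice, e.g. with jq’s fromjson:

```shell
# The query result is an array containing a JSON-encoded string, so decode
# the outer array first, then the embedded object with fromjson.
aws_config_output='["{\"COUNT(resourceId)\":7}"]'
echo "$aws_config_output" | jq -r '.[0] | fromjson | .["COUNT(resourceId)"]'
# prints: 7
```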
Has anyone else tried the recently announced AWS Single Sign-On from a delegated member account? I just tried it and I’m wondering if the bug I’m experiencing is only on my end. I can’t open the permission sets page as a user with AdministratorAccess in the delegated member account.
2022-05-23
Hey friends. I’ve launched my cluster with https://github.com/cloudposse/terraform-aws-eks-cluster . It’s running great. However, I’m trying to integrate IAM roles for service accounts for a given deployment, to remove the dependency on the instance profiles. However, it seems to not be working. It looks like everything is done correctly in the link below, but I’m not sure. The article mentions something like Amazon EKS Pod Identity Webhook, but I’m not seeing anything here that might indicate that it’s installed.
Has anyone set up SA roles in a new cluster with the CP modules? Did it work following their doc above? If not, what was required to get it working?
Terraform module for provisioning an EKS cluster
I don’t use this module, but it’s a service run on the AWS control plane node, you should see a webhook like this:
k get mutatingwebhookconfigurations.admissionregistration.k8s.io pod-identity-webhook -o yaml
On pod creation it intercepts the request and uses this webhook, which Amazon looks after: https://127.0.0.1:23443/mutate
we create IAM roles for k8s Service Accounts for all our components that we deploy to EKS clusters created using the module
Terraform module to provision an EKS IAM Role for Service Account
then when we deploy a helm release with terraform, we create the IAM role for Service Account https://github.com/cloudposse/terraform-aws-helm-release/blob/master/main.tf#L19
module "eks_iam_role" {
@Eamon Keane Thanks, I do see that it exists.
Hmm. Okay. So there must be a PEBCAK here.
then in top-level components, we specify the permissions for the role https://github.com/cloudposse/terraform-aws-components/blob/master/modules/alb-controller/main.tf#L26 (those permissions are specific to the component, e.g. alb-controller)
iam_role_enabled = true
My pods aren’t getting the variables injected despite following that aws article above.
in this case, it probably assumes the EC2 instance profile role, not the role provided for the service account
It is picking up the instance roles.
It’s completely disregarding the role associated with the SA.
Everything looks like it should be working, but… it isn’t. It sounds like this is an issue coming from the k8s side, but I’m not sure how to properly troubleshoot this one.
it involves a few steps https://docs.aws.amazon.com/eks/latest/userguide/iam-roles-for-service-accounts-technical-overview.html, including creating a Service Account with the IAM role annotation
In 2014, AWS Identity and Access Management added support for federated identities using OpenID Connect (OIDC). This feature allows you to authenticate AWS API calls with supported identity providers and receive a valid OIDC JSON web token (JWT). You can pass this token to the AWS STS
I provided that link above. All steps are handled.
and setting serviceAccountName: on the Pod or Deployment etc.
I followed this: https://docs.aws.amazon.com/eks/latest/userguide/iam-roles-for-service-accounts-technical-overview.html
My OIDC:
aws eks describe-cluster --name my-cluster --query "cluster.identity.oidc.issuer" --output text | sed -e "s/^https:\/\///"
oidc.eks.us-east-2.amazonaws.com/id/abcdefhijklmnopqrstuvwxyz123456
Trust policy for arn:aws:iam::123456789012:role/my-service-role:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "Federated": "arn:aws:iam::123456789012:oidc-provider/oidc.eks.us-east-2.amazonaws.com/id/abcdefhijklmnopqrstuvwxyz123456"
      },
      "Action": "sts:AssumeRoleWithWebIdentity",
      "Condition": {
        "StringEquals": {
          "oidc.eks.us-east-2.amazonaws.com/id/abcdefhijklmnopqrstuvwxyz123456:sub": "system:serviceaccount:my-service:my-service",
          "oidc.eks.us-east-2.amazonaws.com/id/abcdefhijklmnopqrstuvwxyz123456:aud": "sts.amazonaws.com"
        }
      }
    }
  ]
}
SA my-service in NS my-service:
apiVersion: v1
kind: ServiceAccount
metadata:
  name: my-service
  namespace: my-service
  annotations:
    eks.amazon.com/role-arn: arn:aws:iam::123456789012:role/my-service-role
imagePullSecrets:
  - name: registry-credentials
A sample job that I’m trying to use to test.
apiVersion: batch/v1
kind: Job
metadata:
  name: eks-iam-test
  namespace: my-service
spec:
  template:
    metadata:
      labels:
        app: eks-iam-test
    spec:
      serviceAccountName: my-service
      containers:
        - name: eks-iam-test
          image: amazon/aws-cli:latest
          args: ["sts", "get-caller-identity"]
      securityContext:
        fsGroup: 1000
      restartPolicy: Never
This returns the instance profile.
When I get the ENV for the aws-node pods, as per the article, I do not get AWS_ROLE_ARN nor AWS_WEB_IDENTITY_TOKEN_FILE
kubectl exec -n kube-system aws-node-q9z5c -- env | grep AWS
Defaulted container "aws-node" out of: aws-node, aws-vpc-cni-init (init)
AWS_VPC_K8S_PLUGIN_LOG_LEVEL=DEBUG
AWS_VPC_CNI_NODE_PORT_SUPPORT=true
AWS_VPC_K8S_CNI_EXTERNALSNAT=false
AWS_VPC_K8S_CNI_LOGLEVEL=DEBUG
AWS_VPC_ENI_MTU=9001
AWS_VPC_K8S_CNI_CONFIGURE_RPFILTER=false
AWS_VPC_K8S_CNI_LOG_FILE=/host/var/log/aws-routed-eni/ipamd.log
AWS_VPC_K8S_CNI_RANDOMIZESNAT=prng
AWS_VPC_K8S_CNI_CUSTOM_NETWORK_CFG=false
AWS_VPC_K8S_CNI_VETHPREFIX=eni
AWS_VPC_K8S_PLUGIN_LOG_FILE=/var/log/aws-routed-eni/plugin.log
here’s a diagram if it helps.
By default, only containers that run as root have the proper file system permissions to read the web identity token file. You can provide these permissions by having your containers run as root, or by providing the following security context for the containers in your manifest. The fsGroup ID is arbitrary, and you can choose any valid group ID. For more information about the implications of setting a security context for your pods, see Configure a Security Context for a Pod or Container in the Kubernetes documentation.
also, in this command kubectl exec -n kube-system aws-node-q9z5c -- env | grep AWS, why are you using the system container and not yours? (the system container is not related to the service account you assign to your pod)
you need to find your pods with kubectl get pods -n xxx and then use your pod in the command: kubectl exec -n xxxx <my-pod> -- env | grep AWS
I did that, no values.
Note step 4 in the link I gave you; it has you do it for aws-node.
I’m about to bring on aws support but I’m seeking some answers here given I launched the cluster with the CP module. I did every step in that guide provided.
For all intents and purposes, it should be working but it seems like the cluster is not properly injecting the envs where it should be.
Is there maybe an arg that I’m missing from the module that restricts the IAM roles for SAs? I didn’t see one, but that doesn’t mean much.
(the doc says aws-node is an example, so you should not use the same name)
I assumed that since it’s an example, the aws-node would also have said values. But in any event, I did it on my own troubleshooting pod. The env vars are not injected.
> I assumed that since it’s an example, the aws-node would also have said values
no, b/c that pod is not assigned your Service account
that does make sense.
try turning on control plane logging, I think it should show something.
https://docs.aws.amazon.com/eks/latest/userguide/control-plane-logs.html
Amazon EKS control plane logging provides audit and diagnostic logs directly from the Amazon EKS control plane to CloudWatch Logs in your account. These logs make it easy for you to secure and run your clusters. You can select the exact log types you need, and logs are sent as log streams to a group for each Amazon EKS cluster in CloudWatch.
fwiw:
apiVersion: v1
imagePullSecrets:
- name: registry-credentials
kind: ServiceAccount
metadata:
name: my-service
namespace: my-service
annotations:
eks.amazon.com/role-arn: arn:aws:iam::123456789012:role/my-service-role
The issue ended up being the annotation. Specifically, eks.amazon.com… it should be eks.amazonaws.com.
Classic… hope you threw the laptop across the room
I threw myself across the room.
We’re testing out AWS AFT (Account factory for Terraform), is anyone else using it?
Yes, we are using it and very happy with it.
I used the Philips module to set it up.
Only surprise we got was that deleting an account isn’t easy. So if you were planning to spin up accounts temporarily, as you would with a dev or preview environment, it won’t work. Deleting an account is done manually as it’s a long enough process.
Do you have a URL to the Philips module?
Are you doing a lot of customizations?
Sorry, the Philips module was the GitHub self-hosted runner. This is what I’ve got for AFT:
AWS: https://aws.amazon.com/blogs/aws/new-aws-control-tower-account-factory-for-terraform/ AWS GitHub: https://github.com/aws-ia/terraform-aws-control_tower_account_factory Step by step guide on Youtube, step 1 https://www.youtube.com/watch?v=dIv9dPcuehc and step 2 https://www.youtube.com/watch?v=eYWIn7-waP0 Hashicorp: https://learn.hashicorp.com/tutorials/terraform/aws-control-tower-aft
Last link is probably the best
We don’t do a lot of customizations.. We use AFT to prepare the accounts for deploying our app into.
I’m using it. Use the default names for the customization repositories. Also make sure you’ve requested an AWS Organizations service limit increase if you intend to create more than 10 accounts, otherwise your 11th account creation will fail and you’ll be confused as to why.
Also, the latest version of AFT (1.3.7) reports errors in those Lambdas to the aft-failure-notifications SNS topic. You should subscribe to it to be notified of failures.
Right, we’re using slightly different names (prefixed), but haven’t encountered issues due to it so far. What kind of issues might we see?
I think you’ll be fine if you use the aft- prefix. I don’t recall the logs calling that out, so we created repos without it and nothing worked because the IAM roles were expecting that prefix
Have any of you found any good information/documentation on how to pass information to Terraform, say from a script? There are pre- and post- helper scripts, but I can’t seem to get values passed to the Terraform job. I’ve tried both manipulating environment variables as well as files
As I understand it, AFT creates the accounts within the confines of Control Tower. However, what do you use to create Control Tower itself, the OUs, Service Control Policies, etc.? I’m interested in IaC instead of console/ClickOps for the base setup that must come before AFT can be used. (I’m a Terraform noob.)
We did the initial parts as ClickOps. Still working on our sandbox environment and will try to find out what we can do as IaC and what we can’t
2022-05-24
2022-05-26
does anyone know of a way (using a lambda in account Y) to watch the cloudtrail event stream in other accounts to process them?
you can use cloudwatch events to trigger the lambda in any account, presuming the lambda permissions allow it to be triggered by the calling account
or you can have eventbus send events to a “central” eventbus in the same account as the lambda, and have cloudwatch events in that same account trigger the lambda
we need to watch CloudWatch events from our customers’ accounts and then process them, either in their account but ideally in ours, so we don’t have to get them to create too much infra.
hey all, was wondering if anyone had some good criteria for why you would create an event bus and not just use the default? Is it just for logically partitioning rules and events for easier identification?
As we deploy our CI/CD pipeline I’m leaning towards creating an event bus for EC2 state changes, but would be interested in how others are using event buses. Thanks.
Found that using EventBridge and listening/acting on the events we care about worked for us
@Dan Herrington https://www.youtube.com/watch?v=gu_-e8WPqF0&t=3123s
2022-05-27
2022-05-31
When using AWS Organizations, is it possible to delegate AWS Cost Management and Billing such that I can view the consolidated billing in a member account rather than in the management account?
AFAIK this is not a feature of AWS; the management account will always be the consolidated billing account.
AWS has been building more “delegated admin” services, so maybe we’ll see it in the future
Good to know, thanks Ben!
We’ve wanted to do this as well - but alas it’s not possible.
Does anyone have a good tutorial on how to add people to EKS clusters? https://docs.aws.amazon.com/eks/latest/userguide/add-user-role.html This hasn’t been very helpful for me. Currently I just add users manually to the configmap, but it’s really not something I want to be doing long term.
Learn how to grant cluster access to IAM users and roles.
I’m sure there is a better way than this, but the way I did it before was with an IAM role per group of people. Folks can just assume the IAM role when they need to create their kubeconfig. Obviously this means you are giving people access to assume a particular role in IAM. So maybe a slightly different problem?
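That flow can be sketched in one command (role ARN, cluster name, and region below are illustrative): aws eks update-kubeconfig can embed the role assumption directly in the generated kubeconfig:

```shell
# Generate a kubeconfig whose exec credentials assume the shared role.
# Role ARN, cluster name, and region are illustrative.
aws eks update-kubeconfig \
  --region us-east-2 \
  --name my-cluster \
  --role-arn arn:aws:iam::123456789012:role/eks-developers
```

The role still has to be mapped in the cluster’s aws-auth ConfigMap (mapRoles), but that’s one entry per group instead of one per user.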
Hopefully someone can share a quick/easy solution that ties back to github or something similar.
Single sign-on for infrastructure