#refarch (2024-05)
Cloud Posse Reference Architecture
2024-05-05
Hey everyone, in the reference architecture - is there some idea of an internal account for internal tools, like a kubernetes cluster with centralized grafana, for example? Which account would it be? corp? auto? or something else?
yes absolutely, we have several internal accounts that we refer to as “core” accounts. We deploy both auto and corp accounts but nowadays typically only deploy auto unless someone has a specific use case for corp. We primarily use the EKS cluster in auto for GitHub self-hosted runners, but that’d be a great place to deploy grafana as well. Although at the moment we’re using AWS managed grafana
Yes, it comes down to cost. How many “tools” type accounts make sense and how much infra you want to run in there.
So, we’ve gradually consolidated more and more “tools” type functionality into an account we generally call “auto”. Historically, we also deployed a corp account with an EKS cluster for “corporate-facing” apps, but we do so less these days.
2024-05-06
2024-05-07
I’ve noticed something when reading the docs/readmes for Cloud Posse Terraform modules - the list of inputs gets so busy with all the contextual inputs that it becomes hard to find the actual module-specific inputs. Has anyone ever talked about reorganizing the readmes to separate the inputs that are part of the context into a separate table?
Yes,
We want to redo the template for terraform-docs
so that we a) move context parameters to a separate section and b) avoid tables
With more and more reliance on complex objects, tables just don’t cut it
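For example (a generic illustration, not an actual Cloud Posse module input), a variable with a nested object type like the one below gets flattened into a single, unreadable table cell by the current terraform-docs template:
```hcl
# Hypothetical input, shown only to illustrate why wide tables break down
# once variables use nested object types.
variable "node_groups" {
  type = map(object({
    instance_types = list(string)
    min_size       = number
    max_size       = number
    labels         = map(string)
    taints = list(object({
      key    = string
      value  = string
      effect = string
    }))
  }))
  default     = {}
  description = "Map of managed node group configurations."
}
```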
yeah, it would be nice to be able to stretch the window big and actually see everything
Yea, agreed. Good feedback.
We’re working a lot on improving docs, and such, but I cannot say yet when that work will be scheduled. We have an internal task already for it.
cool. thanks
2024-05-15
Is there a way to set an ALB idle_timeout when deployed through the ecs module? I found this in the alb module but I don’t see how to configure that same setting for an ecs-defined ALB:
https://github.com/cloudposse/terraform-aws-components/tree/main/modules/alb#input_idle_timeout
Our ECS component, which creates the cluster and the load balancers for the cluster, includes a module for the ALB. It looks like it’s missing the idle_timeout var as a pass-through from the module declaration, but that wouldn’t be hard to change.
if you are using our components, i’d suggest vendoring down the ECS component and making that change locally (we’d love a PR too!)
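Roughly, the local change in the vendored component would look something like this (a sketch only - the new variable name is made up, and the alb module block shown stands in for the one the component already contains):
```hcl
# Expose the timeout as a new component variable (name is illustrative).
variable "alb_idle_timeout" {
  type        = number
  default     = 60
  description = "The time in seconds that a connection is allowed to be idle on the ALB."
}

# Pass it through to the alb module declaration the ECS component already has;
# all other existing inputs stay as they are.
module "alb" {
  source  = "cloudposse/alb/aws"
  version = "x.x.x" # keep whatever version the component already pins

  idle_timeout = var.alb_idle_timeout

  # ...existing inputs from the component...

  context = module.this.context
}
```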
ok thanks! I’ll check it out
I got this working. How can I open a PR for your repo?
2024-05-16
2024-05-17
Hi,
trying to deploy the privileged stack for AWS Config under the superadmin role, I got this error:
Planning failed. Terraform encountered an error while generating this plan.
╷
│ Error: error configuring S3 Backend: IAM Role (arn:aws:iam::xxx:role/xx-core-gbl-root-tfstate-ro) cannot be assumed.
│
│ There are a number of possible causes of this - the most common are:
│ * The credentials used in order to assume the role are invalid
│ * The credentials do not have appropriate permission to assume the role
│ * The role ARN is not valid
│
│ Error: NoCredentialProviders: no valid providers in chain. Deprecated.
│ For verbose messaging see aws.Config.CredentialsChainVerboseErrors
@Dan Miller (Cloud Posse) @Ben Smith (Cloud Posse)
It’s hard to determine from this message alone, but I might guess that your session for your superadmin user expired. Can you double check?
aws sts get-caller-identity
2024-05-20
Hi, I’d like to use the custom_bastion_hostname and vanity_domain inputs outlined in the readme for the bastion component, but it doesn’t appear that those are valid inputs/variables when I go to deploy it. Am I missing something?
https://github.com/cloudposse/terraform-aws-components/tree/main/modules/bastion
@Jeremy White (Cloud Posse)
hi any updates on this?
Another question I had on this - what’s the recommended way for injecting public keys into the Bastion host for users to access via SSH? Right now, we have to do it manually each time a new Bastion instance is created
We’ve more or less stopped using keys when we can, and instead use AWS Systems Manager Session Manager. That way you don’t need to expose the bastion publicly and can just use IAM
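For reference, the instance side of that only needs the SSM agent (preinstalled on Amazon Linux) plus the AWS-managed SSM policy on its instance role; a minimal sketch, assuming you can reference the role your bastion component creates (the role name here is made up):
```hcl
# Look up the bastion's existing instance role (name is illustrative).
data "aws_iam_role" "bastion" {
  name = "bastion-instance-role"
}

# Grant it Session Manager access so no SSH keys or public exposure are needed.
resource "aws_iam_role_policy_attachment" "ssm" {
  role       = data.aws_iam_role.bastion.name
  policy_arn = "arn:aws:iam::aws:policy/AmazonSSMManagedInstanceCore"
}
```
Users then connect with aws ssm start-session --target <instance-id>, gated entirely by IAM.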
There’s no one way to use a bastion - what’s your use-case?
So, the easiest way I could think of to restore this is to leverage a lambda.
Here’s an example that uses a tag on the EC2 instance to determine what the FQDN will be:
https://serverlessrepo.aws.amazon.com/applications/arn:aws:serverlessrepo:…:884430845962:applications~Update-Route53-Record-to-Ec2-PublicIp-Python3
to be clear, that example uses cloudformation, so you would need to translate the small IAM policy here into the policy var for our component here
This component is responsible for provisioning Lambda functions.
Also, the CloudFormation template leverages EventBridge to run the lambda when an EC2 instance enters the running state.
Unfortunately, the cloudwatch_event_rules variable on our lambda component is detached. That said, you can quickly edit the lambda component to attach the rule using this Terraform example, and instead of using the schedule parameter for the rule, set an event_pattern similar to the CloudFormation template.
I would like to run an AWS lambda function every five minutes. In the AWS Management Console this is easy to set up, under the lambda function’s “Event Sources” tab, but how do I set it up with Ter…
quick summary:
• create a lambda component
• copy in the sample lambda code (python)
• add three terraform resources for the event rule, event target, and event permissions (sketched below)
• add policy json to permit the lambda to discover ec2 public ips, tags, and update route53
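A sketch of those three resources (resource names are illustrative, and module.lambda stands in for however your lambda component exposes the function ARN and name):
```hcl
# Fire whenever an EC2 instance enters the "running" state.
resource "aws_cloudwatch_event_rule" "ec2_running" {
  name        = "update-route53-on-ec2-running"
  description = "Invoke the DNS-update Lambda when an EC2 instance starts"

  event_pattern = jsonencode({
    source        = ["aws.ec2"]
    "detail-type" = ["EC2 Instance State-change Notification"]
    detail = {
      state = ["running"]
    }
  })
}

# Point the rule at the Lambda created by the lambda component.
resource "aws_cloudwatch_event_target" "lambda" {
  rule = aws_cloudwatch_event_rule.ec2_running.name
  arn  = module.lambda.arn # illustrative output reference
}

# Allow EventBridge to invoke the function.
resource "aws_lambda_permission" "events" {
  statement_id  = "AllowExecutionFromEventBridge"
  action        = "lambda:InvokeFunction"
  function_name = module.lambda.function_name # illustrative output reference
  principal     = "events.amazonaws.com"
  source_arn    = aws_cloudwatch_event_rule.ec2_running.arn
}
```
The policy JSON on the lambda component would additionally need actions along the lines of ec2:DescribeInstances and route53:ChangeResourceRecordSets, mirroring the small IAM policy in the CloudFormation template.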
Thanks, this is helpful! I will look into this
Our use case for this is SSH tunnels. Users (and some services) need to SSH tunnel through the Bastion Host in order to access some RDS databases
I think you can SSH tunnel with an SSM session, but that won’t work for all of the situations we need to use it for
If we do utilize the SSM route more, won’t we also have to keep updating the instance ID in our documentation every time a new bastion is created?
It probably won’t happen very often I suppose
Is the bastion the right approach? Why not use our Client VPN module instead?
So much simpler.
We are using that as well
VPN -> Bastion -> RDS
Why not VPN -> RDS? What is the value-add of the bastion?
additional layer of security, single point of entry that we can quickly and easily take down in the event of a compromise
ClientVPN - single point of entry that you can quickly and easily take down in the event of a compromise
I suppose so, this was a security decision that I was not part of though, so I’m working with what we have
Also, have you considered solutions like Tailscale? Very inexpensive, more tailored to this use-case.
I’ll look into that as well
I was also wondering if it’s possible to put an ALB in front of the Bastion host
For enterprises, they usually use Zscaler or StrongDM for this.
and then have a DNS hostname attached to the ALB, use that to distribute traffic to the Bastion
2024-05-21
2024-05-28
2024-05-29
Not sure if refarch or #kubernetes question… https://docs.cloudposse.com/components/library/aws/eks/cluster/ example says that running addons on fargate is not recommended, and a managed node group is preferred. What’s wrong with fargate?
This component is responsible for provisioning an end-to-end EKS Cluster, including managed node groups and Fargate
If you deploy Karpenter to Fargate and use Karpenter to provision all other nodes, then those nodes will never be available before the cluster component is deployed. If the nodes aren’t available when creating the cluster, then we aren’t able to ensure the addons are fully functional during the cluster deployment.
This came up for us when we wanted to enable the CoreDNS addon but weren’t provisioning a node group initially. We needed to create a managed node group with the basic addons we needed first, and then allow Karpenter to scale up additional nodes from there
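In plain Terraform terms, the takeaway is that the cluster needs at least one small managed node group present at deploy time so the addons have somewhere to schedule; a rough sketch (the cluster, role, and subnet references below are illustrative - the actual component drives this through its node group variables):
```hcl
# Small managed node group so core addons (e.g. CoreDNS) can schedule while the
# cluster component is being deployed; Karpenter scales additional capacity later.
resource "aws_eks_node_group" "core_addons" {
  cluster_name    = aws_eks_cluster.this.name
  node_group_name = "core-addons"
  node_role_arn   = aws_iam_role.node.arn
  subnet_ids      = var.private_subnet_ids
  instance_types  = ["t3.medium"]

  scaling_config {
    desired_size = 2
    max_size     = 3
    min_size     = 2
  }
}
```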
I think AWS recommends running CoreDNS on Fargate too. Need to check, thank you!