#aws (2023-10)

aws Discussion related to Amazon Web Services (AWS)

Archive: https://archive.sweetops.com/aws/

2023-10-01

Zing avatar

how are you all handling IAM roles/permissions in aws sso? we’re moving to it, and the introduction of permission sets is making me ponder the best way to architect the whole IAM lifecycle.

  1. what’s the best way to assign a team/group to a role to an account?
  2. what about the ad-hoc cases where a permission set is too permissive or restrictive?
Alex Jurkiewicz avatar
Alex Jurkiewicz

It’s tough. Unlike a typical LDAP / AD system where you add permissions to a user’s single profile, you give users access to a set of roles which they can only use one at a time. For developers, they might have a single unique role for their squad which is deployed to dev, and a more limited one in prod. For cross-functional teams (infra, sec, arch), they might have many roles deployed across many accounts. Admin, readonly, breakglass, poweruser, etc etc.

We tried using very granular roles, eg one role per service. This was too confusing for devs. Now we’ve settled on one role per dev which is named after your squad. All squad members have the same permissions.

We manage the assignments with Terraform. It’s gnarly, but works well enough. There are so many resources we’ve split the configuration into tens of stacks now, and still sometimes run into issues with AWS API rate limiting
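
For reference, a minimal Terraform sketch of that pattern (one permission set per squad, assigned to an IdP group per account; all names and IDs below are placeholders):

data "aws_ssoadmin_instances" "this" {}

locals {
  sso_instance_arn = tolist(data.aws_ssoadmin_instances.this.arns)[0]
}

# One permission set per squad; all squad members get the same permissions.
resource "aws_ssoadmin_permission_set" "squad" {
  name             = "squad-payments"
  instance_arn     = local.sso_instance_arn
  session_duration = "PT8H"
}

resource "aws_ssoadmin_managed_policy_attachment" "squad" {
  instance_arn       = local.sso_instance_arn
  permission_set_arn = aws_ssoadmin_permission_set.squad.arn
  managed_policy_arn = "arn:aws:iam::aws:policy/PowerUserAccess"
}

# Assign the squad's IdP group to the permission set in the dev account.
# One of these per (group, permission set, account) tuple is what makes
# the resource count explode across stacks.
resource "aws_ssoadmin_account_assignment" "squad_dev" {
  instance_arn       = local.sso_instance_arn
  permission_set_arn = aws_ssoadmin_permission_set.squad.arn
  principal_type     = "GROUP"
  principal_id       = "e4a8..." # group ID from the identity store
  target_type        = "AWS_ACCOUNT"
  target_id          = "111111111111" # dev account ID
}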

Joe Perez avatar
Joe Perez

things also get tricky when the permissions differ in non-prod/prod accounts

Sean avatar

you can use more than one at a time, but in your browser you’d need an extension to help you manage that.

Alex Jurkiewicz avatar
Alex Jurkiewicz

Every AWS API call takes a single set of credentials. You can send different calls with different credentials, but you can’t combine permission sets into a single credential


2023-10-02

2023-10-03

Sairam Madichetty avatar
Sairam Madichetty

Hi everyone! wave Has anyone worked with Grafana Mimir as a Prometheus HA solution? I’m stuck on the configuration of Grafana Mimir, Prometheus, and the S3 integration.

2023-10-05

Aritra Banerjee avatar
Aritra Banerjee

Hi everyone, has anyone implemented a Palo Alto firewall in AWS? We have to implement one, and I wanted to know how difficult it is to do with a Gateway Load Balancer. We only have a single VPC where everything is present.

rohit avatar

Has anyone ever deployed a complete “product” on top of EKS/K8s in a customer account/environment? Something my company is working on, and historically we used a bunch of tf and scripts, but it doesn’t seem feasible given the assurances we need.

Eamon Keane avatar
Eamon Keane

not directly, but there’s a few examples you might be aware of:

https://docs.astronomer.io/astro/resource-reference-aws-hybrid https://release.com/ https://redpanda.com/blog/kafka-redpanda-future

Most of the ones I’ve come across seem to have a central control plane in their own account and then eks as data plane in customer account.

rohit avatar

Yeah the split plane approach is something we thought of also, but this entire EKS cluster and our product would live in their environment. No specifics. But we’re leaning towards CloudFormation / mix of cdk to attempt this POC.

Hans D avatar

Doing the deployment on k8s using something like argocd/flux.

Eamon Keane avatar
Eamon Keane

doesn’t sound like something to be undertaken lightly and would need to be priced highly and hopefully cluster is stateless.

I think Astronomer put a lot of work into their terraform deployment scripts but with state in Aurora (see e.g. https://github.com/astronomer/terraform-aws-astronomer-aws). Release is more about ephemeral environments and went with eksctl . Cloudformation/cdk could do the job, or if there’d be a lot of environments crossplane or argo with a kubernetes cluster-provider could be an option.

Not sure about Redpanda; their requirements are more exacting as they have state: https://docs.redpanda.com/current/deploy/deployment-option/self-hosted/manual/high-availability/

In a disaster, your secondary cluster may still be available, but you need to quickly restore the original level of redundancy by bringing up a new primary cluster. In a containerized environment such as Kubernetes, all state is lost from pods that use only local storage. HA deployments with Tiered Storage address both these problems, since it offers long-term data retention and topic recovery.
susie-h avatar
susie-h

Question here - how can I reference the IAM role created by this module in the API GW I create using this module? Terraform has an argument for cloud_watch_role_arn in their resource api-gw-account, but I can’t see how to do that with the CP module. Thanks in advance!

susie-h avatar
susie-h

I’m looking to apply this setting in api gw

RB avatar

The first module referenced is the api gateway sub module for its account map which creates the iam role and outputs that as role_arn

The second module referenced is the parent api gateway module that does not take a role arn as an input so nothing to reference from the sub module.

Perhaps the submodule’s role is used implicitly by the AWS API to write to CloudWatch log groups?

Seems like the submodule creates a role which is used

RB avatar


To grant these permissions to your account, create an IAM role with apigateway.amazonaws.com as its trusted entity, attach the preceding policy to the IAM role, and set the IAM role ARN on the cloudWatchRoleArn property on your Account. You must set the cloudWatchRoleArn property separately for each AWS Region in which you want to enable CloudWatch Logs.
Source https://docs.aws.amazon.com/apigateway/latest/developerguide/set-up-logging.html and https://github.com/cloudposse/terraform-aws-api-gateway/pull/2

susie-h avatar
susie-h

Thanks RB for taking a look at this. On the same api gw documentation you linked, right below the section you quoted, it says:
1. Choose Settings from the primary navigation panel and enter an ARN of an IAM role with appropriate permissions in CloudWatch log role ARN. You need to do this once.
This is what I’ve shown in the screenshot above, which is where you need to provide the api gw with the role you’ve created. So if the sub-module creates the role which is used by the api-gw, how do i get the api gw to know that?

susie-h avatar
susie-h

I figured it out. Switching to the new AWS console actually helped here. Turns out, each time you look at “Settings” where the CloudWatch ARN is asked for, that is a general API GW settings page for all API GWs in that region… idk how I missed that so far. I guess because in the old UI I would click on a specific GW before seeing Settings, so I didn’t know it was an “all gateways” setting.
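
In Terraform, that region-wide setting corresponds to the aws_api_gateway_account resource rather than to any single gateway; a minimal sketch, assuming the submodule exposes the role as an output named role_arn:

resource "aws_api_gateway_account" "this" {
  # Region-wide setting: applies to every API Gateway in this account/region.
  cloudwatch_role_arn = module.api_gateway_account_settings.role_arn # hypothetical module name/output
}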

2023-10-09

2023-10-10

venkata.mutyala avatar
venkata.mutyala

Is there a hard limit to how many AWS accounts you can have in an organization or can you just keep asking for them via service quota requests?

Michael avatar
Michael

I’ve seen up to 800 accounts per organization across dev, prod, and stage organizations, 2,400 in total. Other than that, not sure!

Darren Cunningham avatar
Darren Cunningham

just curious, were they happy with that account structure?

2023-10-11

Dexter Cariño avatar
Dexter Cariño

Hi fellas, a question regarding Aurora MySQL auto scaling. I tested it and I want to be enlightened about the endpoints: what part of the documentation supports that only the cluster endpoint should be used? And with auto scaling, does AWS handle the distribution through just one reader endpoint?

Gabriela Campana (Cloud Posse) avatar
Gabriela Campana (Cloud Posse)

@Dan Miller (Cloud Posse)

Dexter Cariño avatar
Dexter Cariño

Hello, just an update because I’m still confused on that.

Dan Miller (Cloud Posse) avatar
Dan Miller (Cloud Posse)
Amazon Aurora connection management - Amazon Aurora

Amazon Aurora typically involves a cluster of DB instances instead of a single instance. Each connection is handled by a specific DB instance. When you connect to an Aurora cluster, the host name and port that you specify point to an intermediate handler called an

Dan Miller (Cloud Posse) avatar
Dan Miller (Cloud Posse)

For reader endpoints in particular:

Each Aurora cluster has a single built-in reader endpoint, whose name and other attributes are managed by Aurora. You can’t create, delete, or modify this kind of endpoint.

https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/Aurora.Overview.Endpoints.html#Aurora.Endpoints.Reader


Dan Miller (Cloud Posse) avatar
Dan Miller (Cloud Posse)


on the autoscaling, the aws handles the distribution of that in just 1 read endpoint?
Yes exactly. Aurora will handle this for us

Dexter Cariño avatar
Dexter Cariño

thank you so much @Dan Miller (Cloud Posse)

jose.amengual avatar
jose.amengual

Is it true that when Control Tower is enabled, AWS activates throttling on certain APIs that could affect Terraform runs?

loren avatar

Curious about that. Got a reference that’s causing the concern?

jose.amengual avatar
jose.amengual

no, this was a comment made by someone that made me think about it

jose.amengual avatar
jose.amengual

I do not have facts or error messages, so I hope it’s just gossip

loren avatar

Fwiw, I’ve taken to adding envs to my shell to improve retry response of AWS sdks to throttling events … https://docs.aws.amazon.com/cli/latest/userguide/cli-configure-retries.html#cli-usage-retries-configure

AWS CLI retries - AWS Command Line Interface

Customize retries for failed AWS Command Line Interface (AWS CLI) API calls.
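
For Terraform runs specifically, the AWS provider has a similar knob; the env vars on that page (AWS_RETRY_MODE, AWS_MAX_ATTEMPTS) cover everything else using the SDK default config chain. A sketch, assuming the throttling shows up during big applies:

provider "aws" {
  region = "us-east-1"

  # Raise the provider's own retry budget for throttled API calls.
  max_retries = 50
}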

Gabriela Campana (Cloud Posse) avatar
Gabriela Campana (Cloud Posse)

@Erik Osterman (Cloud Posse)

Alex Jurkiewicz avatar
Alex Jurkiewicz

why would they add rate limiting only in one circumstance? Sounds like hearsay

jose.amengual avatar
jose.amengual

my guess could be that control tower uses a lot of step functions that could hit limits if many accounts are being provisioned at the same time

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

I doubt that there are any explicit rate limits that are adjusted for specific APIs, but rather that it’s a) hitting general account limits as a result of a lot of things happening around that time as you mentioned b) that new accounts could have lower rate limits until they are “aged” (but that’s speculation) to avoid platform-level DoS attacks. We’ve hit account-level API limits back-in-the-day when kubeiam was the norm, and you’d have every node of your kubernetes cluster slamming AWS APIs every 30-60 seconds.

loren avatar

yeah, i have a little parallel codebuild runner script that also streams back the cloudwatch logs… when it launches a couple hundred builds at once, the codebuild and cloudwatch consoles get rate limited and don’t handle retries correctly

loren avatar

i figure people see rate limiting, and then attribute it to some intentional action/change/restriction in aws, when it’s just how it works…

jose.amengual avatar
jose.amengual

ok, so this is all gossip then

jose.amengual avatar
jose.amengual

I mean, not what you guys have said


2023-10-12

2023-10-13

Muhammad Taqi avatar
Muhammad Taqi

Hello guys, I’m using Terraform to create an S3 private bucket with an IAM user and keys to access the bucket. Here is my Terraform code.

module "s3_bucket" {
  source  = "cloudposse/s3-bucket/aws"
  version = "4.0.0"
  name    = local.bucket_name

  acl                        = "private"
  enabled                 = true
  user_enabled         = true
  force_destroy         = true
  versioning_enabled = true
  sse_algorithm         = "AES256"

  block_public_acls                     = true
  allow_encrypted_uploads_only  = true
  allow_ssl_requests_only           = true
  block_public_policy                  = true
  ignore_public_acls                   = true
  cors_configuration                   = [
    {
      allowed_origins     = ["*"]
      allowed_methods  = ["GET", "PUT", "POST", "HEAD", "DELETE"]
      allowed_headers   = ["Authorization"]
      expose_headers    = []
      max_age_seconds = "3000"
    }
  ]
  allowed_bucket_actions        = ["s3:*"]
  lifecycle_configuration_rules  =  []
}

resource "aws_secretsmanager_secret" "s3_private_bucket_secret" {
  depends_on              = [module.s3_private_bucket]
  name                       = join("", [local.bucket_name, "-", "secret"])
  recovery_window_in_days = 0
}

resource "aws_secretsmanager_secret_version" "s3_private_bucket_secret_credentials" {
  depends_on    = [module.s3_private_bucket]
  secret_id         = aws_secretsmanager_secret.s3_private_bucket_secret.id
  secret_string   = jsonencode({
    KEY    = module.s3_private_bucket.access_key_id
    SECRET = module.s3_private_bucket.secret_access_key
    REGION = module.s3_private_bucket.bucket_region
    BUCKET = module.s3_private_bucket.bucket_id
  })
}

After running the above code, I can see a new user has been created in IAM with the name x-rc-bucket, with an access key and secret matching what’s stored in Secrets Manager, and the following policy attached:

{
	"Version": "2012-10-17",
	"Statement": [
		{
			"Action": "s3:*",
			"Effect": "Allow",
			"Resource": [
				"arn:aws:s3:::x-rc-bucket/*",
				"arn:aws:s3:::x-rc-bucket"
			]
		}
	]
}

Then I have a simple Python script which tries to upload a file to the S3 bucket using the keys from the secret above:

import os
import boto3

image = "x.jpg"
s3_filestore_path = "images/x.jpg"
filename, file_extension = os.path.splitext(image)
content_type_dict = {
    ".png": "image/png",
    ".html": "text/html",
    ".css": "text/css",
    ".js": "application/javascript",
    ".jpg": "image/jpeg",
    ".gif": "image/gif",
    ".jpeg": "image/jpeg",
}
content_type = content_type_dict[file_extension]
s3 = boto3.client(
    "s3",
    config=boto3.session.Config(signature_version="s3v4"),
    region_name="eu-west-3",
    aws_access_key_id="**",
    aws_secret_access_key="**",
)
# Upload the file's contents, not the literal filename string
with open(image, "rb") as f:
    s3.put_object(
        Body=f, Bucket="x-rc-bucket", Key=s3_filestore_path, ContentType=content_type
    )

It throws an error botocore.exceptions.ClientError: An error occurred (AccessDenied) when calling the PutObject operation: Access Denied.

What I’m looking for is that every bucket should have its own keys and be accessible with those specific keys only.

Muhammad Taqi avatar
Muhammad Taqi

Can someone here help me identify the issue?

Gabriela Campana (Cloud Posse) avatar
Gabriela Campana (Cloud Posse)

@Dan Miller (Cloud Posse)

Dan Miller (Cloud Posse) avatar
Dan Miller (Cloud Posse)

looks like you have allow_encrypted_uploads_only = true set, which is likely denying your PutObject request
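
For context, that input attaches a bucket policy following the standard AWS deny-unencrypted-uploads pattern, roughly like the statement below (the module’s exact Sids may differ). Passing ServerSideEncryption="AES256" in the put_object call should satisfy it.

{
	"Sid": "DenyIncorrectEncryptionHeader",
	"Effect": "Deny",
	"Principal": "*",
	"Action": "s3:PutObject",
	"Resource": "arn:aws:s3:::x-rc-bucket/*",
	"Condition": {
		"StringNotEquals": {
			"s3:x-amz-server-side-encryption": "AES256"
		}
	}
}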

Gabriela Campana (Cloud Posse) avatar
Gabriela Campana (Cloud Posse)

@Muhammad Taqi

jose.amengual avatar
jose.amengual

Another rumor: is it true AWS might deprecate Beanstalk in the future?

jose.amengual avatar
jose.amengual

@Vlad Ionescu (he/him) do you happen to know anything about this?

Vlad Ionescu (he/him) avatar
Vlad Ionescu (he/him)

Due to me being under a hilarious number of NDAs and knowing way more than I should, I have a rule to not comment on rumors or gossip. Sorry!

That said, I would put new workloads on ECS on Fargate or App Runner rather than Beanstalk. I would not stress about migrating off Beanstalk (AWS still maintains and offers SimpleDB to customers, you know), but I would not build a whole new company/platform on top of it.

tommy avatar

ECS also hasn’t been updated in more than a year. Is it so stable that it doesn’t need any updates, or is EKS what they’re focusing on?

Shaun Wang avatar
Shaun Wang

EKS is open source, AWS has to upgrade it to be conformant with the open source version


2023-10-15

an.rahulreddy avatar
an.rahulreddy

Hello everyone,

I write technical articles. Here is my article on Foundation Models on AWS Bedrock. Please give it a read; I am open to feedback.

I am planning more articles in this AWS Bedrock series.

https://medium.com/@techcontentspecialist/unlock-the-power-of-ai-foundation-models-with-amazon-bedrock-4937d30bc925

Unlock the Power of AI Foundation Models with Amazon Bedrock

This is a newly added AWS service that helps you to build and scale Generative AI Applications with Foundation Models.

2023-10-17

Wojciech Pietrzak avatar
Wojciech Pietrzak

Hello,

My team was tasked with getting a security audit of our cloud infrastructure setup.

If you had to choose a company to perform the audit, which criteria would you base your choice on?

Would somebody share their experience with this topic?

Stoor avatar

My suggestion would be to run a tool like Prowler (https://github.com/prowler-cloud/prowler) before you consider getting a third party. Resolve the low-hanging fruit. Then you can start thinking about the next steps.

prowler-cloud/prowler

Prowler is an Open Source Security tool for AWS, Azure and GCP to perform Cloud Security best practices assessments, audits, incident response, compliance, continuous monitoring, hardening and forensics readiness. Includes CIS, NIST 800, NIST CSF, CISA, FedRAMP, PCI-DSS, GDPR, HIPAA, FFIEC, SOC2, GXP, Well-Architected Security, ENS and more.

Wojciech Pietrzak avatar
Wojciech Pietrzak

we ran several tools like those

msa avatar

Where is the requirement coming from? SOC2 because customers require? Regulatory (PCI)? Internal want?

For audits with a certificate, you will want to call around and select someone. If you use controls automation (Drata, Vanta) they will list auditors who should be able to integrate with the service and save you time and headache.

If internal and just needs to get done, find a cheap provider (call 5, select one based on price). If internal and you want to learn more, find smaller provider knowledgeable about your platform/stack. Call a few up, good ones will be insightful.

Pricing tends to go up with brand; technically excellent teams who are not big names (Mandiant) tend not to cost more than sucky ones.

A safe starting point in Europe used to be NCC, not sure if still true.

Wojciech Pietrzak avatar
Wojciech Pietrzak

The requirement comes from ourselves.

We did our best, scanned with everything we had at hand, fixed the findings (some of them are whitelisted/ignored because they would impact our daily business and we don’t see them as critical). And now the company (and our team as well) would like to see if we overlooked something.

We don’t care about any certificates.

Shaun Wang avatar
Shaun Wang

Guidepoint Security, check their customer references

msa avatar

I had great luck with a small local boutique consultancy; the costs are half of “pentest providers” and I could work with them over a few years and assessments. We drilled down into various areas of the stack and practices - cloud, CI/CD, auth/z, etc. They did threat models, hw security. I found them through a recommendation, asking peers about a flexible and knowledgeable consultant with low staff turnover. I went through a host of brand names, but they priced themselves out, or had a template assessment that could be executed by a fresh graduate - not very useful when I have a question about the interaction of two authz systems.

Offensive Cyber Security Services | Fracture Labs

Proactively protect your systems & reduce risk with our offensive security experts. Click or contact us to learn about our specialized security testing services.

rohit avatar

Trying to understand: has anyone deployed Helm charts (a bunch of them) within a CloudFormation stack? How does that work? Does it wait for an EKS cluster to be spun up before deploying the Helm charts sequentially?

Shaun Wang avatar
Shaun Wang
Using AWS CloudFormation to deploy software into Amazon EKS clusters | Amazon Web Services

Learn how you can use AWS CloudFormation to model and deploy Kubernetes resource manifests or Helm charts into an Amazon Elastic Kubernetes Service (Amazon EKS) cluster.

2023-10-18

2023-10-20

Sairam Madichetty avatar
Sairam Madichetty

Hi guys, is this design possible? For cross accounts:
Infra account - CodeCommit, CodePipeline, S3, KMS
Dev account - CodeDeploy
Prod account - CodeDeploy

In CodePipeline, can we set the source as CodeCommit in the same account and the target as CodeDeploy in another account?

Alex Atkinson avatar
Alex Atkinson

It can do anything with any other account that you can permit via IAM. https://docs.aws.amazon.com/codepipeline/latest/userguide/pipelines-create-cross-account.html

Create a pipeline in CodePipeline that uses resources from another AWS account - AWS CodePipeline

Describes how to create a pipeline in CodePipeline that uses resources from another AWS account.

Alex Atkinson avatar
Alex Atkinson
Deploy an application in a different AWS account - AWS CodeDeploy

Learn how to initiate deployments in another of your organization’s accounts by using an IAM role that provides cross-account access.

Sairam Madichetty avatar
Sairam Madichetty

Thank you @Alex Atkinson I will go through them today.

Sairam Madichetty avatar
Sairam Madichetty

I see this in the docs https://docs.aws.amazon.com/codedeploy/latest/userguide/deployments-cross-account.html

“Although you might perform related work in different accounts, CodeDeploy deployment groups and the Amazon EC2 instances to which they deploy are strictly tied to the accounts under which they were created. You cannot, for example, add an instance that you launched in one account to a deployment group in another.”

this is where I’m confused if it is possible.


Alex Atkinson avatar
Alex Atkinson

No, you wouldn’t be able to do that. The workflow looks like this:

  1. Setup deployments in a bunch of accounts
  2. Create cross account roles from some accounts to a central one that grant access to codedeploy
  3. Assume the role of those other accounts from the central one to initiate a deploy (sketch below)
All the interactions of the deploy must happen within the accounts that CodeDeploy lives in, but you can trigger those deployments remotely.
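
In Terraform terms, step 3 looks roughly like an aliased provider per target account (role name/ARN below are placeholders):

provider "aws" {
  alias  = "dev"
  region = "us-east-1"

  assume_role {
    # Cross-account role created in the dev account (step 2), trusting
    # the central tooling account and permitting the CodeDeploy actions.
    role_arn = "arn:aws:iam::222222222222:role/central-codedeploy"
  }
}
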
Alex Atkinson avatar
Alex Atkinson

If you’re doing an env promotion scheme such as dev > qa > stag > prod, your application revision and other assets can have a workflow like this:

  1. Build happens - upload assets (containers, zip files, etc.) to wherever they go, and upload new application revision to s3.
  2. Deploy to dev - Copy the assets and app revision from the build acct into the dev s3 bucket && kick off code deploy.
  3. Deploy to qa - Copy the assets and app revision from the previous env into the qa s3 bucket && kick off code deploy.
  4. etc.
By the time you get to PROD, so long as you’re promoting assets, there’s no chance someone can make a new build and immediately kick it into PROD without a thick paper trail unless they hop the tooling.
Sairam Madichetty avatar
Sairam Madichetty

Thank you so much. It worked out (keeping the artifacts in s3 and kicking off from there).


2023-10-22

jonjitsu avatar
jonjitsu

Any opinions on resource naming conventions where they put the resource type in the name? ex: https://cloud.google.com/architecture/security-foundations/using-example-terraform#naming_conventions I’m not sure of the logic of doing that besides perhaps when looking at logs the name already has the type in it.

2023-10-23

2023-10-24

jose.amengual avatar
jose.amengual

I’m about to create my Org in AWS using TF; the module is ready and all, and I will be using the advanced organization structure. But I was wondering about my state: in which account should I put it? The management account?

jose.amengual avatar
jose.amengual

I’m a bit reluctant to create resources there

jose.amengual avatar
jose.amengual

how do you guys do it @CloudPosse @Andriy Knysh (Cloud Posse)

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

we usually create TF state in the root account since it’s what you have in the beginning, and even to provision all other accounts with TF, you need TF state

loren avatar

alternatively, you could bootstrap a second account using cli/console, then use import blocks to manage it the same as everything else
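
With Terraform 1.5+ that can be done declaratively; a sketch with placeholder IDs:

import {
  # Adopt the manually-bootstrapped account into state on the next apply.
  to = aws_organizations_account.tfstate
  id = "333333333333" # placeholder account ID
}

resource "aws_organizations_account" "tfstate" {
  name  = "tfstate"
  email = "aws+tfstate@example.com" # placeholder
}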

jose.amengual avatar
jose.amengual

interesting

loren avatar

you can of course move the state pretty easily also. bootstrap in the management account using tf, create the new account, then move the state to the new account

jose.amengual avatar
jose.amengual

yes, that is not hard to do

RB avatar

can you delegate organizations to a non-root-org account? TIL

jose.amengual avatar
jose.amengual

yes, I was thinking of using something like a management account to do all the Org stuff

jose.amengual avatar
jose.amengual

ahhh but that article is to delegate management of an OU to an account

jose.amengual avatar
jose.amengual

you still need the account that actually manages the OUs to delegate that

loren avatar

delegation of organizations is limited to read-only actions, and to policy-management… can’t do anything like CreateAccount, for example, from the delegated account…

jose.amengual avatar
jose.amengual

So without the root account an Org can’t be created and child accounts can’t be created/invited

jose.amengual avatar
jose.amengual

I guess where the docs make this confusing is when they call the root account the management account

loren avatar

indeed! root account is now === management account. and also sometimes there is a management account that is not the root account

jose.amengual avatar
jose.amengual

So you have a root account, and then you create root OU and invite the management account to the root OU and then in the management account you create another OU structure and invite/create all the accounts there?

RB avatar

i dont think you can create another ou structure or invite/create accounts in a non-root account

RB avatar

it sounds like you’ll have to do it all in the management/root account

RB avatar

except for a very small subset of perms

loren avatar

there’s only one org, and only one ou structure. create one standalone account, setup billing however you like (credit card or invoicing), and create/enable the org in that account. that is now the root/management account. create new accounts from org api using the root/management account

loren avatar

all CreateAccount actions must use a provider that points at the root/management account. where your backend points can be a different account (once it exists)
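
A rough Terraform sketch of that flow (names/emails are placeholders; the provider credentials must belong to the root/management account):

# Creating the org in this account makes it the root/management account.
resource "aws_organizations_organization" "this" {
  feature_set = "ALL"
}

# Member accounts can only be created from the root/management account.
resource "aws_organizations_account" "dev" {
  name  = "dev"
  email = "aws+dev@example.com" # placeholder
}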

jose.amengual avatar
jose.amengual

yes

TechHippie avatar
TechHippie

Hello - I am an EKS novice, so forgive me if my question is pretty basic. I am creating Terraform code to create an EKS cluster and node group. In addition, I also want to create 3 cluster roles (deployer, administrator, and developer), mapping them to IAM roles. Can anyone help me with how to create the roles and configure the mapping to IAM roles/users?

RB avatar

See this example

https://github.com/cloudposse/terraform-aws-eks-cluster/blob/3.0.0/examples/complete/main.tf

The eks module is using a source as a relative path but you can replace that with the tf registry source from the readme.

That will get you the eks and eks node group.

Then read over the readme for the eks module and look at these map_ inputs (see the sketch after the example below).

https://github.com/cloudposse/terraform-aws-eks-cluster/tree/3.0.0#input_map_additional_aws_accounts

provider "aws" {
  region = var.region
}

module "label" {
  source  = "cloudposse/label/null"
  version = "0.25.0"

  attributes = ["cluster"]

  context = module.this.context
}

locals {
  # The usage of the specific kubernetes.io/cluster/* resource tags below are required
  # for EKS and Kubernetes to discover and manage networking resources
  # <https://aws.amazon.com/premiumsupport/knowledge-center/eks-vpc-subnet-discovery/>
  # <https://github.com/kubernetes-sigs/aws-load-balancer-controller/blob/main/docs/deploy/subnet_discovery.md>
  tags = { "kubernetes.io/cluster/${module.label.id}" = "shared" }

  # required tags to make ALB ingress work https://docs.aws.amazon.com/eks/latest/userguide/alb-ingress.html
  public_subnets_additional_tags = {
    "kubernetes.io/role/elb" : 1
  }
  private_subnets_additional_tags = {
    "kubernetes.io/role/internal-elb" : 1
  }
}

module "vpc" {
  source  = "cloudposse/vpc/aws"
  version = "2.1.0"

  ipv4_primary_cidr_block = "172.16.0.0/16"
  tags                    = local.tags

  context = module.this.context
}

module "subnets" {
  source  = "cloudposse/dynamic-subnets/aws"
  version = "2.3.0"

  availability_zones              = var.availability_zones
  vpc_id                          = module.vpc.vpc_id
  igw_id                          = [module.vpc.igw_id]
  ipv4_cidr_block                 = [module.vpc.vpc_cidr_block]
  max_nats                        = 1
  nat_gateway_enabled             = true
  nat_instance_enabled            = false
  tags                            = local.tags
  public_subnets_additional_tags  = local.public_subnets_additional_tags
  private_subnets_additional_tags = local.private_subnets_additional_tags

  context = module.this.context
}

module "eks_cluster" {
  source = "../../"

  vpc_id                       = module.vpc.vpc_id
  subnet_ids                   = concat(module.subnets.private_subnet_ids, module.subnets.public_subnet_ids)
  kubernetes_version           = var.kubernetes_version
  local_exec_interpreter       = var.local_exec_interpreter
  oidc_provider_enabled        = var.oidc_provider_enabled
  enabled_cluster_log_types    = var.enabled_cluster_log_types
  cluster_log_retention_period = var.cluster_log_retention_period

  cluster_encryption_config_enabled                         = var.cluster_encryption_config_enabled
  cluster_encryption_config_kms_key_id                      = var.cluster_encryption_config_kms_key_id
  cluster_encryption_config_kms_key_enable_key_rotation     = var.cluster_encryption_config_kms_key_enable_key_rotation
  cluster_encryption_config_kms_key_deletion_window_in_days = var.cluster_encryption_config_kms_key_deletion_window_in_days
  cluster_encryption_config_kms_key_policy                  = var.cluster_encryption_config_kms_key_policy
  cluster_encryption_config_resources                       = var.cluster_encryption_config_resources

  addons            = var.addons
  addons_depends_on = [module.eks_node_group]

  # We need to create a new Security Group only if the EKS cluster is used with unmanaged worker nodes.
  # EKS creates a managed Security Group for the cluster automatically, places the control plane and managed nodes into the security group,
  # and allows all communications between the control plane and the managed worker nodes
  # (EKS applies it to ENIs that are attached to EKS Control Plane master nodes and to any managed workloads).
  # If only Managed Node Groups are used, we don't need to create a separate Security Group;
  # otherwise we place the cluster in two SGs - one that is created by EKS, the other one that the module creates.
  # See <https://docs.aws.amazon.com/eks/latest/userguide/sec-group-reqs.html> for more details.
  create_security_group = false

  # This is to test `allowed_security_group_ids` and `allowed_cidr_blocks`
  # In a real cluster, these should be some other (existing) Security Groups and CIDR blocks to allow access to the cluster
  allowed_security_group_ids = [module.vpc.vpc_default_security_group_id]
  allowed_cidr_blocks        = [module.vpc.vpc_cidr_block]

  # For manual testing. In particular, set `false` if local configuration/state
  # has a cluster but the cluster was deleted by nightly cleanup, in order for
  # `terraform destroy` to succeed.
  apply_config_map_aws_auth = var.apply_config_map_aws_auth

  context = module.this.context

  cluster_depends_on = [module.subnets]
}

module "eks_node_group" {
  source  = "cloudposse/eks-node-group/aws"
  version = "2.4.0"

  subnet_ids        = module.subnets.private_subnet_ids
  cluster_name      = module.eks_cluster.eks_cluster_id
  instance_types    = var.instance_types
  desired_size      = var.desired_size
  min_size          = var.min_size
  max_size          = var.max_size
  kubernetes_labels = var.kubernetes_labels

  # Prevent the node groups from being created before the Kubernetes aws-auth ConfigMap
  module_depends_on = module.eks_cluster.kubernetes_config_map_id

  context = module.this.context
}
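
For the three cluster roles, the map_ inputs mentioned above feed the aws-auth ConfigMap; a hedged sketch (role ARNs and group names are placeholders, and Kubernetes groups other than the built-in system:masters need your own RBAC bindings):

module "eks_cluster" {
  # ...inputs as in the example above...

  map_additional_iam_roles = [
    {
      rolearn  = "arn:aws:iam::111111111111:role/eks-administrator" # placeholder
      username = "administrator"
      groups   = ["system:masters"]
    },
    {
      rolearn  = "arn:aws:iam::111111111111:role/eks-developer" # placeholder
      username = "developer"
      groups   = ["developer"] # bind with a ClusterRoleBinding you create
    }
  ]
}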

2023-10-25

ghostface avatar
ghostface

can an internal api gateway be reached from a VPC in another account?

rohit avatar

I am assuming you mean AWS API Gateway, and it’s implemented as private vs. regional? Another account can create a VPC endpoint for API Gateway to access one in another account (sketch after the list):

  1. Account A - API Gateway is setup
  2. Account B - Create VPC endpoint to Account A’s API Gateway
  3. Account B - Make sure endpoint policy allows invoke
  4. Account B - Make requests to API Gateway.
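
A minimal sketch of step 2 (all IDs are placeholders; for a private API, its resource policy must also allow invoke from the endpoint):

resource "aws_vpc_endpoint" "execute_api" {
  vpc_id              = "vpc-0abc..."                         # Account B's VPC
  service_name        = "com.amazonaws.eu-west-1.execute-api" # match your region
  vpc_endpoint_type   = "Interface"
  private_dns_enabled = true
  subnet_ids          = ["subnet-0abc..."]
  security_group_ids  = ["sg-0abc..."] # must allow 443 from callers
}
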
jose.amengual avatar
jose.amengual

and then you can have API gateway policies to allow certain VPC IDs to connect if you want

jimp avatar

Finally, AWS ECR has announced support for remote caching using buildkit.

Here’s an example from the announcement:

docker build -t amazonaws.com/buildkit-test:image \
--cache-to mode=max,image-manifest=true,oci-mediatypes=true,type=registry,ref=amazonaws.com/buildkit-test:cache \
--cache-from type=registry,ref=amazonaws.com/buildkit-test:cache .

docker push amazonaws.com/buildkit-test:image

The feature was introduced in buildkit v0.12. The key syntax is image-manifest=true,oci-mediatypes=true

May your builds be speedy and true!

Announcing remote cache support in Amazon ECR for BuildKit clients | Amazon Web Services

This feature will be pre-installed and supported by Docker when version 25.0 is released. This feature is already released in Buildkit versions of 12.0 or later and is available now on Finch versions 0.8 or later. Introduction Amazon Elastic Container Registry (Amazon ECR) is a fully managed container registry that customers use to store, share, […]

Mannan Bhuiyan avatar
Mannan Bhuiyan

Hello guys, can anyone help me find a solution for monitoring the disk usage of an ECS cluster and setting a CloudWatch alarm, or creating an alert, when the disk reaches 70 GB or 70% full?

Gabriela Campana (Cloud Posse) avatar
Gabriela Campana (Cloud Posse)

@Max Lobur (Cloud Posse)

Max Lobur (Cloud Posse) avatar
Max Lobur (Cloud Posse)
resource "aws_cloudwatch_metric_alarm" "cpu_utilization_high" {
  count               = module.this.enabled ? 1 : 0
  alarm_name          = module.cpu_utilization_high_alarm_label.id
  comparison_operator = "GreaterThanThreshold"
  evaluation_periods  = var.cpu_utilization_high_evaluation_periods
  metric_name         = "CPUUtilization"
  namespace           = "AWS/ECS"
  period              = var.cpu_utilization_high_period
  statistic           = "Average"
  threshold           = local.thresholds["CPUUtilizationHighThreshold"]

  alarm_description = format(
    var.alarm_description,
    "CPU",
    "High",
    var.cpu_utilization_high_period / 60,
    var.cpu_utilization_high_evaluation_periods
  )

  alarm_actions = compact(var.cpu_utilization_high_alarm_actions)
  ok_actions    = compact(var.cpu_utilization_high_ok_actions)

  dimensions = local.dimensions_map[var.service_name == "" ? "cluster" : "service"]
}
Max Lobur (Cloud Posse) avatar
Max Lobur (Cloud Posse)

Not sure there’s even a cloudwatch metric for that. Just briefly searched and couldn’t find one
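
One option for EC2-backed clusters, assuming the CloudWatch agent is installed on the container instances (it publishes disk_used_percent under the CWAgent namespace), would be an alarm along these lines:

resource "aws_cloudwatch_metric_alarm" "disk_utilization_high" {
  alarm_name          = "ecs-disk-utilization-high"
  comparison_operator = "GreaterThanThreshold"
  evaluation_periods  = 3
  metric_name         = "disk_used_percent"
  namespace           = "CWAgent"
  period              = 300
  statistic           = "Average"
  threshold           = 70

  dimensions = {
    # Placeholder: the available dimensions depend on the agent's
    # aggregation config, e.g. per AutoScalingGroupName for the cluster ASG.
    AutoScalingGroupName = "my-ecs-cluster-asg"
  }
}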

2023-10-30

Benedikt Dollinger (XALT) avatar
Benedikt Dollinger (XALT)

Join Us for a Platform Engineering Webinar!

Hey everyone!

We’re excited to invite you to our upcoming webinar on Platform Engineering, featuring insights from one of our valued customers. This session will guide you through the process of creating AWS Accounts swiftly using Jira Service Management and a Developer Self-Service, empowering you to unleash the full potential of your AWS Cloud Infrastructure.

Date: Friday, 17th November 2023
Time: 10:00 AM CET
Location: Live & Online

What you will learn:

• Set up AWS Infrastructure in minutes with JSM Cloud & Developer Self-Service
• Navigate our seamless account creation process within JSM
• Experience the efficiency of approvals for a streamlined workflow
• Explore the comprehensive account catalog in Asset Management
• Leverage AWS & JSM for enhanced cost efficiency, speed, security, and compliance through our developer self-service

Don’t miss this opportunity to supercharge your AWS Cloud Infrastructure deployment!

Save your spot: Platform Engineering Webinar Registration

See you there! wave TEAM XALT

Platform Engineering Webinar | Set Up AWS Infrastructure in Minutes with JSM

Learn how to create an AWS account with Jira Service Management and streamline your workflow. Join our free webinar for IT leaders, cloud experts, and product owners on November 17th. Save your spot now!

TechHippie avatar
TechHippie

Hello team - is anyone aware of any Terraform code or module that could create an AWS LB or nginx LB based on user input? Any guidance with one or the other would be really helpful.
