#aws (2022-03)

aws Discussion related to Amazon Web Services (AWS)

Archive: https://archive.sweetops.com/aws/

2022-03-02

Bart Coddens avatar
Bart Coddens

Trusted Advisor supports an organizational view. Can you create such a report via the CLI?
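
(As far as I can tell, the organizational view report itself is generated from the console rather than a single CLI call, but the per-account check data is exposed through the Support API, so a cross-account report can be scripted. A hedged sketch; a Business or Enterprise support plan is required, the Support API lives in us-east-1, and the check ID is a placeholder:)

aws support describe-trusted-advisor-checks --language en --region us-east-1
aws support describe-trusted-advisor-check-result --check-id <check-id> --language en --region us-east-1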

sohaibahmed98 avatar
sohaibahmed98
Keptn - Cloud-native application life-cycle orchestration.

Keptn automates observability, SLO-driven multi-stage delivery, and operations

2022-03-03

Shreyank Sharma avatar
Shreyank Sharma

We are running a web application written in Java (Tomcat 8) hosted on AWS Elastic Beanstalk.

A few weeks back we started getting 503 errors randomly.

When we checked the elasticbeanstalk-error_log:

[Thu Mar 03 13:22:12.906144 2022] [proxy:error] [pid 14882:tid 139757338711808] (13)Permission denied: AH02454: HTTP: attempt to connect to Unix domain socket /var/run/httpd/ (localhost) failed
[Thu Mar 03 13:22:12.906202 2022] [proxy_http:error] [pid 14882:tid 139757338711808] [client 172.31.17.0:61382] AH01114: HTTP: failed to make connection to backend: httpd-UDS, referer: <http://our-domain.com/1/callBackLog.jsp>

The error logs suggest a connection error with a backend Unix socket. When we checked the /var/run/httpd/ folder, there were no Unix sockets (.sock files).

But in the Apache httpd config:

<VirtualHost *:80>
  <Proxy *>
    Require all granted
  </Proxy>
  ProxyPass / <http://localhost:8080/> retry=0
  ProxyPassReverse / <http://localhost:8080/>
  ProxyPreserveHost on

  ErrorLog /var/log/httpd/elasticbeanstalk-error_log
</VirtualHost>

the proxy backend is an IP address, not a Unix socket.

As per the config, httpd should connect to the backend IP address (localhost:8080), so why is it complaining about a Unix socket?

Has anyone faced similar issues?

2022-03-05

Antarr Byrd avatar
Antarr Byrd

Anyone familiar with AWS Transfer? I’m trying to upload a file using SFTP but I’m getting a permission denied error:

sftp> put desktop.ini
Uploading desktop.ini to /reports/desktop.ini
remote open("/reports/desktop.ini"): Permission denied
sftp> 
managedkaos avatar
managedkaos

It’s been a while since I used Transfer, but I would say check the filesystem or bucket that you are using on the backend. In either case, make sure the location is created and/or the user has permission to write. If I can recall the steps I will update.

Baronne Mouton avatar
Baronne Mouton

same, been a while - I recall needing to set a scope-down policy .. details here: https://docs.aws.amazon.com/transfer/latest/userguide/users-policies.html

Managing access controls - AWS Transfer Family

Create an IAM policy in AWS Transfer Family to allow your users to access your Amazon S3 bucket.

Antarr Byrd avatar
Antarr Byrd

The problem was the policy attached to the user. I needed to remove it and attach a role with the correct permissions.
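
(For reference, a hedged sketch of pointing a Transfer Family user at an IAM role with the correct S3 permissions; the server ID, user name, and role ARN are placeholders:)

aws transfer update-user \
  --server-id s-1234567890abcdef0 \
  --user-name reports-user \
  --role arn:aws:iam::123456789012:role/transfer-s3-access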

1

2022-03-07

Nishant Thorat avatar
Nishant Thorat

When it comes to cloud data leak protection, all eyes turn to public S3 buckets or public EC2 servers. But even if your EC2 instance is not exposed, the data may still leak. EBS volumes are plentiful and hence should be continuously assessed for risk.

Two ways EBS volumes can leak data: (unintended) public EBS volume snapshots, and unencrypted EBS volumes/snapshots. https://www.cloudyali.io/blogs/finding-unencrypted-aws-ebs-volumes-at-scale

Instantly find all unencrypted EBS volumes from all AWS accounts and regions.

Encryption protects data stored on volumes, disk I/O, and the snapshots created from a volume, shielding your sensitive data from exploits and unauthorized users.
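
(A minimal sketch of the same checks with the plain AWS CLI, run per account and region; both filters are standard EC2 API parameters:)

# unencrypted EBS volumes
aws ec2 describe-volumes --filters Name=encrypted,Values=false \
  --query 'Volumes[].VolumeId' --output text

# snapshots you own that are shared publicly
aws ec2 describe-snapshots --owner-ids self --restorable-by-user-ids all \
  --query 'Snapshots[].SnapshotId' --output text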

chouhanshreya17 avatar
chouhanshreya17

Hi, can anyone help me with this problem? There are 50-60 accounts under an AWS Organization. There is a central account which manages them all. I want to be notified by mail if a backup fails in any account.

In short, the solution shouldn’t have to be implemented in every account, just in one account that manages them all.

prashanttiwari1337 avatar
prashanttiwari1337

try looking at centralised logging and creating a CloudWatch alarm when a "backup failed" message is seen in the logs
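
(Another option, hedged: AWS Backup can publish vault events to an SNS topic, and each member account's vault can point at one central topic whose policy allows cross-account publishes. The topic ARN and vault name below are placeholders:)

aws backup put-backup-vault-notifications \
  --backup-vault-name Default \
  --backup-vault-events BACKUP_JOB_FAILED \
  --sns-topic-arn arn:aws:sns:us-east-1:111122223333:central-backup-alerts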

2022-03-08

Balazs Varga avatar
Balazs Varga

Facing the following issue on Aurora Serverless: Aborted connection 148 to db: 'db' user: 'user' host: 'rds_ip' (Got an error reading communication packets). Any advice?

2022-03-09

Baronne Mouton avatar
Baronne Mouton

hi, can I request a feature for the cloudposse/terraform-aws-backup module? It doesn’t appear to have the ability to enable or set VSS (Volume Shadow Copy Service). In the aws_backup_plan resource, the following addition is required: advanced_backup_setting { backup_options = { WindowsVSS = "enabled" } resource_type = "EC2" }

Wédney Yuri avatar
Wédney Yuri

There is the issue tracker to report any bugs or file feature requests.

RB avatar

Describe the Feature

Module doesn’t allow the Windows VSS setting to be enabled (disabled by default)

Expected Behavior

A variable should be configured and the aws_backup_plan resource modified to add this configuration:

  advanced_backup_setting {
    backup_options = {
      WindowsVSS = "enabled"
    }
    resource_type = "EC2"
  }

Use Case

Used for Windows users who wish to create application consistent backups

Describe Ideal Solution

Ability to enable/disable the feature if required

Alternatives Considered

built a non-modularised version of this using the various aws provider resources

Baronne Mouton avatar
Baronne Mouton

no problem

mikesew avatar
mikesew

Any jq / AWS CLI junkies here? I’m trying to get the SubnetId and Name (from the tag) of VPC subnets matching a string. I’m having a problem getting those things out of arrays and into a simpler format I can parse.

aws ec2 describe-subnets --filters "Name=tag:Name,Values=*private1*" \
--query 'Subnets[*].[ SubnetId, Tags[?Key==`Name`].Value ]'
[
    [
        "subnet-00c332dca528235fe",
        [
            "my-vpc-private1-us-east-1a-subnet"
        ]
    ]
]
mikesew avatar
mikesew

here’s how I ended up resolving it:

aws ec2 describe-subnets --filters "Name=tag:Name,Values=*private*" \
   | jq -r '.Subnets[] | (.Tags[]//[]|select(.Key=="Name")|.Value) as $name | "\(.SubnetId) \($name)" '
subnet-0ceb34435b2c1d11 vpc-private1-us-east-1a
subnet-0b233052g4876c07 vpc-private2-us-east-1b
1
managedkaos avatar
managedkaos

Sorry for the delayed reply but you can do all of the filtering in the CLI call so you don’t need to pipe over to jq:

 aws ec2 describe-subnets --query='Subnets[].{SubnetId:SubnetId, Name:Tags[?Key==`Name`].Value[] | [0]}' --output=text
1
mikesew avatar
mikesew

So with that query clause, I believe it did not support wildcards. UPDATE: I tried combining your query clause with my filter clause above and you’re right, I got the same result without having to resort to parsing with jq.

aws ec2 describe-subnets  \
--filters "Name=tag:Name,Values=*private*" \
--query='Subnets[].{SubnetId:SubnetId, Name:Tags[?Key==`Name`].Value[] | [0]}' --output=text

2022-03-10

Matt Gowie avatar
Matt Gowie

Leapp just released v0.10.0 — https://www.leapp.cloud/releases

Super excited about this release as it enables logging into the AWS console from Leapp, which was my major feature request before being able to fully switch away from aws-vault. If any of y’all are using aws-vault then be sure to check out Leapp — It’s a vastly better tool.

Leapp - Releases
Download Leapp: Manage your Cloud credentials locally and improve your workflow with the only open-source desktop app you’ll ever need.

Is your feature request related to a problem? Please describe.
Using aws-vault, I can utilize the aws-vault login command which opens a new browser tab and logs me into the AWS Console using the selected profile or role.

Describe the solution you’d like
I’d like to be able to do the same with Leapp as this is a major time saver when switching between many accounts.

Describe alternatives you’ve considered
None.

Additional context
N/A

1
loren avatar

the terminology in leapp sure takes some getting used to, when coming from directly managing awscli profiles


Matt Gowie avatar
Matt Gowie

Yeah — Agreed.

loren avatar

though, i can’t really tell whether it’s awscli that is inconsistent with aws terminology, or if it is leapp

Nicolò Marchesi avatar
Nicolò Marchesi

Just you wait for the concurrent multi console extension coming in a few weeks @Matt Gowie

1
managedkaos avatar
managedkaos

Thanks for sharing. Just tried it out. Nice look and feel. Thinking it would be cool if there was a way to “import” existing profiles. Getting a backup of the existing credential file was nice though.

Question, is there a way to trigger Leapp from the CLI? My workflow is to create an AWS profile and then tie it to a Python virtualenv. When I start the environment, it hooks into the AWS credentials and sets the environment for the profile associated with the environment.

With Leapp, it looks like I would have to open the app and start a session before activating my Python environment. If I could pull Leapp into my current workflow (without having to go to a GUI), it would be awesome.

Also, for folks that switch contexts often, is there a way to activate multiple profiles at the same time? I could, of course, go into the UI and start multiple sessions, but it would be cool if there was a way to link sessions so they are activated at the same time; for example, I need two sessions in dev and prod accounts at the same time.

1
Nicolò Marchesi avatar
Nicolò Marchesi

CLI is coming in a week or two and we’re evaluating the importing process to make onboarding smoother

2
managedkaos avatar
managedkaos

Bravo

managedkaos avatar
managedkaos

yeah it was kind of shocking to not see my existing profiles

Nicolò Marchesi avatar
Nicolò Marchesi

Yeah, I know; we’d better make it clear to people that Leapp is going to clean the profiles

1
Matt Gowie avatar
Matt Gowie

@Nicolò Marchesi first thing I tried when I got the update was starting session #1 and then going to start session #2 to see if the multi console sessions stuff that you shared was included in this update. It didn’t work of course, but I am excited to test out that upcoming functionality!

Matt Gowie avatar
Matt Gowie

@managedkaos those were some good feedback items — I’d like to see that stuff too! Nicolo + team have been super responsive and active in GH, so I’m sure adding an issue there or checking out the issues they have open would be a good call.

1
managedkaos avatar
managedkaos

indeed. i will copy-pasta this over to GH!

Nicolò Marchesi avatar
Nicolò Marchesi

Thanks guys! also you can take a look at https://roadmap.leapp.cloud for a high-level overview

1
1
Patrice Lachance avatar
Patrice Lachance
Multi-Console Browser Extension - Leapp - Docs

Leapp is a tool for developers to manage, secure, and access the cloud. Manage AWS and Azure credentials centrally

Just you wait for the concurrent multi console extension coming in a few weeks @Matt Gowie

Neil Johari avatar
Neil Johari

Hey team! Just trying to design something and wanted to know if my understanding is correct from one of you legends

  1. In an AppMesh, does it make sense to always have every Virtual Service associated with a Virtual Router and Virtual Node?
  2. Is the way you allow communication between two Virtual Nodes via the “backends”? Why does 2-way traffic work if backends are meant to be egress traffic only?
  3. Can I have a route in my Virtual Gateway direct traffic to CloudFront? I wanted to have a Virtual Node whose service discovery was DNS hostname = CloudFront address
Jim Park avatar
Jim Park
  1. Virtual Service can apparently be directed to Virtual Node directly, but when I last implemented it, I used a Virtual Router with a single Virtual Node target.
  2. I haven’t used backends before, but I believe you only need to define a backend if you’re going to be using the service mesh to provide discovery and routing to that backend. That’s why backends are all Virtual Services.
  3. I suspect no, but I’m curious to hear if you get this to work.
1
Neil Johari avatar
Neil Johari

Awesome, thanks! I would use the SweetOps terraform module, but it has a lot of moving parts. So I figured struggling through it myself will probably be beneficial

(also doesn’t help that the module is outdated by a few years and uses deprecated features, like template_file in a data block)

2022-03-11

Balazs Varga avatar
Balazs Varga

if I have a bucket with MFA delete enabled and I would like to exclude a folder from this "rule", how can I do that?

Jesus Martinez avatar
Jesus Martinez

in the policy you explicitly deny that folder

Jesus Martinez avatar
Jesus Martinez
 "Effect": "Deny",
      "Principal": "*",
      "Action": "s3:*",
      "Resource": [
        "arn:aws:s3:::MyExampleBucket/folder1",
        "arn:aws:s3:::MyExampleBucket/folder1/*"
      ]
Jesus Martinez avatar
Jesus Martinez
Policy evaluation logic - AWS Identity and Access Management

Learn how JSON policies are evaluated within a single account to return either Allow or Deny. To learn how AWS evaluates policies for cross-account access, see .

2022-03-12

Neil Johari avatar
Neil Johari

Hey team, I’m going absolutely nuts… Has anyone here had success with App Mesh with the Envoy proxy? I can’t seem to get Envoy to work correctly, despite all logs looking good

More details in

Neil Johari avatar
Neil Johari

My problem seems identical to this: https://docs.aws.amazon.com/app-mesh/latest/userguide/troubleshooting-setup.html#ts-setup-lb-mesh-endpoint-health-check

I deployed a Bastion and can hit my core app, but not the virtual gateway backed by the envoy proxy

It immediately refuses connections on port 80 (which is correctly mapped), and hangs on curl to 9901 (envoy admin interface port)

App Mesh setup troubleshooting - AWS App Mesh

This topic details common issues that you may experience with App Mesh setup.

Shaun Wang avatar
Shaun Wang

The virtual GW is an LB from what I remember; make sure your bastion host is in the same VPC and allow incoming traffic on the SG of the LB. Check the Envoy logs in CloudWatch or the VPC Flow Logs on the LB and work back from there

Neil Johari avatar
Neil Johari

Yep, the VGW is what the LB is trying to connect to. The bastion is in the same VPC and can reach the main app’s ECS service but can’t reach the VGW.

Logs seem ok, I can see it responding to the ECS health checks and it looks good

[2022-03-13 05:48:44.554][14][debug][conn_handler] [source/server/active_tcp_listener.cc:140] [C321] new connection from 127.0.0.1:39462
...
':authority', '127.0.0.1:9901'
':path', '/stats?format=json'
...
[2022-03-13 05:48:44.554][14][debug][admin] [source/server/admin/admin_filter.cc:66] [C321][S2351538453251859789] request complete: path: /stats?format=json

So something with external connectivity? The ports seem bound correctly:

80:80/tcp, 9901:9901/tcp

Same thing on the sidecar (minus the port 80, since that’s for the app itself)

Shaun Wang avatar
Shaun Wang

When you try to connect from the bastion host to the VGW, the connection source should be a private IP, not local 127.0.0.1

Neil Johari avatar
Neil Johari

You’re going to love this: it was a typo in my APPMESH_RESOURCE_ARN. For any future readers who stumble onto this… virtualGateway vs virtualNode will cause frustration

Darren Cunningham avatar
Darren Cunningham

if you haven’t already, please for the sanity of the next person report that to the document maintainer.

jose.amengual avatar
jose.amengual

another reason I hate AppMesh

Shaun Wang avatar
Shaun Wang

Wait where is the typo? I don’t see anything

2022-03-13

2022-03-14

Frank avatar

I was checking this example, and am wondering if it could even work? https://github.com/cloudposse/terraform-aws-transit-gateway/blob/master/examples/multi-account/main.tf

# Create the Transit Gateway, route table associations/propagations, and static TGW routes in the `network` account.
# Enable sharing the Transit Gateway with the Organization using Resource Access Manager (RAM).
# If you would like to share resources with your organization or organizational units,
# then you must use the AWS RAM console or CLI command to enable sharing with AWS Organizations.
# When you share resources within your organization,
# AWS RAM does not send invitations to principals. Principals in your organization get access to shared resources without exchanging invitations.
# <https://docs.aws.amazon.com/ram/latest/userguide/getting-started-sharing.html>

module "transit_gateway" {
  source = "../../"

  ram_resource_share_enabled = true

  create_transit_gateway                                         = true
  create_transit_gateway_route_table                             = true
  create_transit_gateway_vpc_attachment                          = false
  create_transit_gateway_route_table_association_and_propagation = true

  config = {
    prod = {
      vpc_id                            = null
      vpc_cidr                          = null
      subnet_ids                        = null
      subnet_route_table_ids            = null
      route_to                          = null
      route_to_cidr_blocks              = null
      transit_gateway_vpc_attachment_id = module.transit_gateway_vpc_attachments_and_subnet_routes_prod.transit_gateway_vpc_attachment_ids["prod"]
      static_routes = [
        {
          blackhole              = true
          destination_cidr_block = "0.0.0.0/0"
        },
        {
          blackhole              = false
          destination_cidr_block = "172.16.1.0/24"
        }
      ]
    },
    staging = {
      vpc_id                            = null
      vpc_cidr                          = null
      subnet_ids                        = null
      subnet_route_table_ids            = null
      route_to                          = null
      route_to_cidr_blocks              = null
      transit_gateway_vpc_attachment_id = module.transit_gateway_vpc_attachments_and_subnet_routes_staging.transit_gateway_vpc_attachment_ids["staging"]
      static_routes = [
        {
          blackhole              = false
          destination_cidr_block = "172.32.1.0/24"
        }
      ]
    },
    dev = {
      vpc_id                            = null
      vpc_cidr                          = null
      subnet_ids                        = null
      subnet_route_table_ids            = null
      route_to                          = null
      route_to_cidr_blocks              = null
      static_routes                     = null
      transit_gateway_vpc_attachment_id = module.transit_gateway_vpc_attachments_and_subnet_routes_dev.transit_gateway_vpc_attachment_ids["dev"]
    }
  }

  context = module.this.context

  providers = {
    aws = aws.network
  }
}


# Create the Transit Gateway VPC attachments and subnets routes in the `prod`, `staging` and `dev` accounts

module "transit_gateway_vpc_attachments_and_subnet_routes_prod" {
  source = "../../"

  # `prod` account can access the Transit Gateway in the `network` account since we shared the Transit Gateway with the Organization using Resource Access Manager
  existing_transit_gateway_id             = module.transit_gateway.transit_gateway_id
  existing_transit_gateway_route_table_id = module.transit_gateway.transit_gateway_route_table_id

  create_transit_gateway                                         = false
  create_transit_gateway_route_table                             = false
  create_transit_gateway_vpc_attachment                          = true
  create_transit_gateway_route_table_association_and_propagation = false

  config = {
    prod = {
      vpc_id                 = module.vpc_prod.vpc_id
      vpc_cidr               = module.vpc_prod.vpc_cidr_block
      subnet_ids             = module.subnets_prod.private_subnet_ids
      subnet_route_table_ids = module.subnets_prod.private_route_table_ids
      route_to               = null
      route_to_cidr_blocks = [
        module.vpc_staging.vpc_cidr_block,
        module.vpc_dev.vpc_cidr_block
      ]
      static_routes                     = null
      transit_gateway_vpc_attachment_id = null
    }
  }

  context = module.this.context

  providers = {
    aws = aws.prod
  }
}

module "transit_gateway_vpc_attachments_and_subnet_routes_staging" {
  source = "../../"

  # `staging` account can access the Transit Gateway in the `network` account since we shared the Transit Gateway with the Organization using Resource Access Manager
  existing_transit_gateway_id             = module.transit_gateway.transit_gateway_id
  existing_transit_gateway_route_table_id = module.transit_gateway.transit_gateway_route_table_id

  create_transit_gateway                                         = false
  create_transit_gateway_route_table                             = false
  create_transit_gateway_vpc_attachment                          = true
  create_transit_gateway_route_table_association_and_propagation = false

  config = {
    staging = {
      vpc_id                 = module.vpc_staging.vpc_id
      vpc_cidr               = module.vpc_staging.vpc_cidr_block
      subnet_ids             = module.subnets_staging.private_subnet_ids
      subnet_route_table_ids = module.subnets_staging.private_route_table_ids
      route_to               = null
      route_to_cidr_blocks = [
        module.vpc_dev.vpc_cidr_block
      ]
      static_routes                     = null
      transit_gateway_vpc_attachment_id = null
    }
  }

  context = module.this.context

  providers = {
    aws = aws.staging
  }
}

module "transit_gateway_vpc_attachments_and_subnet_routes_dev" {
  source = "../../"

  # `dev` account can access the Transit Gateway in the `network` account since we shared the Transit Gateway with the Organization using Resource Access Manager
  existing_transit_gateway_id             = module.transit_gateway.transit_gateway_id
  existing_transit_gateway_route_table_id = module.transit_gateway.transit_gateway_route_table_id

  create_transit_gateway                                         = false
  create_transit_gateway_route_table                             = false
  create_transit_gateway_vpc_attachment                          = true
  create_transit_gateway_route_table_association_and_propagation = false

  config = {
    dev = {
      vpc_id                            = module.vpc_dev.vpc_id
      vpc_cidr                          = module.vpc_dev.vpc_cidr_block
      subnet_ids                        = module.subnets_dev.private_subnet_ids
      subnet_route_table_ids            = module.subnets_dev.private_route_table_ids
      route_to                          = null
      route_to_cidr_blocks              = null
      static_routes                     = null
      transit_gateway_vpc_attachment_id = null
    }
  }

  context = module.this.context

  providers = {
    aws = aws.dev
  }
}

Frank avatar

module.transit_gateway has a dependency on module.transit_gateway_vpc_attachments_and_subnet_routes_prod and vice versa

Frank avatar

wouldn’t this cause circular dependencies?

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

there are no circular dependencies

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)
  create_transit_gateway                                         = true
Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)
  create_transit_gateway_vpc_attachment                          = true
Frank avatar

thanks for your quick reply! gonna try it out!

Frank avatar
│ Error: Invalid count argument
│ 
│   on .terraform/modules/transit_gateway_vpc_attachments_and_subnet_routes_prod/main.tf line 33, in data "aws_ec2_transit_gateway" "this":
│   33:   count = local.lookup_transit_gateway ? 1 : 0
│ 
│ The "count" value depends on resource attributes that cannot be determined
│ until apply, so Terraform cannot predict how many instances will be
│ created. To work around this, use the -target argument to first apply only
│ 
Frank avatar

it doesn’t work anymore after version 0.4.1

RB avatar

@Frank hmmm that may be a bug. are you giving it an existing transit gateway id?

https://github.com/cloudposse/terraform-aws-transit-gateway/blob/ef27722d92a9871e47ff9570900ef780f2481a01/main.tf#L10

usually when we see this issue triggered by a string variable type, we change it to a single item list of string to get around this error

Frank avatar

@RB only for module.transit_gateway_vpc_attachments_and_subnet_routes_shared, like in the multi-account example

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

the multi-account example is not automatically deployed to AWS by terratest so it could be out of date

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

did you try this example https://github.com/cloudposse/terraform-aws-transit-gateway/tree/master/examples/complete - it’s deployed to AWS on each open PR so it should work w/o issues

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

@Frank

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

in general, in situations like

The "count" value depends on resource attributes that cannot be determined until apply
Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

we can split the TF code into 2 components and apply them separately in two steps

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

(this is not an easy error to solve; it depends on many factors, on all your other code and the dependencies between it. In some instances the same code might work, but just adding some other resources/deps could break it)

Frank avatar

thanks @Andriy Knysh (Cloud Posse)! I’ll try to split up the components and see if that works out; I haven’t tried the complete example yet. It’s a tricky error to solve indeed

Frank avatar

I tried creating the transit gateway first; that resulted in the same errors.

Frank avatar

I’ve solved it by creating the transit gateway first:

module "transit_gateway" {
  source                     = "cloudposse/transit-gateway/aws"
  version                    = "0.6.1"
  ram_resource_share_enabled = true
  route_keys_enabled = true

  create_transit_gateway                                         = true
  create_transit_gateway_route_table                             = false
  create_transit_gateway_vpc_attachment                          = false
  create_transit_gateway_route_table_association_and_propagation = false

  context = module.this.context
  providers = {
    aws = aws.shared
  }
}
Frank avatar

transit_gateway => routes and association => vpc attachment

Frank avatar

I had to define a depends_on on module.transit_gateway in order to get rid of the count error
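
(A minimal sketch of that fix, assuming the same module layout as the multi-account example above:)

module "transit_gateway_vpc_attachments_and_subnet_routes_prod" {
  source = "cloudposse/transit-gateway/aws"

  # ... attachment inputs as in the example above ...

  # force the TGW to exist before this module computes its counts
  depends_on = [module.transit_gateway]
}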

2022-03-15

steve360 avatar
steve360

Anyone know if an ElastiCache Redis cluster with cluster mode enabled can be changed to cluster mode disabled? Not seeing a clear answer in the AWS docs. We’re trying to do a data migration to a new cluster.

Alex Jurkiewicz avatar
Alex Jurkiewicz

I don’t think so. cluster mode redis is fundamentally different from non-cluster mode redis

Alex Jurkiewicz avatar
Alex Jurkiewicz

you can check with support, this is the exact sort of question they are fantastic at answering

steve360 avatar
steve360

Thanks @Alex Jurkiewicz

2022-03-16

muhaha avatar

Hey, anyone running Fedora CoreOS? Well, it’s not directly related to FCOS, but how are you handling mapping EBS disks when they differ from instance type to instance type? /dev/xvdb vs /dev/nvme1, for example. Thanks
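
(One approach, hedged: on Nitro instances udev creates stable symlinks keyed by the EBS volume ID, which work regardless of whether the kernel names the device xvdb or nvme1n1:)

# each symlink embeds the volume ID (without the dash), e.g.
# nvme-Amazon_Elastic_Block_Store_vol0123456789abcdef0 -> ../../nvme1n1
ls -l /dev/disk/by-id/ | grep Amazon_Elastic_Block_Store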

jonjitsu avatar
jonjitsu

Anyone have any examples with SSM automation creating an aws_ssm_association with an ‘Automation’ document (not Command)?

2022-03-17

2022-03-18

jose.amengual avatar
jose.amengual

EventBridge question: is it possible to create a policy to allow another event bus to send events cross-account?

jose.amengual avatar
jose.amengual

I have seen examples of full account access
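
(For what it’s worth: cross-account delivery is normally enabled with a resource policy on the receiving event bus; a hedged sketch, where the account ID and statement ID are placeholders:)

aws events put-permission \
  --event-bus-name default \
  --statement-id AllowAccountBToPutEvents \
  --action events:PutEvents \
  --principal 222233334444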

jose.amengual avatar
jose.amengual

what

• This feature allows a path to be passed into the module when creating aws_iam_role and aws_iam_policy.

why

• Some company policies have boundary conditions in place that only allow aws_iam_role and aws_iam_policy to be created with a specific naming convention, which includes paths. A name does not allow characters such as forward slashes which may be necessary.

references

• N/A

1
Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

@Yonatan Koren

what

• This feature allows a path to be passed into the module when creating aws_iam_role and aws_iam_policy.

why

• Some company policies have boundary conditions in place that only allow aws_iam_role and aws_iam_policy to be created with a specific naming convention, which includes paths. A name does not allow characters such as forward slashes which may be necessary.

references

• N/A

1
Yonatan Koren avatar
Yonatan Koren

Approved

2022-03-19

tomkinson avatar
tomkinson

Does anyone have experience resetting CloudFormation? We use it for a media CDN and it seems to be in a bad state, but AWS only helps on paid support plans.

https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/using-cfn-updating-stacks-continueupdaterollback.html I worry that trying this won’t actually resolve the issue, because it says the stack will be in an inconsistent state, and I think that is already the case. The error happens in the UPDATE_ROLLBACK_COMPLETE event. I posted about it here https://stackoverflow.com/questions/71380124/how-to-fix-the-following-resources-failed-to-update-cloudfront

An error always occurs when trying to apply the new set of changes, and then rolling back generates the UPDATE_ROLLBACK_* errors.
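
(If the stack does end up in UPDATE_ROLLBACK_FAILED on the same resource every time, continue-update-rollback can skip it; a hedged sketch, where the stack name and logical resource ID are placeholders for whatever is stuck:)

aws cloudformation continue-update-rollback \
  --stack-name my-media-cdn \
  --resources-to-skip MyCloudFrontDistribution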

Continue rolling back an update - AWS CloudFormation

For a stack in the UPDATE_ROLLBACK_FAILED state, fix the error that caused the failure and continue rolling back the update to return the stack to a working state (UPDATE_ROLLBACK_COMPLETE).

How to fix "The following resource(s) failed to update: [Cloudfront]."

Update There are other errors that show up after this error, but it seems clear to me (based on the timestamps, descriptions and affected resources) that they’re just failures that happen because t…

loren avatar

When writing a lambda intended to be invoked by a scheduled event, do you prefer to make it configurable via the event data structure, or via environment variables?

loren avatar

The former supports one lambda function invoked by separate event rules for each configuration. The latter requires a separate lambda function for each configuration…

jose.amengual avatar
jose.amengual

is there a way to force lambda to connect to https endpoints only?

jose.amengual avatar
jose.amengual

I have a lambda that for some reason can only connect to https

jose.amengual avatar
jose.amengual

if you change the config of the endpoint to http, it still tries to connect to https

jose.amengual avatar
jose.amengual

the endpoints are within the VPC

2022-03-20

2022-03-21

Milosb avatar

is it possible to use the application load balancer controller with a self-managed Kubernetes cluster (not EKS)?

venkata.mutyala avatar
venkata.mutyala

I’ve never done it before, but depending on how you did the self-managed install you might need to install some additional drivers so that your cluster is aware it’s running on AWS.

venkata.mutyala avatar
venkata.mutyala

Beyond that, I imagine with the right IAM roles it will just work…

1
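
(A hedged sketch of installing the controller on a non-EKS cluster: since the controller can’t discover these from EKS, the cluster name, region, and VPC ID must be supplied explicitly, and the values here are placeholders:)

helm repo add eks https://aws.github.io/eks-charts
helm upgrade --install aws-load-balancer-controller eks/aws-load-balancer-controller \
  --namespace kube-system \
  --set clusterName=my-cluster \
  --set region=us-east-1 \
  --set vpcId=vpc-0123456789abcdef0
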
azec avatar

Hi there! I’ve got an architectural question that I couldn’t find the answer to myself. I have:

• Aurora PostgreSQL in VPC A with all DB instances in private subnets, in AWS account 1
• AWS Lambda function in VPC B within private subnets, in AWS account 1
• K8S EKS in VPC C, in AWS account 1
• Transit GW with route propagation and attachments for all 3 VPCs (A, B, C), in AWS account 2

The DB needs to trigger Lambda using the aws_lambda() extension for PostgreSQL. When triggered, Lambda needs to make a REST API call via HTTP to a Service running in K8S. The DNS for that K8S Service is exposed via External DNS as a record in a Route53 public hosted zone, so the LB DNS is resolvable from anywhere (including the public Internet as well as the Lambda VPC). The LB itself is however internal, and its IPs are within the VPC C CIDR range. Based on all existing route tables for the private subnets of the Lambda ENI attachments, I doubt there will be problems routing traffic from Lambda to that K8S Service (exposed via the LB).

However, the more challenging part for me is connectivity between the DB (in VPC A) and Lambda (in VPC B). I was reading the docs on invoking an AWS Lambda function from an Aurora PostgreSQL DB cluster, and they mention that for DB instances in private subnets there are two approaches: a) using NAT Gateways, b) using VPC endpoints. But they don’t elaborate on how NAT Gateways could be used in this scenario; they just outline the steps needed to accomplish this connectivity using option (b), VPC endpoints. While I don’t have a problem taking approach (b), I would like to understand how I could invoke Lambda without the public API endpoint even without using (b), considering I already have routing among all these VPCs established using the transit gateway.

If anyone has done something similar or has some hints based on the past experiences, I would appreciate chatting about it! Thanks!

Invoking an AWS Lambda function from an Aurora PostgreSQL DB cluster - Amazon Aurora

Invoke an AWS Lambda function from an Aurora PostgreSQL DB cluster .

azec avatar

Per Lambda VPC updates in 2019 (see: https://go.aws/3JyDitA ), Lambda service still creates ENI in customer VPC when Lambda function has in-VPC configuration, but the IPs of the Hyperplane ENIs managed by AWS are ephemeral. In this architecture invocation of AWS Lambda functions (regardless of the style of invocation) is still a public API call. That means that Aurora PostgreSQL DB instances living in private subnets would still need route to the public internet to invoke Lambda function via public API. This route for DB private subnets does exist, however, it is routed via Transit Gateway in another AWS account. This means that eventually traffic from TGW hosting VPC would exit AWS root account boundaries and Lambda invocation would still happen via the Internet (public Lambda service API endpoints). Additionally, this includes multiple chargebacks for traffic routing between our accounts and eventually for the public API requests (despite being public Lambda service API endpoints). Therefore it still seems that required solution (as advocated by AWS docs linked above) is still to introduce VPC Endpoints for Lambda service inside VPC hosting Aurora PostgreSQL DB instances.

Announcing improved VPC networking for AWS Lambda functions | Amazon Web Services

September 8, 2021: Amazon Elasticsearch Service has been renamed to Amazon OpenSearch Service. See details. Update – August 5, 2020: We have fully rolled out the changes to the following additional Regions to those mentioned below. These improvements are now available in the AWS China (Beijing) Region, operated by Sinnet and the AWS China (Ningxia) […]

azec avatar

Ended up doing something like this … but still need to verify everything will work….

# Below module deploys VPC Endpoint (of type Interface) for invoking AWS Lambda service via private network (uses AWS PrivateLink) from within
# the private subnets in which Aurora PostgreSQL DB instances are deployed (in DB VPC). Despite this being set of networking resources,
# we opted-out from placing this module in Terraform profiles/vpc for VPC, because having VPC Endpoints for Lambda service is not requirement in each of our VPCs.
# It is so far only a requirement for a VPC hosting Aurora PostgreSQL DBs.
module "lambda_service_vpc_endpoints" {
  source  = "cloudposse/vpc/aws//modules/vpc-endpoints"
  version = "0.28.1"

  context = module.this.context

  interface_vpc_endpoints = tomap({
    "lambda" = {
      # Required
      # NOTE: Regarding 'name' attribute below, we have to feed only AWS service name as it relies on data source 'aws_vpc_endpoint_service' to lookup regional service endpoint.
      # See this for reference:
      #  1) <https://github.com/cloudposse/terraform-aws-vpc/blob/0.28.1/modules/vpc-endpoints/main.tf#L11-L15>
      #  2) <https://github.com/cloudposse/terraform-aws-vpc/blob/0.28.1/modules/vpc-endpoints/main.tf#L49>
      # name = "com.amazonaws.${var.region}.lambda"
      name = "lambda"
      # NOTE: Regarding 'security_group_ids' attribute below - the default SG for VPC allows all inbound and outbound traffic
      #       which is desirable. For now, we don't want to control access for individual and specific Lambda functions (via ARNs)
      #       to Lambda service VPC endpoint interfaces via Security Groups.
      security_group_ids  = [var.vpc_resources.default_security_group_id]
      private_dns_enabled = true

      # Optional
      # NOTE: Regarding 'policy' attribute below - since the default is to allow full access to the endpoint service (Lambda),
      #       we will rely only on RDS IAM Role for control.
      # DOCS:
      #       1) <https://docs.aws.amazon.com/vpc/latest/privatelink/vpc-endpoints-access.html#vpc-endpoint-policies>
      #       2) <https://docs.aws.amazon.com/lambda/latest/dg/configuration-vpc-endpoints.html#vpc-endpoint-policy>
      policy     = null
      subnet_ids = var.vpc_resources.private_subnet_ids
    }
  })
  vpc_id = var.vpc_resources.vpc_id
}
Alan Kis avatar
Alan Kis

Bit late here, but in short: by having either a NAT gateway or VPC endpoints you are basically routing your request from a resource in a private subnet to the AWS Lambda endpoint.

That endpoint is reachable either via a public route (NAT Gateway) or via a private route (VPC endpoint), in which case traffic never leaves the VPC.

2022-03-22

Gabriel avatar
Gabriel

Hi there, I was wondering if any of you use spot instances for EKS worker nodes? If yes, do you manage it yourself or do you use some software? I see, for example, this OSS project https://github.com/cloudutil/AutoSpotting and I wonder if a paid service is worth it compared to an open-source solution?

cloudutil/AutoSpotting

Saves up to 90% of AWS EC2 costs by automating the use of spot instances on existing AutoScaling groups. Installs in minutes using CloudFormation or Terraform. Convenient to deploy at scale using StackSets. Uses tagging to avoid launch configuration changes. Automated spot termination handling. Reliable fallback to on-demand instances.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)
Introducing Karpenter – An Open-Source High-Performance Kubernetes Cluster Autoscaler | Amazon Web Services

Today we are announcing that Karpenter is ready for production. Karpenter is an open-source, flexible, high-performance Kubernetes cluster autoscaler built with AWS. It helps improve your application availability and cluster efficiency by rapidly launching right-sized compute resources in response to changing application load. Karpenter also provides just-in-time compute resources to meet your application’s needs and […]

1
momot.nick avatar
momot.nick

I have a Windows image that I use for hosting IIS on EC2.

Recently I’ve been trying to automate the image build using packer, which managed to build a Windows AMI with IIS installed and setup on it.

However, launching this AMI seems to make the instance unreachable - the EC2 system log is blank and SSM no longer recognizes the instance.

Has anyone had this issue before?

I had to add a user_data_file to the Packer build to get WinRM to connect during the build. I suspect this is where the issue stems from, but I haven’t been able to figure out why.
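
(One common cause, hedged: if the AMI is captured without re-arming instance initialization, launches of it never run EC2Launch, which would match the blank system log and the missing SSM registration. With EC2Launch v1 on Windows Server 2016/2019, a final Packer provisioner along these lines is typically needed:)

provisioner "powershell" {
  inline = [
    # re-arm EC2Launch so the next boot runs initialization (user data, SSM, console log)
    "& 'C:/ProgramData/Amazon/EC2-Windows/Launch/Scripts/InitializeInstance.ps1' -Schedule",
    # generalize the image; -NoShutdown lets Packer handle stopping the instance
    "& 'C:/ProgramData/Amazon/EC2-Windows/Launch/Scripts/SysprepInstance.ps1' -NoShutdown"
  ]
}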

tomas avatar

Hello, I want to ask why I see DKIM SES records in Route53. I’m not using SES or any domain validation. Could you please advise?

Bhavik Patel avatar
Bhavik Patel

I’m currently storing my .env variables for my ECS instance via an S3 file. We just added an RSA key and it’s a huge pain in the butt to include this as an inline value or pad it with new lines. Does anyone have any other recommendations?

I was thinking about storing it into SSM and have the value pulled in programmatically with boto3

Michael Galey avatar
Michael Galey

SSM + https://github.com/segmentio/chamber is great. chamber exec dev1/applicationname -- python main.py would pull all those env vars within that session, without leaving anything local on the instance

segmentio/chamber

CLI for managing secrets

Bhavik Patel avatar
Bhavik Patel

Is this package necessary? It seems like ECS is able to pull in SSM vars if permissions are set up properly

Michael Galey avatar
Michael Galey

maybe so; there are a lot of situations where it’s useful

Matt Gowie avatar
Matt Gowie

Chamber is great if you want portability and want to be able to move your workloads to K8s one day or something along those lines but don’t want to configure them differently.
I was thinking about storing it into SSM and have the value pulled in programmatically with boto3
Why use boto3 if you can ref the environment variable or secret in the task def (which you mentioned above)? That is the way to do it properly.
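
(A minimal sketch of that in the container definition; the parameter name and ARN are placeholders, and the task execution role needs ssm:GetParameters on the parameter:)

"secrets": [
  {
    "name": "RSA_PRIVATE_KEY",
    "valueFrom": "arn:aws:ssm:us-east-1:123456789012:parameter/myapp/rsa-private-key"
  }
]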

Bhavik Patel avatar
Bhavik Patel


Why use boto3 if you can ref the environment variable or secret in the task def (which you mentioned above)? That is the way to do it properly.
- not planning on using boto anymore

2
Bhavik Patel avatar
Bhavik Patel

Do any of you have experience doing this within a private VPC? Apparently when you use the latest Fargate platform version, 1.4.0, AWS has changed the networking model, and it’s necessary for me to either make the instance public (not possible with our configuration) or use a service like AWS PrivateLink

DaniC (he/him) avatar
DaniC (he/him)

Hi folks, is anyone in a situation where they need to run, from a central place (multiple distributed teams), various SQL queries against many RDS instances scattered across a few regions?

One option I am considering is maybe having a bastion/Cloud9 in a VPC peered with all the other VPCs so we can have access to the RDS private subnets. Trouble is how do I auth folks, etc.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

What are the queries doing?

DaniC (he/him) avatar
DaniC (he/him)

various read-only selects for Ops/Dev folks to look up data should they need to dig into a problem

azec avatar

It seems that the RDS module is not very flexible when it comes to iam_roles handling (see https://github.com/cloudposse/terraform-aws-rds-cluster/blob/0.50.2/main.tf#L86) when trying to bring your own IAM role associations after the cluster is created, using the aws_rds_cluster_role_association resource.

The docs on aws_rds_cluster resource do have this note:

NOTE on RDS Clusters and RDS Cluster Role Associations: Terraform provides both a standalone RDS Cluster Role Association (an association between an RDS Cluster and a single IAM Role) and an RDS Cluster resource with an iam_roles attribute. Use one resource or the other to associate IAM Roles and RDS Clusters; not doing so will cause a conflict of associations and will result in the association being overwritten.

I have a situation where we are not even passing the iam_roles var to cloudposse/rds-cluster/aws:0.46.2, but I did add a new IAM role with permissions to trigger Lambda from PostgreSQL and then associated that role with the RDS cluster using the aws_rds_cluster_role_association resource. The resulting situation is:

  1. On the first apply, the association is fine; I see the IAM role was added to the RDS cluster and the aws_rds_cluster_role_association resource was created.
  2. On the consecutive apply, the RDS cluster sees a change: from the real infrastructure it picks up the association, but since iam_roles has a bad default (empty list []), TF computes that as a removal and wants to tear down the IAM role association. The aws_rds_cluster_role_association resource doesn’t render any changes in the plan (it still exists). Proceeding with apply, it fails with:
    Error: DBClusterRoleNotFound: Role ARN arn:aws:iam::<REDACTED>:role/<REDACTED> cannot be found for DB Cluster: <REDACTED>. Verify your role ARN and try again. You might need to include the feature-name parameter.
     status code: 404, request id: 039e4bb0-6091-4c19-9b7d-e63472ec859e
azec avatar

However, if we do pass iam_roles … like this….

iam_roles = [
  aws_iam_role.rds_lambda_invoke_feature_role.arn
]

we get another error:

Error: Cycle: module.rds_cluster.aws_rds_cluster.secondary, module.rds_cluster.aws_rds_cluster.primary, module.rds_cluster.output.cluster_identifier (expand), aws_iam_role.rds_lambda_invoke_feature_role, module.rds_cluster.var.iam_roles (expand)

So something is not quite right with how iam_roles is handled inside this module.

RB avatar

this might be worth a ticket @azec please create one in the repo with all of the necessary module inputs and we’ll investigate further

azec avatar

Where can I open a ticket @RB? I have never done this, mostly opened some PRs directly ….

azec avatar

Ah gotcha!

2022-03-23

2022-03-24

Andy avatar

Does anyone have a recommendation for how they size their subnets for an EKS cluster? e.g for a /19 in us-east-1 with 3x AZs I was considering using something like:

# Only really going to have one public NLB here
10.136.0.0/25   - public - 126 hosts
10.136.0.128/25 - public - 126
10.136.1.0/25   - public - 126
10.136.1.128/25 - spare

10.136.2.0/23   - spare

10.136.4.0/22   - private - 1,022 hosts
10.136.8.0/22   - private - 1,022
10.136.12.0/22  - private - 1,022
10.136.16.0/22  - spare

10.136.20.0/24  - db - 254
10.136.21.0/24  - db - 254
10.136.22.0/24  - db - 254
10.136.23.0/24  - spare

10.136.24.0/24  - elasticache - 254
10.136.25.0/24  - elasticache - 254
10.136.26.0/24  - elasticache - 254
10.136.27.0/24  - spare
10.136.28.0/22  - spare

My approach here was just trying to get big private subnets for the EKS nodes, and then fit in the other subnets around this.

Andy avatar

As a secondary question, is there a nice tool for calculating multiple subnet sizes given a starting VPC CIDR?

Lucky avatar


As a secondary question is there a nice tool for calculating multiple subnet sizes given a starting VPC CIDR?
https://tidalmigrations.com/subnet-builder/ might be able to help you here

VPC Subnet Builder

Designing a new AWS VPC, Google VPC or Azure VNET? This subnet builder lets you graphically plan your cloud IP Address space with ease.

Andy avatar

Thanks @Lucky just what I was after
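
(Terraform’s built-in cidrsubnets() can also carve up a block in one expression; a sketch against the /19 above, noting that allocation is consecutive and skips ahead where alignment requires it:)

locals {
  # newbits are relative to the /19: 6 -> /25, 3 -> /22, 5 -> /24
  subnet_cidrs = cidrsubnets("10.136.0.0/19", 6, 6, 6, 3, 3, 3, 5, 5, 5)
}

output "subnet_cidrs" {
  value = local.subnet_cidrs
}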

Alex Jurkiewicz avatar
Alex Jurkiewicz

Why a /19? Start with a whole /16 and you won’t have nearly so much worry about running out of space

Andy avatar

@Alex Jurkiewicz - it’s a company policy thing to use /19

Wil avatar

Howdy everybody, I’m using the cloudposse/terraform-aws-ec2-instance module.

It’s working great, however I’d like to turn on instance_metadata_tags inside metadata_options.

Anyone have a suggestion on how I could go about this?

Defaults leave me with this in terraform plan

      + metadata_options {
          + http_endpoint               = "enabled"
          + http_put_response_hop_limit = 2
          + http_tokens                 = "required"
          + instance_metadata_tags      = "disabled"
        }
Wil avatar

Aw screw it, filed a pull request to fix it. https://github.com/cloudposse/terraform-aws-ec2-instance/pull/122

what

• This adds in an option to turn on the metadata_tags (or off, the default) inside the aws_instance metadata_options.

why

• There are options for http_endpoint, http_put_response_hop_limit and http_tokens already but not for metadata_tags. This adds that functionality.

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

thanks @Wil, one comment

what

• This adds in an option to turn on the metadata_tags (or off, the default) inside the aws_instance metadata_options.

why

• There are options for http_endpoint, http_put_response_hop_limit and http_tokens already but not for metadata_tags. This adds that functionality.

Wil avatar

Ah, I did that 3 times!

Wil avatar

Pushed the changes, thank you.

Wil avatar

One note: that enables this feature by default, which is a change in functionality. I should change it to default to off so there’s an option to enable it.

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

yes please set it to false by default

Wil avatar

Some workflows stuff came in from the fork, would you like me to leave that?

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

the GH workflows are auto-updated

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

GH actions are having issues now, so we’ll wait for the tests to start and finish, then will merge

maarten avatar
maarten

Hello everyone. Does anyone know if conditions like kms:ViaService work cross-account for KMS? It doesn’t seem to work. (Context: I have a central KMS acct with EKS in other accounts. Current policies are open to arn::::root but I would like to tighten it more.)

Alan Kis avatar
Alan Kis

It works, at least for the scenario where a KMS key is used to encrypt the AWS Backup vault in a centralized account. I could restrict to the OU instead of using * for the principal, but I would need to test that.

 statement {
    sid = "Allow attachment of persistent resources"
    actions = [
      "kms:CreateGrant",
      "kms:ListGrants",
      "kms:RevokeGrant"
    ]
    resources = ["*"]

    principals {
      type        = "AWS"
      identifiers = ["*"]
    }

    condition {
      test     = "StringLike"
      variable = "kms:ViaService"
      values = [
        "rds.${local.region}.amazonaws.com",
        "backup.${local.region}.amazonaws.com"
      ]
    }

    condition {
      test     = "Bool"
      variable = "kms:GrantIsForAWSResource"
      values   = [true]
    }
  }

That effectively allows usage of KMS key in centralized backup account to other accounts.

2022-03-25

2022-03-28

Thomas Hoefkens avatar
Thomas Hoefkens

Hello everyone, do you have any idea how Cognito can return the original request URL to our Angular app?

• A user receives an email with a deep link to our web app protected by Cognito e.g. https://webapp.com/reporting?reportingId=123

• The user clicks the link and is redirected to Cognito: the URL then looks like https://auth.webapp.com/login?client_id=xxx&response_type=code&redirect_uri=https://webapp.com/auth#/reporting?reportingId=123

• After entering the user ID and password on the Cognito-provided login screen, I can see a first request is made against this URL: https://auth.webapp.com/login?client_id=xxx&response_type=code&redirect_uri=https://webapp.com/auth As you may notice, the real deep link is already lost in step 3 and then not passed on in the next step to https://webapp.com/auth?code=56565ab47-xxxx

Could you point me to how to get the original redirect URI back and take the user to the deep link?
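
(One standard approach, hedged: OAuth2’s state parameter is round-tripped unchanged by the Cognito hosted UI, so the app can stash the deep link there before redirecting and restore it after the code exchange; e.g.:)

https://auth.webapp.com/oauth2/authorize?client_id=xxx&response_type=code&redirect_uri=https://webapp.com/auth&state=%2Freporting%3FreportingId%3D123

On return to https://webapp.com/auth?code=...&state=%2Freporting%3FreportingId%3D123, decode state and navigate to it.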

azec avatar

hey @Thomas Hoefkens! This is some very deep stuff and unfortunately I haven’t worked with Cognito (yet). You might have better luck with posting your question in https://repost.aws/, specifically under following topics:

  1. Serverless
  2. Security Identity & Compliance
  3. https://repost.aws/tags/TAkhAE7QaGSoKZwd6utGhGDA/amazon-cognito
Get expert technical guidance from community-driven Q&A - AWS re:Post

re:Post is the only AWS-managed Q&A community that provides expert-reviewed answers to help with AWS technical questions.

Find Answers to AWS Questions about Amazon Cognito | AWS re:Post

Browse through Amazon Cognito questions or showcase your expertise by answering unanswered questions.

Aumkar Prajapati avatar
Aumkar Prajapati

Hey all, dealing with a rather strange issue with our kOps-based Kubernetes cluster’s Auto Scaling group in AWS. Basically the ASG itself is terminating nodes as a form of balancing it’s trying to do between AZs. The thing is, AWS is terminating these nodes, which means whatever is running on those nodes is basically shut down and restarted, which is not ideal because this is a prod cluster. Our cluster-autoscaler already runs in the cluster and occasionally scales things up and down in a controlled manner, while AWS is doing its own form of balancing that appears to be more reckless with our cluster.

Here’s an example of the error, any ideas on what could be causing this? This seems isolated to one cluster only:

At 2022-03-28T07:57:29Z instances were launched to balance instances in zones ca-central-1b ca-central-1a with other zones resulting in more than desired number of instances in the group. At 2022-03-28T07:57:49Z an instance was taken out of service in response to a difference between desired and actual capacity, shrinking the capacity from 33 to 32. At 2022-03-28T07:57:49Z instance i-xxx was selected for termination.	
Alex Jurkiewicz avatar
Alex Jurkiewicz

If you have your own auto scaling logic working, you probably want to disable the ASG’s equivalent logic (Launch and Terminate). https://docs.aws.amazon.com/autoscaling/ec2/userguide/as-suspend-resume-processes.html

Suspending and resuming a process for an Auto Scaling group - Amazon EC2 Auto Scaling

Suspend and then resume one or more of the standard processes that are built into Amazon EC2 Auto Scaling.

Aumkar Prajapati avatar
Aumkar Prajapati

Yeah, we ended up suspending the AZRebalance process; still looking into why this is happening in the first place, though, and not in our other clusters
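
(For reference, a sketch of that suspension; the ASG name is a placeholder:)

aws autoscaling suspend-processes \
  --auto-scaling-group-name my-kops-nodes \
  --scaling-processes AZRebalance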

Alex Jurkiewicz avatar
Alex Jurkiewicz

The most common reason is that AWS doesn’t have capacity for the instance types you request, and one AZ is floating close to zero capacity. As you scale up there’s no free capacity in one AZ, due to other customers’ pre-existing workloads. Later on, those workloads are stopped, and the rebalance occurs.

Simplest solution is to add more instance types to the ASG, for example m6g, m5, and m4

2022-03-29

2022-03-31

Balazs Varga avatar
Balazs Varga

hello all, do you know of any S3 maintenance tool running in k8s? We have a few folders that we would like to delete after x days. Using the CLI I can write a script, but it’s better if there is a tool, maybe with a Helm chart. Thanks

Josh Holloway avatar
Josh Holloway

Can you not use a lifecycle policy?

Josh Holloway avatar
Josh Holloway
Managing your storage lifecycle - Amazon Simple Storage Service

Use Amazon S3 to manage your objects so that they are stored cost effectively throughout their lifecycle.

Balazs Varga avatar
Balazs Varga

no, because MFA delete protection has been set on that bucket. That is my issue; that’s why etcd-manager cannot delete old backups

Balazs Varga avatar
Balazs Varga

FYI… I just created a CronJob that uses the AWS CLI, and with that I can delete the files without issue
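
(A hedged sketch of what such a cleanup script can look like; the bucket, prefix, and retention are placeholders, and GNU date is assumed, as in the amazon/aws-cli image:)

#!/bin/bash
set -euo pipefail

BUCKET="my-bucket"
PREFIX="etcd-backups/"
cutoff=$(date -d "-30 days" +%s)

# list every object under the prefix and delete those older than the cutoff
aws s3api list-objects-v2 --bucket "$BUCKET" --prefix "$PREFIX" \
  --query 'Contents[].[Key,LastModified]' --output text |
while read -r key modified; do
  [ "$key" = "None" ] && continue   # an empty prefix prints "None" in text output
  if [ "$(date -d "$modified" +%s)" -lt "$cutoff" ]; then
    aws s3 rm "s3://$BUCKET/$key"
  fi
done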


Balazs Varga avatar
Balazs Varga

we use serverless v1. The version is really old (2.7.01). Is there a way to use the latest 2.10.02?

Johnmary avatar
Johnmary

Hello everyone, I need help please. I have a Terraform script that creates a MongoDB cluster and nodes, but I have an issue when I want to move a node from one region to another or remove a node from a region. For example, I have 3 nodes in 3 regions but now want only 3 nodes in 2 regions; Terraform wants to delete the whole cluster and create a new one based on the update, and as such I will lose data. I want Terraform to be able to do that update without destroying the cluster. Any help will be appreciated.

jose.amengual avatar
jose.amengual

you can’t change the region of a cluster without destroying it

jose.amengual avatar
jose.amengual

a region is like another Datacenter

Johnmary avatar
Johnmary

Is it not possible to keep the cluster and only delete the node in that region, or add one to another region?

jose.amengual avatar
jose.amengual

I do not think so

jose.amengual avatar
jose.amengual

Regional products are created in that region

jose.amengual avatar
jose.amengual

the global cloud is a myth

kareem.shahin avatar
kareem.shahin

Anyone ever tried using DMS or any other AWS offering to make a copy of a prod database with obfuscated and redacted data?

1
Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Yea, I would be interested to know too

kareem.shahin avatar
kareem.shahin

Right? The alternative is just snapshots and scripting to achieve this, but a managed solution would be appealing

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

ya, would love that. It would be really nice for our customers. Inevitably the obfuscation would be a bit custom, but we would like to have a turnkey pattern we could implement.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

That said, anyone who has tried using DMS with Terraform will know it’s a real PIA. You can’t modify running jobs, and there’s no way to natively stop jobs in Terraform.
