#aws (2021-09)

aws Discussion related to Amazon Web Services (AWS) Archive: https://archive.sweetops.com/aws/

2021-09-19

Ozzy Aluyi avatar
Ozzy Aluyi

Hi All, anyone know why my targets are stuck?

Ozzy Aluyi avatar
Ozzy Aluyi
Target registration is in progress
Ozzy Aluyi avatar
Ozzy Aluyi

it’s been trying to register for over an hour now.

Ozzy Aluyi avatar
Ozzy Aluyi

any fix/solution will be appreciated.

venkata.mutyala avatar
venkata.mutyala

Are the health checks passing?

venkata.mutyala avatar
venkata.mutyala

Have you spot checked the health checks as being valid/working?

Ozzy Aluyi avatar
Ozzy Aluyi

currently what it looks like

Ozzy Aluyi avatar
Ozzy Aluyi

it was failing earlier.

Ozzy Aluyi avatar
Ozzy Aluyi

now it is stuck on registering

venkata.mutyala avatar
venkata.mutyala

I would suggest reaching out to their support if you haven’t already. Likely they will be able to spot the problem easily. Could very well be on their end too.
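For anyone hitting the same thing, a quick way to see why targets sit in registration is to ask the target group directly for per-target state and reason codes. A minimal sketch (the target group ARN is a placeholder):

# Show each target's state plus the reason/description the load balancer reports (ARN is a placeholder).
aws elbv2 describe-target-health \
  --target-group-arn arn:aws:elasticloadbalancing:us-east-1:123456789012:targetgroup/my-tg/0123456789abcdef \
  --query 'TargetHealthDescriptions[].[Target.Id,TargetHealth.State,TargetHealth.Reason,TargetHealth.Description]' \
  --output table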

2021-09-17

Shreyank Sharma avatar
Shreyank Sharma

Hi,

Is it possible to add a custom endpoint for the AWS Kinesis Signaling Stream endpoint (kinesis.us-east-1.amazonaws.com)?

I tried installing nginx on an EC2 instance and reverse proxying (customendpoint -> kinesis.us-east-1.amazonaws.com), and used certbot to issue a certificate for my custom endpoint, but the app is giving https://<custom-domain>/describeSignalingChannel 404 (Not Found)

Thanks
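One thing worth checking: if the proxy really forwards to kinesis.us-east-1.amazonaws.com, that alone could explain the 404, since DescribeSignalingChannel belongs to the Kinesis Video Streams endpoint (kinesisvideo.<region>.amazonaws.com) rather than Kinesis Data Streams. Also, most SDKs and the CLI can be pointed at a custom endpoint directly instead of fronting it with nginx. A sketch (the channel name and custom domain are placeholders):

# Override the endpoint instead of relying on a reverse proxy (channel name and domain are placeholders).
aws kinesisvideo describe-signaling-channel \
  --channel-name my-channel \
  --endpoint-url https://my-custom-endpoint.example.com \
  --region us-east-1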

2021-09-16

Antarr Byrd avatar
Antarr Byrd

I’m trying out Kinesis using CloudFormation. I’m getting failed invocations when my scheduler invokes the Lambda, but nothing is showing up in the CloudWatch logs. Any ideas how to handle/fix this?

AWSTemplateFormatVersion: "2010-09-09"
Description: "Template for AWS Kinesis resources"
Resources:
  DataStream:
    Type: AWS::Kinesis::Stream
    Properties:
      ShardCount: 1
      RetentionPeriodHours: 24
      Name: !Sub ${AWS::StackName}
  Lambda:
    Type: AWS::Lambda::Function
    Properties:
      Role: !Sub arn:aws:iam::${AWS::AccountId}:role/service-role/lambda_basic_execution
      Runtime: python3.6
      FunctionName: !Sub ${AWS::StackName}-lambda
      Handler: index.lambda_handler
      Code:
        ZipFile: |
          import requests
          import boto3
          import uuid
          import time
          import json
          import random

          def lambda_handler(event, context):
            client = boto3.client('kinesis', region_name='${AWS::Region}')
            partition_key = str(uuid.uuid4())
          response = requests.get('https://randomuser.me/api/?exc=login')
            if response.status_code == 200:
              data = json.dumps(response.json())
              client.put_record(
                StreamName='{AWS::StackName}',
                Data=data,
                PartitionKey=partition_key
              )
              print ("Data sent to Kinesis")
            else:
              print('Error: {}'.format(response.status_code))
  Schedule:
    Type: AWS::Events::Rule
    Properties:
      ScheduleExpression: "rate(1 minute)"
      State: ENABLED
      Targets:
        - Arn: !GetAtt Lambda.Arn
          Id: "TargetFunctionV1"
          Input: '{}'
  LogGroup:
    Type: AWS::Logs::LogGroup
    Properties:
      LogGroupName: !Sub /aws/lambda/${AWS::StackName}-lambda
      RetentionInDays: 7
  LogStream:
    Type: AWS::Logs::LogStream
    Properties:
      LogGroupName: !Ref LogGroup
      LogStreamName: !Sub /aws/lambda/${AWS::StackName}-lambda
  PermissionsForEventsToInvokeLambda:
    Type: AWS::Lambda::Permission
    Properties:
      FunctionName: !GetAtt Lambda.Arn
      Action: lambda:InvokeFunction
      Principal: events.amazonaws.com
      SourceArn: !GetAtt DataStream.Arn
Alex Jurkiewicz avatar
Alex Jurkiewicz

You checking the whole log group?

Alex Jurkiewicz avatar
Alex Jurkiewicz

You are creating a log stream but Lambda won’t use that
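To expand on that: Lambda writes to log streams it creates itself under /aws/lambda/<FunctionName>, so the hand-made LogStream resource above won't receive anything. A couple of hedged checks from the CLI (the function name is a placeholder; it may also be worth comparing the permission's SourceArn against the rule ARN rather than the Kinesis stream ARN):

# See whether any invocations produced log streams at all (log group name is a placeholder).
aws logs describe-log-streams \
  --log-group-name /aws/lambda/my-stack-lambda \
  --order-by LastEventTime --descending --max-items 5

# Inspect the resource policy EventBridge needs in order to invoke the function.
aws lambda get-policy --function-name my-stack-lambda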

2021-09-15

Steve Wade (swade1987) avatar
Steve Wade (swade1987)

does anyone have a clean way of authenticating (via kubectl) to EKS when using Azure AD as the OIDC identity provider? not sure if people have hooked up Dex with Gangway to provide a UI for obtaining tokens?
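One pattern for this (a sketch, assuming the int128 kubelogin plugin, i.e. kubectl oidc-login, rather than Dex/Gangway; the tenant and client IDs are placeholders) is to wire an exec credential into the kubeconfig and let it fetch tokens from Azure AD:

# Assumes the kubelogin plugin (kubectl oidc-login) is installed; IDs below are placeholders.
kubectl config set-credentials azure-ad-oidc \
  --exec-api-version=client.authentication.k8s.io/v1beta1 \
  --exec-command=kubectl \
  --exec-arg=oidc-login \
  --exec-arg=get-token \
  --exec-arg=--oidc-issuer-url=https://login.microsoftonline.com/<tenant-id>/v2.0 \
  --exec-arg=--oidc-client-id=<client-id>

# Point the current context at that user.
kubectl config set-context --current --user=azure-ad-oidc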

Andrea Cavagna avatar
Andrea Cavagna

We have an open enhancement in Leapp. Maybe it can help you: https://github.com/Noovolari/leapp/issues/170

Add support for kubeconfig (CLI) integration · Issue #170 · Noovolari/leapp

Is your feature request related to a problem? Please describe. I am a user of kubernetes and of kubectl and eks. At present, kubectl references the aws binary for authentication, which expects cert…

Zach avatar
Zach

Odd, I’m using kubectl and Leapp just fine right now. Oh, this is to have kubectl ask Leapp directly. Huh.

Eric Villa avatar
Eric Villa

Hi @! May I ask you how you have federated Azure AD to AWS?

Steve Wade (swade1987) avatar
Steve Wade (swade1987)

does anyone know if there is a recommended approach to alerting on failed RDS snapshot-to-S3 exports?

Matthew Bonig avatar
Matthew Bonig

CloudWatch Events?

AugustasV avatar
AugustasV

A Lambda function to send an SNS notification to a communication channel like Teams or Slack?
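Along those lines, a minimal polling sketch (the topic ARN is a placeholder; a scheduled Lambda or an EventBridge rule on RDS events would be tidier, this just shows the moving parts):

# List any failed snapshot-to-S3 export tasks and push them to SNS (topic ARN is a placeholder).
failed=$(aws rds describe-export-tasks \
  --query "ExportTasks[?Status=='FAILED'].ExportTaskIdentifier" --output text)
if [ -n "$failed" ]; then
  aws sns publish \
    --topic-arn arn:aws:sns:us-east-1:123456789012:rds-export-alerts \
    --message "Failed RDS S3 export tasks: $failed"
fi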

2021-09-14

AugustasV avatar
AugustasV

Trying to describe instances with /usr/local/bin/aws ec2 describe-instances --instance-ids i-sssf --region --output text --debug and got this:

nmkDIykR/VMOgP+bBmVRcm/QWkCbquedU53R9SAv9deDrjkWkLKuPEnHgu57eGq55K1nFTAVhJ2IG5u5C2IuNKCskgAqz6+JH5fMdlAhYtAzw6FTv+YTi9DFhJaBA9niDk+n2lNhtx/iIbDRNGGCrMXuQbU5hPeHy8ijY6g==', 'Authorization': b'AWS4-HMAC-SHA256 Credential=ASIAUXKPUFZ7UOBXM3GN/20210914/eu-west-1/ec2/aws4_request, SignedHeaders=content-type;host;x-amz-date;x-amz-security-token, Signature=a8d69a78cbf6ac49ba9cc7774d5e9625ec8a2843e7eedeaba2630da7a4a41e1f', 'Content-Length': '76'}>
2021-09-14 14:34:51,592 - MainThread - urllib3.connectionpool - DEBUG - Starting new HTTPS connection (1): ec2.eu-west-1.amazonaws.com:443

it’s a private EC2 instance, why can’t I get the output?

netstat -tnlp | grep :443
tcp        0      0 0.0.0.0:443             0.0.0.0:*               LISTEN      1013/nginx: maste
jose.amengual avatar
jose.amengual

output? what do you mean?

AugustasV avatar
AugustasV

I mean when I run the aws ec2 describe-instances command, I would like to get a result

jose.amengual avatar
jose.amengual

do you have a firewall or something that could be blocking connections?

AugustasV avatar
AugustasV

I think the problem is that it’s a private EC2 instance, right? It doesn’t have a public IP address. Instance metadata is received:

http://169.254.169.254/latest/meta-data/

Using a curl command I got a result

jose.amengual avatar
jose.amengual

the instance should have internet and it should be able to hit the api

jose.amengual avatar
jose.amengual

it has nothing to do with the public ip

jose.amengual avatar
jose.amengual

but usually to get metadata from within an instance you use this address http://169.254.169.254/latest/meta-data/

jose.amengual avatar
jose.amengual

no need to run the cli for that
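If the instance is in a private subnet with no NAT gateway, the EC2 API simply isn’t reachable, which would match the hanging HTTPS connection in the debug output. One option (an assumption, not something confirmed in the thread) is an interface VPC endpoint for the EC2 service; all IDs below are placeholders:

# Interface endpoint so instances in a private subnet can reach the EC2 API (all IDs are placeholders).
aws ec2 create-vpc-endpoint \
  --vpc-id vpc-0123456789abcdef0 \
  --vpc-endpoint-type Interface \
  --service-name com.amazonaws.eu-west-1.ec2 \
  --subnet-ids subnet-0123456789abcdef0 \
  --security-group-ids sg-0123456789abcdef0 \
  --private-dns-enabled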

2021-09-13

Alyson Franklin avatar
Alyson Franklin

Hi all, how’s it going? Do you know if there is any web application that makes it easier to navigate AWS S3?

pjaudiomv avatar
pjaudiomv

easier in regards to what, you mean like for a public bucket?

Alyson Franklin avatar
Alyson Franklin

Yes! I need to give people in the marketing department access to AWS S3

Alyson Franklin avatar
Alyson Franklin

These are people who have no technical knowledge and they need the option to download a full AWS S3 folder

pjaudiomv avatar
pjaudiomv

There’s this, which lets you browse a bucket: https://github.com/awslabs/aws-js-s3-explorer

GitHub - awslabs/aws-js-s3-explorer: AWS JavaScript S3 Explorer is a JavaScript application that uses AWS's JavaScript SDK and S3 APIs to make the contents of an S3 bucket easy to browse via a web browser.

AWS JavaScript S3 Explorer is a JavaScript application that uses AWS's JavaScript SDK and S3 APIs to make the contents of an S3 bucket easy to browse via a web browser. - GitHub - awslabs/aws-…

Alyson Franklin avatar
Alyson Franklin

I recently tested AWS s3 explorer, but it doesn’t have the option to download a full folder.

https://github.com/awslabs/aws-js-s3-explorer

pjaudiomv avatar
pjaudiomv

What about using an app like Cyberduck or something like that?

pjaudiomv avatar
pjaudiomv

this guy seems to have a fork where you can select multiple files https://github.com/awslabs/aws-js-s3-explorer/pull/86

V2 alpha by matthew-ellis · Pull Request #86 · awslabs/aws-js-s3-explorer

Issue #, if available: Description of changes: Add download button to header (only shows when items are selected) Enable download of multiple files at once in a ZIP folder - select items and click…

Alyson Franklin avatar
Alyson Franklin

It’s working perfectly. Thanks a lot for the help!
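For completeness, a whole folder (prefix) can also be pulled with the plain CLI, which may help anyone on the team who is comfortable with a terminal (bucket and prefix are placeholders):

# Download an entire prefix from S3 to a local directory (bucket and prefix are placeholders).
aws s3 sync s3://my-marketing-bucket/campaign-assets/ ./campaign-assets/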


2021-09-11

2021-09-10

Adnan avatar
Adnan

Hi People, wanted to ask about experiences upgrading Kubernetes versions on EKS. I recently did an upgrade from 1.19 to 1.20. After the upgrade some of my workloads are experiencing weird high CPU spikes. But correlation does not equal causation, so I wanted to ask if anyone here experienced something similar.

Max Lobur (Cloud Posse) avatar
Max Lobur (Cloud Posse)

The only change I can think of that could cause this is the Docker deprecation: https://kubernetes.io/blog/2020/12/02/dockershim-faq/

But that’s not included in 1.20 by default, you have to do it separately in a node group. So if you followed the release notes and did it, that must be it.

Dockershim Deprecation FAQ

This document goes over some frequently asked questions regarding the Dockershim deprecation announced as a part of the Kubernetes v1.20 release. For more detail on the deprecation of Docker as a container runtime for Kubernetes kubelets, and what that means, check out the blog post Don’t Panic: Kubernetes and Docker. Why is dockershim being deprecated? Maintaining dockershim has become a heavy burden on the Kubernetes maintainers. The CRI standard was created to reduce this burden and allow smooth interoperability of different container runtimes.

Max Lobur (Cloud Posse) avatar
Max Lobur (Cloud Posse)

Other than that, the k8s version itself (the control plane) has no effect on workload resource consumption; it’s involved only during CRUD of the YAMLs.

Max Lobur (Cloud Posse) avatar
Max Lobur (Cloud Posse)

It must be something else - the AMI version of a worker, the runtime, instance type of a worker, and so on
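A quick way to compare workers before and after the upgrade is to dump what each node reports for runtime, kernel, and OS image:

# Per-node container runtime, kernel and OS image, as reported by the kubelet.
kubectl get nodes -o wide
kubectl get nodes -o custom-columns=NAME:.metadata.name,RUNTIME:.status.nodeInfo.containerRuntimeVersion,KERNEL:.status.nodeInfo.kernelVersion,OS:.status.nodeInfo.osImage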

2021-09-09

Steve Wade (swade1987) avatar
Steve Wade (swade1987)

before I start writing my own … does anyone know of a lambda that takes an RDS snapshot and ships it to S3?

Matthew Bonig avatar
Matthew Bonig

wild, I’m doing that right now.

Matthew Bonig avatar
Matthew Bonig

mysql? postgres? sqlserver? oracle?

Matthew Bonig avatar
Matthew Bonig

rds snapshot or a db-native (like pg_dump or mysqldump)?

Steve Wade (swade1987) avatar
Steve Wade (swade1987)

@ mysql and rds snapshot

Matthew Bonig avatar
Matthew Bonig

gotcha. So what I ended up doing was writing a lambda that did the dump (using pg_dump) and then streamed that to a bucket. I then have another process that reads that object from the bucket and restores it in another database. Nothing is packaged up nicely for distribution yet, but so far it seems to be working ok.

Matthew Bonig avatar
Matthew Bonig

The plan is to have that Lambda be part of a state machine that will backup and restore any database requested from one env to another.

Steve Wade (swade1987) avatar
Steve Wade (swade1987)

nice, that’s pretty cool

Steve Wade (swade1987) avatar
Steve Wade (swade1987)

the dump into S3 makes sense as a lambda

Steve Wade (swade1987) avatar
Steve Wade (swade1987)

what did you write it in?

Matthew Bonig avatar
Matthew Bonig

only concern is that you can’t get the entire database in 15 minutes =-/

Matthew Bonig avatar
Matthew Bonig

nodejs

Steve Wade (swade1987) avatar
Steve Wade (swade1987)

you can’t get it?

Matthew Bonig avatar
Matthew Bonig

yes, I can in my case. I was just saying my only concern about using a lambda is that the database is so big that it couldn’t be dumped and uploaded to s3 within 15 minutes.

Matthew Bonig avatar
Matthew Bonig

generally the runs I’ve been doing in a fairly small database were getting done in just a few minutes (with an 800mb dump file) so we’ll probably be fine. But if you’re trying to do this for some 3 tb database, you’re going to have a bad time with Lambdas

Steve Wade (swade1987) avatar
Steve Wade (swade1987)

Did you notice much size difference between a MySQL dump to S3 and just taking a snapshot @ ? My boss seems to think a dump is over-engineering it for some reason

Steve Wade (swade1987) avatar
Steve Wade (swade1987)

I’ll paste you his points here when I’m at my laptop in about 15 mins if you’re still around

Steve Wade (swade1987) avatar
Steve Wade (swade1987)


We won’t be retaining the snapshots. The RDS snapshots we should use are the automated ones, which we have to keep for contractual and compliance reasons anyway, so there is no additional cost.

Moving this into a sqldump would be over-engineering in my view. RDS has the ability to export a snapshot directly to S3, so why reinvent the wheel? Let’s face it, AWS are pretty good at this stuff, so we should leverage their backup tooling where possible.

The snapshots moved into S3 will need to be retained indefinitely due to the contractual wording…this is being worked on, but won’t change any time soon.

I also don’t want to have a split between HelmReleases and TF. If we can manage this all in one place (which we can) it feels better than splitting it out. As a consumer, having to deploy the infra and then also deploy a HelmRelease feels clunky. Whereas deploying just the RDS instance and its backup strategy as a single unit would be more intuitive.

Steve Wade (swade1987) avatar
Steve Wade (swade1987)

I proposed using a CronJob in our EKS clusters to facilitate the backup

Matthew Bonig avatar
Matthew Bonig

In my case, postgres, so pg_dump. But that was pretty large, since it’s totally uncompressed SQL.

I had looked into snapshot sharing, but since the db was encrypted with a KMS key I couldn’t share with the other account, I couldn’t ship it that way.

Should have looked more for the native S3 integration, but didn’t. Will look now. I don’t know how the encryption would work though. I would assume shipping it to S3 keeps the data encrypted (and needing the same key as the RDS instance)

Matthew Bonig avatar
Matthew Bonig

I use cronjobs in a cluster to backup a mysql and mongodb. Works great.

Matthew Bonig avatar
Matthew Bonig

oh man, totally should have done this s3 export.
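For reference, the native export being discussed looks roughly like this (all identifiers, the bucket, the role, and the KMS key are placeholders); it runs server-side and writes Parquet to S3:

# Sketch of a native RDS snapshot export to S3 (all identifiers/ARNs are placeholders).
aws rds start-export-task \
  --export-task-identifier my-snapshot-export \
  --source-arn arn:aws:rds:eu-west-1:123456789012:snapshot:rds:my-db-2021-09-09 \
  --s3-bucket-name my-rds-exports \
  --iam-role-arn arn:aws:iam::123456789012:role/rds-s3-export-role \
  --kms-key-id arn:aws:kms:eu-west-1:123456789012:key/00000000-0000-0000-0000-000000000000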

Jim Park avatar
Jim Park

I wrote one for Elasticache, to ship elasticache snapshot to another account and restore it. I’ll put together a gist for you. It’s not RDS, but there may be similar semantics.

Jim Park avatar
Jim Park

Actually, scratch that. I haven’t completed open sourcing it, apologies about the false start.

mikesew avatar
mikesew

RDS Q: I made a storage modification, but accidentally set it to apply in the maintenance window. How can I turn around and force it to apply immediately? I’m in storage-full status.

Alex Jurkiewicz avatar
Alex Jurkiewicz

Make another change and tell it to apply immediately

mikesew avatar
mikesew

thanks @ , AWS support pretty much told me the same thing. They said to do it via the CLI, not the console.

aws rds modify-db-instance  \
  --db-instance-identifier my-db-instance-01  \
  --allocated-storage 200  \
  --max-allocated-storage 500  \
  --apply-immediately;
jason einon avatar
jason einon

hey, not sure whether to post here or in terraform… has anyone been able to create an RDS read replica in a different VPC via Terraform? I have been stuck on this for a few days… getting the error:

Error creating DB Instance: InvalidParameterCombination: The DB instance and EC2 security group are in different VPCs.

jason einon avatar
jason einon

i am able to apply the desired config through the console but not through Terraform sadly

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)
terraform the db instance and ec2 security group are in different vpcs

i am trying to create a vpc with public and private subnet along with Aurora mysql cluster and instance in same vpc with custom security group for RDS. i’ve created vpc (public/private subnet, cus…

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

looks like one of them is using the default VPC
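For comparison, the CLI equivalent makes the requirement explicit: the replica needs a DB subnet group and security group IDs that both belong to the target VPC (all names/IDs below are placeholders), which maps to the db_subnet_group_name and vpc_security_group_ids arguments on the Terraform resource:

# Sketch: read replica in another VPC; the subnet group and SGs must live in the target VPC (placeholders).
aws rds create-db-instance-read-replica \
  --db-instance-identifier my-replica \
  --source-db-instance-identifier my-primary \
  --db-subnet-group-name target-vpc-db-subnets \
  --vpc-security-group-ids sg-0123456789abcdef0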

2021-09-08

Santiago Campuzano avatar
Santiago Campuzano

Does anyone know if it’s possible to reserve/allocate a small pool of consecutive Public IP/Elastic IP addresses on AWS? I’ve been searching the documentation with no luck

mikesew avatar
mikesew

Has anybody used AWS Config Advanced Queries (basically, pulling aws describe data using SQL)? I’m trying to pull config data using the AWS CLI, then throw it into a CSV or some other datastore for querying.

aws configservice  select-aggregate-resource-config  \
--configuration-aggregator-name AllAccountsAggregator  \
--expression "
SELECT
  resourceId,
  resourceName,
  resourceType,
  tags,
  relationships,
  configuration.storageEncrypted,
  availabilityZone
WHERE
  resourceType = 'AWS::RDS::DBInstance'
  AND configuration.engine = 'oracle-ee'
  AND resourceName = 'rds-uat-vertex-9'
" \
| jq -r '.'

.. I’m having problems parsing the outputs. This is mainly a jq problem.
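In case it helps: Results comes back as an array of JSON-encoded strings, so each entry needs fromjson before fields can be picked out for CSV. A sketch using a trimmed version of the query above (nested fields like tags/relationships usually need extra flattening):

# Each element of .Results is a JSON string; decode with fromjson, then emit CSV.
aws configservice select-aggregate-resource-config \
  --configuration-aggregator-name AllAccountsAggregator \
  --expression "SELECT resourceId, resourceName, resourceType, availabilityZone WHERE resourceType = 'AWS::RDS::DBInstance'" \
  | jq -r '.Results[] | fromjson | [.resourceId, .resourceName, .resourceType, .availabilityZone] | @csv'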

ccastrapel avatar
ccastrapel

We should chat tomorrow, I have some code in ConsoleMe that parses the nested json config returns

David avatar
David

When using shield, is it best to put protections on a Route53 zone, or an ALB that that zone connects to, or both?

And then the same question with Route53 pointing to CloudFront, API Gateway, etc.

2021-09-07

Almondovar avatar
Almondovar

hi guys, is there any way to automate enabling the EC2 serial console connection on every new EC2 instance I spin up? The commands I am executing for Ubuntu instances are the following:

sudo -i
vi /etc/ssh/sshd_config // and go down to edit line 
passwordAuthentication yes 
// saving with :wq!

systemctl restart sshd
passwd // input password 2 times
Grummfy avatar
Grummfy

you can play with the cloud-init or user data section of your instance
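Along those lines, the manual steps above could be dropped into user data, roughly like this (the password is a placeholder and should come from somewhere safer; note the replies below point out the serial console itself doesn't go through sshd, so the passwd step is the one that matters for it):

#!/bin/bash
# Sketch of a user-data script automating the manual steps above on Ubuntu.
# CHANGE_ME is a placeholder; pull a real secret from SSM/Secrets Manager instead.
sed -i 's/^#\?PasswordAuthentication.*/PasswordAuthentication yes/' /etc/ssh/sshd_config
systemctl restart sshd
echo 'ubuntu:CHANGE_ME' | chpasswd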

Alex Jurkiewicz avatar
Alex Jurkiewicz

Does the virtual console really use sshd??

Alex Jurkiewicz avatar
Alex Jurkiewicz

I would assume a virtual console is using a tty, and bypassing ssh

Carlos Tovar avatar
Carlos Tovar

@ yeah, the EC2 serial console is serial access, not SSH. A handful of EC2 AMIs come preconfigured for it (e.g. Amazon Linux and I think Ubuntu 20). You also need to turn on the service at the AWS account level and use an IAM role/user permissioned to use the service.
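The account-level switch mentioned there can be flipped from the CLI:

# Enable EC2 serial console access for the account, then confirm the setting.
aws ec2 enable-serial-console-access
aws ec2 get-serial-console-access-status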

Almondovar avatar
Almondovar

Hi Carlos, do I understand correctly that the steps I performed are not necessary to enable the serial console connection? TBH, once I followed them they instantly allowed access to the console connection

Carlos Tovar avatar
Carlos Tovar

@ hey, missed your IM. Yes, that is my understanding. But if the changes you made worked, then even better

Steve Wade (swade1987) avatar
Steve Wade (swade1987)

Does anyone know of anything similar to https://github.com/sportradar/aws-azure-login but written in Go?

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Have you checked out using Leapp instead?

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)
Leapp - One step away from your Cloud

Leapp grants to the users the generation of temporary credentials only for accessing the Cloud programmatically.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

We used to use all kinds of scripts, hacks, and tools but leapp has replaced them for us

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

It’s an open source electron app distributed as a single binary

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Cc @

Steve Wade (swade1987) avatar
Steve Wade (swade1987)

interesting @Erik Osterman (Cloud Posse), do you just use the free one?

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Yup

Andrea Cavagna avatar
Andrea Cavagna

Leapp is free for anyone, it’s an open source project. We are going to close the Azure AD and AWS federation pull request this week and cut a release. For any further questions, @ feel free to text me :)

Andrea Cavagna avatar
Andrea Cavagna

The only paid offering for now is enterprise support for the open source project, provided by the maintainers of the app

Btw @Erik Osterman (Cloud Posse), I promise that in the next few weeks I will participate in an office hours session so we can answer any questions about Leapp!

Steve Wade (swade1987) avatar
Steve Wade (swade1987)

I currently have a script (see below) but it seems a little hacky …

#! /usr/bin/env bash

AWS_PROFILE=${1}

AZURE_TENANT_ID="<redacted>"
AZURE_APP_ID_URI="<redacted>"
AZURE_DEFAULT_ROLE_ARN="arn:aws:iam::<redacted>:role/platform-engineer-via-sso"
AZURE_DEFAULT_DURATION_HOURS=1

# Make sure user has necessary tooling installed.
if ! which ag > /dev/null 2>&1; then
  echo 'Please install the_silver_searcher.'
  exit 1
fi

# Run the configuration step if not set.
 # shellcheck disable=SC2046
if [ $(ag azure ~/.aws/config | wc -l) -gt 0 ]; then
  printf "Already configured, continuing ...\n\n"
else
  printf "Use the following values when asked for input ... \n"
  printf "Azure Tenant ID: %s\n" ${AZURE_TENANT_ID}
  printf "Azure App ID URI: %s\n" ${AZURE_APP_ID_URI}
  printf "Default Role ARN: %s\n" ${AZURE_DEFAULT_ROLE_ARN}
  printf "Default Session Duration Hours: %s\n\n" ${AZURE_DEFAULT_DURATION_HOURS}
  docker run -it -v ~/.aws:/root/.aws sportradar/aws-azure-login --configure --profile "$AWS_PROFILE"
fi

# Perform the login.
docker run -it -v ~/.aws:/root/.aws sportradar/aws-azure-login --profile "$AWS_PROFILE"

printf "\nMake sure you now export your AWS_PROFILE as %s\n" "${AWS_PROFILE}"

2021-09-03

Adnan avatar
Adnan

Hi People, has anyone ever had this issue with the AWS ALB Ingress controller:

failed to build LoadBalancer configuration due to failed to resolve 2 qualified subnet with at least 8 free IP Addresses for ALB. Subnets must contains these tags: 'kubernetes.io/cluster/<my-cluster-name>': ['shared' or 'owned'] and 'kubernetes.io/role/elb': ['' or '1']. See https://kubernetes-sigs.github.io/aws-alb-ingress-controller/guide/controller/config/#subnet-auto-discovery for more details.

So there are three subnets with the appropriate tagging and plenty of free IPs; I could not yet find the reason why it is complaining about the subnets
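One way to double-check what the controller can discover is to list subnets by those exact tags and look at the free-IP count (the cluster name is a placeholder):

# Subnets carrying the tags the ALB ingress controller looks for, with their free IP counts (cluster name is a placeholder).
aws ec2 describe-subnets \
  --filters "Name=tag:kubernetes.io/cluster/my-cluster-name,Values=shared,owned" \
            "Name=tag-key,Values=kubernetes.io/role/elb" \
  --query 'Subnets[].[SubnetId,AvailabilityZone,AvailableIpAddressCount]' \
  --output table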


2021-09-02

Almondovar avatar
Almondovar
07:39:33 AM

Hi all, has anyone ever used the EC2 serial console connection? I am getting this message while trying to use it on all of our instances

conzymaher avatar
conzymaher

What EC2 instance type?

conzymaher avatar
conzymaher

The error message indicates you are not using a Nitro-based instance

conzymaher avatar
conzymaher
Amazon EC2 Instance Types - Amazon Web Services

EC2 instance types comprise of varying combinations of CPU, memory, storage, and networking capacity. This gives you the flexibility to choose an instance that best meets your needs.

Almondovar avatar
Almondovar
08:12:30 AM

Thank you Conor, if I understand correctly this Nitro system is for slightly more expensive EC2s; as we use the smallest possible t2, I don’t see it in the table

conzymaher avatar
conzymaher

T3 instances are supported so you could possibly change the instance type. Also https://docs.aws.amazon.com/systems-manager/latest/userguide/session-manager.html is a great alternative option

AWS Systems Manager Session Manager - AWS Systems Manager

Manage instances using an auditable and secure one-click browser-based interactive shell or the AWS CLI without having to open inbound ports.

Almondovar avatar
Almondovar
08:23:55 AM

Thank you Conor, the reason I am asking all these things is that every month, as we install some Python-related packages, our server crashes, and none of the 4 methods works: EC2 Instance Connect and SSH client should work out of the box (but they are unreachable, as the server crashed), Session Manager would have to be configured by me, and EC2 Serial Console says that our instance is not compatible (wants Nitro)

Alex Jurkiewicz avatar
Alex Jurkiewicz

T3 should be the same price as T2. You can even use t3a which is even cheaper

conzymaher avatar
conzymaher

If the server is crashing that easily it’s likely undersized

Almondovar avatar
Almondovar
08:39:14 AM

the day of the crash it reached 34% CPU usage, but I don’t see anything else weird about utilization

conzymaher avatar
conzymaher

You can see your CPU credit balance is getting very low (at that point performance will be throttled). This is not the cause of your issue in this particular case, but worth watching

conzymaher avatar
conzymaher

It’s likely memory exhaustion

conzymaher avatar
conzymaher

What instance type is it exactly?

Almondovar avatar
Almondovar

t2.micro

conzymaher avatar
conzymaher

Not really suitable for most production workloads unless they are incredibly small / bursty

conzymaher avatar
conzymaher

This is an “insert credit card” fix

conzymaher avatar
conzymaher

Try a t2.medium for a while and see how it goes

Almondovar avatar
Almondovar
08:47:39 AM

what if I change to a t3a.micro like Alex suggested above? btw how is it possible to have double the CPU but be even cheaper? is it because of the switch to AMD?

conzymaher avatar
conzymaher

I doubt it will make any difference if memory consumption is the problem

Almondovar avatar
Almondovar

aha, now I noticed that RAM status is not showing in any graphs, right?
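EC2 doesn't publish memory metrics on its own; they only appear if the CloudWatch agent is installed on the instance. A quick check (metric and namespace assume the agent's default config):

# Memory metrics exist only if the CloudWatch agent publishes them (CWAgent namespace, default config).
aws cloudwatch list-metrics --namespace CWAgent --metric-name mem_used_percent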

Almondovar avatar
Almondovar
09:06:26 AM

by the way, the serial console didn’t help either, only a black screen appears

Almondovar avatar
Almondovar
09:06:42 AM
Almondovar avatar
Almondovar
09:07:31 AM

there is a bar going up to the right of the screen

Daniel Huesca avatar
Daniel Huesca

Hello everybody!

AWS DocumentDB related question - https://github.com/cloudposse/terraform-aws-documentdb-cluster

Can anyone please help me configure my terraform module to NOT create a new parameter group, but instead use the default one provided by AWS (or any previously created param group)? There is no mention in the docs on how to do this, only a way to pass parameters for the module to create a new one.

Alex Jurkiewicz avatar
Alex Jurkiewicz

You might need to patch the module. Why is it a problem to use a custom parameter group?

If you use the default one, applying parameter changes in future will require you to first apply a custom parameter group, which will cause downtime

Daniel Huesca avatar
Daniel Huesca

Hello Alex, these are clusters that will almost never need a parameter change. My boss is kinda OCD about having N parameter groups (and any other unnecessary resources) lying around when all the clusters (more than 30) use the same params.

Alex Jurkiewicz avatar
Alex Jurkiewicz

that’s too bad you have an irrational boss

RB (Ronak) (Cloud Posse) avatar
RB (Ronak) (Cloud Posse)

It’s helpful to have a custom param group if you ever need a custom param in the future…

Did you want to use an existing param group instead? Or simply not use a param group at all? I think expanding the module to use an existing param group and existing subnet group would be a nice feature

If you want to put in a PR, you can start here

https://github.com/cloudposse/terraform-aws-documentdb-cluster/blob/5c900d9a2eaf89457ecf86a7b96960044c5856f4/main.tf#L88

terraform-aws-documentdb-cluster/main.tf at 5c900d9a2eaf89457ecf86a7b96960044c5856f4 · cloudposse/terraform-aws-documentdb-cluster

Terraform module to provision a DocumentDB cluster on AWS - terraform-aws-documentdb-cluster/main.tf at 5c900d9a2eaf89457ecf86a7b96960044c5856f4 · cloudposse/terraform-aws-documentdb-cluster

2021-09-01
