#aws (2023-05)

aws Discussion related to Amazon Web Services (AWS)

Archive: https://archive.sweetops.com/aws/


SaZaf avatar

Hi guys, I just learned about CloudWatch. I found that it uses metrics like CPU usage, disk usage, etc., but I also noticed that EC2 displays many metrics in its Monitoring section.

Question: Do we use Cloudwatch with EC2 despite EC2 already providing useful analytics/monitoring? If yes, please share the use cases.

loren avatar

the ec2 metrics are really cloudwatch metrics under the covers

Mark Owusu Ayim avatar

Very true. Unless you want to create some personal insights (dashboards) from the instances you have, all accessed from one place — in that case it makes more sense to get the extra features from CloudWatch, such as events for actionable purposes.
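loren's point can be seen directly from the CLI: the EC2 console's Monitoring tab is rendered from the AWS/EC2 CloudWatch namespace, so the same series is queryable. A minimal sketch, assuming valid AWS credentials (the instance ID and time window below are hypothetical placeholders):

```shell
# Pull the same CPUUtilization data the EC2 Monitoring tab shows,
# straight from CloudWatch (5-minute averages over one hour):
aws cloudwatch get-metric-statistics \
  --namespace AWS/EC2 \
  --metric-name CPUUtilization \
  --dimensions Name=InstanceId,Value=i-0123456789abcdef0 \
  --start-time 2023-05-01T00:00:00Z \
  --end-time 2023-05-01T01:00:00Z \
  --period 300 \
  --statistics Average
```

The extra value of CloudWatch is everything built on top of these metrics: alarms, dashboards across many instances, and events you can act on.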


Balazs Varga avatar

hello all, in Aurora Serverless I see my CPUCreditBalance dropped to 0 after a recovery triggered by AWS. Does it count the same way as EC2 T instances? https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/unlimited-mode-examples.html

Unlimited mode examples - Amazon Elastic Compute Cloud

The following examples explain credit use for instances that are configured as unlimited .
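A hedged sketch of how to watch that balance from the CLI — for RDS/Aurora the CPU credit metrics live in the AWS/RDS namespace, keyed by DBInstanceIdentifier rather than InstanceId (the identifier and time window below are hypothetical, and metric availability depends on the instance class):

```shell
# Track the low-water mark of CPUCreditBalance per hour over one day,
# to see when and how far the balance dropped:
aws cloudwatch get-metric-statistics \
  --namespace AWS/RDS \
  --metric-name CPUCreditBalance \
  --dimensions Name=DBInstanceIdentifier,Value=my-aurora-instance \
  --start-time 2023-05-01T00:00:00Z \
  --end-time 2023-05-02T00:00:00Z \
  --period 3600 \
  --statistics Minimum
```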

Paula Cartas avatar

Hi! I'm using this module https://registry.terraform.io/modules/cloudposse/ecs-alb-service-task/aws/0.68.0 at version 0.66.2. I'm trying to update to the latest version because every time I change an environment variable I have to delete the service and recreate it, since it doesn't take the latest task definition (generated by CodePipeline) to create the new one. I tried using redeploy_on_apply but I couldn't find any configuration that correctly picks up the latest. My configuration looks like this:

module "ecs_alb_service_task" {
  source  = "cloudposse/ecs-alb-service-task/aws"
  version = "0.66.2"
  namespace                          = var.cluster_name
  stage                              = module.global_settings.environment
  name                               = local.project_name
  attributes                         = []
  container_definition_json          = module.container_definition.sensitive_json_map_encoded_list

  #Load Balancer
  alb_security_group                 = var.security_group_id
  ecs_load_balancers                 = local.ecs_load_balancer_internal_config

  #Capacity Provider Strategy 
  capacity_provider_strategies       = var.capacity_provider_strategies
  desired_count                      = 1
  ignore_changes_desired_count       = true
  launch_type                        = module.global_settings.default_ecs_launch_type

  vpc_id                             = var.vpc_id
  subnet_ids                         = var.subnet_ids
  assign_public_ip                   = module.global_settings.default_assign_public_ip
  network_mode                       = "awsvpc"

  ecs_cluster_arn                    = var.cluster_arn
  security_group_ids                 = [var.security_group_id]
  ignore_changes_task_definition     = true
  force_new_deployment               = true
  health_check_grace_period_seconds  = 200
  deployment_minimum_healthy_percent = module.global_settings.default_deployment_minimum_healthy_percent
  deployment_maximum_percent         = module.global_settings.default_deployment_maximum_percent
  deployment_controller_type         = module.global_settings.default_deployment_controller_type
  task_memory                        = local.task_memory
  task_cpu                           = local.task_cpu
  ordered_placement_strategy         = local.ordered_placement_strategy

  label_order                        = local.label_order
  labels_as_tags                     = local.labels_as_tags
  propagate_tags                     = local.propagate_tags
  tags                               = merge(var.tags, local.tags)

  #ECS Service
  task_exec_role_arn                 = [module.task_excecution_role.task_excecution_role_arn]
  task_role_arn                      = [module.task_excecution_role.task_excecution_role_arn]

  depends_on = [
    # (truncated in the original message)
  ]
}
any suggestions?

José avatar

You are basically saying ignore_changes_task_definition = true, meaning: don't respect future updates. It should be false.

Paula Cartas avatar

When I set that option to false, it tries to delete the service and recreate it with an older revision of the task definition

José avatar

Then your problem is not the task definition, since it's supposed to use the latest revision. It's somewhere else. I don't see redeploy_on_apply in your config, which fulfills that purpose.

Paula Cartas avatar

I'm currently modifying my original module (at version 0.66.2) as I showed before. I guessed that redeploy_on_apply in the latest version of this module would take the latest revision of the task definition and update it in the state file. I'm not sure what's wrong

José avatar

One thing is different from the other. redeploy_on_apply does not update the module version itself; it deploys a new task if a new revision is detected in the cluster. 2 different things.

Paula Cartas avatar

No no, I know — but even if I upgrade the version and activate that option, the task definition is not the latest. The picture I sent before is from the 0.66.2 version with ignore_changes_task_definition = false. The next pictures are from v0.68 with redeploy_on_apply = true and ignore_changes_task_definition = false

Paula Cartas avatar

I tried different combinations of ignore_changes_task_definition, force_new_deployment, and redeploy_on_apply, and none of them work

Fizz avatar

If you change ignore_changes_task_definition from true to false, you should expect the service to be destroyed the first time, due to the way the service is coded in the module. A second run should not require a destroy.

Fizz avatar

The reason it picks up an older revision of your task definition is that that is all Terraform knows about. You have updated the task definition outside of Terraform, in CodePipeline.

Fizz avatar

If you are going to manage revisions in CodePipeline, you could pass in the correct task definition family and revision in the variable var.task_definition
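A sketch of that suggestion, assuming the module's task_definition input and Terraform's aws_ecs_task_definition data source (the family name below is hypothetical, and the exact value the module expects — family:revision string vs. full ARN — should be checked against the docs for your module version):

```hcl
# Look up the revision CodePipeline registered most recently,
# so Terraform plans against it instead of the revision in state.
data "aws_ecs_task_definition" "latest" {
  task_definition = "my-service-family" # hypothetical task definition family
}

module "ecs_alb_service_task" {
  source  = "cloudposse/ecs-alb-service-task/aws"
  version = "0.68.0"

  # ... existing settings from the snippet above ...

  task_definition                = data.aws_ecs_task_definition.latest.arn
  ignore_changes_task_definition = false
}
```

This keeps CodePipeline as the source of truth for revisions while letting Terraform follow along, instead of the two fighting over which revision the service should run.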


Bart Coddens avatar

I have a customer that has a huge Oracle database: 120 TB. The limit on RDS is 64 TB. Any suggestions?

Fizz avatar

Sharding, or self-hosting on EC2

Hugo Samayoa avatar

Also talk to an AWS rep. You might get some free credits for moving such a large dataset. They would also give you some advice on your current issue

jsreed avatar

AWS will give your customer free credits and help cover the costs of converting off Oracle… talk to the TAM

jsreed avatar

Otherwise ec2 self host or sharding

Balazs Varga avatar

Is Aurora Serverless v1 HA-compatible?

ccastrapel avatar

Hi there, I wrote a blog post that y’all may be interested in. It discusses how to manage cross-account AWS IAM permissions for different teams with an open-source Python tool called IAMbic. Would love feedback!


Noq: AWS Permission Bouncers: Letting Loose in Dev, Keeping it Tight in Prod

Ever had a slight configuration change take down production services? Wish you could give teams more AWS permissions in dev/test accounts, but less in production? Right sizing IAM policies for each team and account can be a tedious task, especially as your environment grows. In this post, we’ll explore how IAMbic brings order to multi-account AWS IAM chaos.


Alex Atkinson avatar

For AWS Identity Center, is there a way to see which accounts a group has access to via the CLI? There's no way in the console, afaict.

Soren Jensen avatar

Not as far as I know. It's such a missing feature

Alex Atkinson avatar

OK. Was just making sure I didn’t just miss it somehow.
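There is no single API for "accounts per group", but the question can be inverted with a loop: enumerate accounts and permission sets, then filter ListAccountAssignments on the group's PrincipalId. A sketch, assuming AWS credentials with sso-admin and organizations read access (the group ID below is a hypothetical placeholder):

```shell
# Which accounts does a given Identity Center group have assignments in?
INSTANCE_ARN=$(aws sso-admin list-instances \
  --query 'Instances[0].InstanceArn' --output text)
GROUP_ID="906a1234-..."   # Identity Store group ID you are checking

for ACCOUNT in $(aws organizations list-accounts \
                   --query 'Accounts[].Id' --output text); do
  for PS in $(aws sso-admin list-permission-sets \
                --instance-arn "$INSTANCE_ARN" \
                --query 'PermissionSets[]' --output text); do
    # Print AccountId + PermissionSetArn wherever this group is assigned
    aws sso-admin list-account-assignments \
      --instance-arn "$INSTANCE_ARN" \
      --account-id "$ACCOUNT" \
      --permission-set-arn "$PS" \
      --query "AccountAssignments[?PrincipalId=='$GROUP_ID'].[AccountId,PermissionSetArn]" \
      --output text
  done
done
```

Slow on large organizations (it is O(accounts × permission sets) API calls), but it answers the question the console can't.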


Matt Gowie avatar

Does anyone know of any tools that will scan a set of AWS accounts for best practices? Any that are recommended? My company has a list of 40+ best practices that we’ve identified and I’m looking for solutions to quickly check these best practices against a set of accounts or AWS organization.

bradym avatar

I haven’t used it myself yet, but I think https://github.com/cloud-custodian/cloud-custodian sounds like what you’re looking for.


Rules engine for cloud security, cost optimization, and governance, DSL in yaml for policies to query, filter, and take actions on resources
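To give a flavor of how Cloud Custodian expresses a "best practice" as a policy: each rule is a YAML document naming a resource type, filters, and optional actions. A minimal sketch, assuming `pip install c7n` and AWS credentials (the tag name is a hypothetical example of one of the 40+ practices):

```shell
# Write a one-rule policy: flag EC2 instances missing an Owner tag.
cat > policy.yml <<'EOF'
policies:
  - name: ec2-missing-owner-tag
    resource: aws.ec2
    filters:
      - "tag:Owner": absent
EOF

# --dryrun evaluates the filters and reports matches without taking actions.
custodian run --dryrun -s output policy.yml
```

Checking 40+ practices then becomes maintaining 40+ such policy blocks and running them against each account (or via custodian's org-level tooling).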

loren avatar

this is a nice project maintaining a list of various tools… https://github.com/toniblyx/my-arsenal-of-aws-security-tools


List of open source tools for AWS security: defensive, offensive, auditing, DFIR, etc.

loren avatar

if i were to start with just one tool for checking against “best practices”, it would probably be prowler https://github.com/prowler-cloud/prowler


Prowler is an Open Source Security tool for AWS, Azure and GCP to perform Cloud Security best practices assessments, audits, incident response, compliance, continuous monitoring, hardening and forensics readiness. It contains hundreds of controls covering CIS, PCI-DSS, ISO27001, GDPR, HIPAA, FFIEC, SOC2, AWS FTR, ENS and custom security frameworks.
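Prowler is also just a pip install away. The flags below are from the v3 CLI and may differ in other major versions; the profile name is a hypothetical placeholder:

```shell
pip install prowler

# See what controls exist before scanning anything:
prowler aws --list-checks

# Scan only a couple of services in one account:
prowler aws --profile audit --services s3 iam

# Run the checks mapped to a specific compliance framework:
prowler aws --profile audit --compliance cis_1.5_aws
```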

loren avatar

ElectricEye is another great one… https://github.com/jonrau1/ElectricEye


ElectricEye is a multi-cloud, multi-SaaS Python CLI tool for Cloud Asset Management (CAM), Cloud Security Posture Management (CSPM), SaaS Security Posture Management (SSPM), and External Attack Surface Management (EASM) supporting 100s of services and evaluations to harden your public cloud & SaaS environments.

Hao Wang avatar

yeah custodian is a good one

Hao Wang avatar

the others are also interesting projects, thanks

Matt Gowie avatar

Good stuff – Thank you folks.

Sudhish KR avatar

If you are looking for a SaaS solution, I would go with Aqua Security… they bought a company called CloudSploit a few years ago, and they have a good level of reporting/remediation steps for the issues that are detected.