#aws (2022-07)

aws Discussion related to Amazon Web Services (AWS)

Archive: https://archive.sweetops.com/aws/

2022-07-01

Karim Benjelloun avatar
Karim Benjelloun

Hello, do you have any recommendations on unified access (or SSO) for SSH? We don't want to manually copy and delete SSH keys across all our EC2 instances.

Rodrigo Rech avatar
Rodrigo Rech

If you are using AWS, why not use AWS SSO with permission sets in combination with AWS Session Manager?

Karim Benjelloun avatar
Karim Benjelloun

I’m not sure I understand you

Rodrigo Rech avatar
Rodrigo Rech
AWS Systems Manager Session Manager - AWS Systems Manager

Manage your nodes using an auditable and secure one-click browser-based interactive shell or the AWS CLI without having to open inbound ports.

Rodrigo Rech avatar
Rodrigo Rech

With the Systems Manager Agent, you don't need to manage SSH keys or open inbound ports to your machines

Karim Benjelloun avatar
Karim Benjelloun

That sounds good! Thanks, I’m gonna have a look

Rodrigo Rech avatar
Rodrigo Rech

And if you want to layer SSO on top, you can use AWS SSO

Rodrigo Rech avatar
Rodrigo Rech
What is AWS Single Sign-On? - AWS Single Sign-On

AWS Single Sign-On is a cloud-based single sign-on (SSO) service that makes it easy to centrally manage SSO access to all of your AWS accounts and cloud applications. Specifically, it helps you manage SSO access and user permissions across all your AWS accounts in AWS Organizations. AWS SSO also helps you manage access and permissions to commonly used third-party software as a service (SaaS) applications, AWS SSO-integrated applications as well as custom applications that support Security Assertion Markup Language (SAML) 2.0. AWS SSO includes a user portal where your end-users can find and access all their assigned AWS accounts, cloud applications, and custom applications in one place.

Rodrigo Rech avatar
Rodrigo Rech

In my company we use Session Manager + AWS SSO + Okta (as IdP)

Rodrigo Rech avatar
Rodrigo Rech

works like a charm

Rodrigo Rech avatar
Rodrigo Rech

the downside: it only works on AWS

Karim Benjelloun avatar
Karim Benjelloun

glad to hear

Karim Benjelloun avatar
Karim Benjelloun

I mean that should be OK

Karim Benjelloun avatar
Karim Benjelloun

We have around 100 servers, and we're done with copying SSH keys back and forth. So we need something simple

Rodrigo Rech avatar
Rodrigo Rech

yep, I hear your pain

Rodrigo Rech avatar
Rodrigo Rech

you basically need to install the SSM Agent and create some IAM policies to start using it
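
For reference, a hedged sketch of what using it looks like from the AWS CLI once the SSM Agent is running and the instance profile has the AmazonSSMManagedInstanceCore policy attached (the instance ID is a placeholder):

  # list instances that have registered with Systems Manager
  aws ssm describe-instance-information \
    --query 'InstanceInformationList[].[InstanceId,PingStatus]' --output table

  # open an interactive shell on one of them (needs the session-manager-plugin installed locally)
  aws ssm start-session --target i-0123456789abcdef0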

Karim Benjelloun avatar
Karim Benjelloun

it works!

Karim Benjelloun avatar
Karim Benjelloun

thanks Rodrigo

mikesew avatar
mikesew

@Rodrigo Rech: slightly piggybacking on this thread, can you assign AWS SSO to AD groups? We use AWS SSO with Azure AD, but I'm being told by my platform administrator that we can't do nested groups, and thus we have to enter AWS account/role access per user.

Rodrigo Rech avatar
Rodrigo Rech

Hi @mikesew, not sure I got your use case

Rodrigo Rech avatar
Rodrigo Rech

On Okta I assign AWS SSO to specific user groups

Rodrigo Rech avatar
Rodrigo Rech

On the AWS side I grant a Permission Set based on the user group

Rodrigo Rech avatar
Rodrigo Rech

I didn't get what you mean by nested groups

Rodrigo Rech avatar
Rodrigo Rech

If you need more granular control, you could look at ABAC (attribute-based access control) https://docs.aws.amazon.com/singlesignon/latest/userguide/abac.html

Attribute-based access control - AWS Single Sign-On

Attribute-based access control (ABAC) is an authorization strategy that defines permissions based on attributes. You can use AWS SSO to manage access to your AWS resources across multiple AWS accounts using user attributes that come from any AWS SSO identity source. In AWS, these attributes are called tags. Using user attributes as tags in AWS helps you simplify the process of creating fine-grained permissions in AWS and ensures that your workforce gets access only to the AWS resources with matching tags.

mikesew avatar
mikesew

Our org had a set of basic roles, i.e. AWSAdministratorAccess, AWSReadOnlyAccess. We could assign human AD users to those roles but apparently not AD groups. Puzzling.

Rodrigo Rech avatar
Rodrigo Rech

From what I understand @mikesew, it’s possible

Rodrigo Rech avatar
Rodrigo Rech

What happens at my company:

• We manage users and groups at Okta side.

• On the AWS side, we have different Permission Sets, which include different IAM Policies

• Based on the AWS account and user group, we assign this permission set. You can also assign multiple permission sets to a single group within the same account if you wish.
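
A hedged CLI sketch of that last step (the ARNs and IDs are placeholders); the same assignment can also be done in the console or with Terraform's aws_ssoadmin_account_assignment resource:

  aws sso-admin create-account-assignment \
    --instance-arn arn:aws:sso:::instance/ssoins-EXAMPLE \
    --permission-set-arn arn:aws:sso:::permissionSet/ssoins-EXAMPLE/ps-EXAMPLE \
    --target-type AWS_ACCOUNT \
    --target-id 111122223333 \
    --principal-type GROUP \
    --principal-id <identity-store-group-id>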

mikesew avatar
mikesew

Thank you. Okta doesn't have a free tier, does it? For my own learning, I want to set up an AWS org, set up some free directory service like Azure AD, configure AWS SSO to use that Azure AD SAML, then see if I can add nested groups to those roles, not just users.

• Q: Are there any other free-tier directory services I can try in a sandbox? My AWS free tier account expired. I could try JumpCloud, but was hoping to use something closer to my company (Azure AD)

Andrey Taranik avatar
Andrey Taranik
Teleport: Easiest, most secure way to access infrastructure | Teleport

The open-source Teleport Access Plane consolidates connectivity, authentication, authorization, and audit into a single platform to improve security & agility.

Karim Benjelloun avatar
Karim Benjelloun

Thanks @Andrey Taranik. Any alternatives? We tried Teleport, but for some reason we keep needing to restart services every now and then, and I don't find it straightforward

Michael Galey avatar
Michael Galey

I like Cloudflare Tunnels a lot if you use Cloudflare.

Andrey Taranik avatar
Andrey Taranik
Smallstep SSH

Smallstep SSH provides single sign-on SSH via your identity provider—replacing key management agony with secure, short-lived SSH certificates.

Andrey Taranik avatar
Andrey Taranik

or just build your own solution as described in the Smallstep blog https://smallstep.com/blog/diy-single-sign-on-for-ssh/

DIY Single Sign-On for SSH

Let’s set up Google SSO for SSH! We’ll use OpenID Connect (OIDC), SSH certificates, a clever SSH configuration tweak, and Smallstep’s open source packages.

Karim Benjelloun avatar
Karim Benjelloun

Thanks Andrey! I’m gonna have a look

loren avatar
Introducing Tailscale SSH

Today we’re delighted to introduce Tailscale SSH, to more easily manage SSH connections in your tailnet. Tailscale SSH allows you to establish SSH connections between devices in your Tailscale network, as authorized by your access controls, without managing SSH keys, and authenticates your SSH connection using WireGuard®.

Soren Jensen avatar
Soren Jensen

Does anyone know if it's possible to see how much electricity your AWS resources are consuming? Alternatively, to find a CO2 footprint for the resources?

Vlad Ionescu (he/him) avatar
Vlad Ionescu (he/him)
New – Customer Carbon Footprint Tool | Amazon Web Services

Carbon is the fourth-most abundant element in the universe, and is also a primary component of all known life on Earth. When combined with oxygen it creates carbon dioxide (CO2). Many industrial activities, including the burning of fossil fuels such as coal and oil, release CO2 into the atmosphere and cause climate change. As part […]

Soren Jensen avatar
Soren Jensen

Thanks, I was sure I had seen an article about it somewhere..

2022-07-04

jonjitsu avatar
jonjitsu

Anyone have any flakiness issues with CodeDeploy? I have a lot of services using it, and when I trigger too many CodeDeploy deployments at once the whole thing seems to just freeze, but not always; it's weird.

Stephen Tan avatar
Stephen Tan

I use CodeDeploy, but on a small scale. Works really well and very happy with it - nothing to maintain and pretty much free. My use case is to get it to pull repos from GitHub and trigger Ansible runs, so you can get it to do pretty much anything that Tower does but in a proper way - securely, pulling rather than pushing, etc. It's pretty sweet

2022-07-05

Adnan avatar

I am currently at EKS version 1.20. Do you know if there is a deadline for upgrading from this version?

Stef avatar

End of support for 1.20 is 03 Oct 2022

Alex Jurkiewicz avatar
Alex Jurkiewicz
Amazon EKS Kubernetes versions - Amazon EKS

The Kubernetes project is continually integrating new features, design updates, and bug fixes. The community releases new Kubernetes minor versions, such as 1.22 . New version updates are available on average every three months. Each minor version is supported for approximately twelve months after it’s first released.
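
As a quick, hedged check of where a given cluster stands (cluster name and region are placeholders):

  aws eks describe-cluster --name my-cluster --region eu-central-1 \
    --query 'cluster.version' --output text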

Adnan avatar

thanks

Constantin Popa avatar
Constantin Popa

Here's the response I got from AWS support with regard to my concern about not being able to upgrade our EKS cluster before the end-of-support date; hope it helps people plan their upgrades accordingly:

Query 1: What is the impact of running our EKS with K8s 1.19 after June 2022? According to the docs, 1.19 will be unsupported past that date.

* From [1]: on the end of support date, you can no longer create new Amazon EKS clusters with the unsupported version.
* Existing control planes are automatically updated by Amazon EKS to the earliest supported version through a gradual deployment process after the end of support date.
* After the automatic control plane update, make sure to manually update cluster add-ons and Amazon EC2 nodes.
* In this regard, you might ask when exactly the control plane update takes place.
* Per the same document [1], Amazon EKS can't provide specific timeframes. Automatic updates can happen at any time after the end of support date.
* We recommend that you proactively update your control plane without relying on the Amazon EKS automatic update process.

==================
Query 2: Will the cluster be shut down after that date?

* No, your cluster won't be shut down from the AWS end.
* However, as mentioned above, the control plane of the EKS cluster will be upgraded at any time after the end of support date.

==================
Query 3: Won't we be able to add new nodes?

* After the end of support date you won't be able to add new nodes with the expired version, since support for that version will have expired.
* Also, you won't be able to create new EKS clusters with a version for which support has ended.

==================
Query 4: What is the actual date for end of support, 1st of June or 30th of June?

* The date for Amazon EKS end of support for Kubernetes version 1.19 is June 30, 2022.
* You can check the Amazon EKS Kubernetes release calendar [2] for the same.

==================

As you have mentioned that the upgrade might not be completed before the end of support date for EKS 1.19 (June 30th, 2022), I recommend you provide the EKS cluster, region, and the business justification for why support should be extended, so that I can reach out to the EKS Service Team and create a request for an extension of support on your behalf for your EKS cluster on version 1.19.

Yoav Maman avatar
Yoav Maman

I'm having trouble finding an answer in the AWS docs. Does anyone happen to know whether it's possible to configure an Application Load Balancer to accept requests only from API Gateway?

Alex Jurkiewicz avatar
Alex Jurkiewicz

This sort of question is great for AWS support. It’s a simple closed technical query

jsreed avatar

You could use a WAF in front of the ALB to check for the API GW HTTP header and then allow access based on that header

Milan Simonovic avatar
Milan Simonovic

but a malicious user can also manually set that HTTP header, right?

Milan Simonovic avatar
Milan Simonovic

what's the use case for having both, actually? don't they do the same job?
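
A common way to address both points above is to have API Gateway inject a secret value into a custom header and have the WAF rule match on that value rather than on the header's mere presence, so guessing the header name alone is not enough. A hedged Terraform sketch under those assumptions (the resource names, header name, and var.origin_verify_secret are illustrative, not from this thread):

  resource "aws_wafv2_web_acl" "alb_origin_check" {
    name  = "allow-only-api-gateway"
    scope = "REGIONAL"

    default_action {
      block {}
    }

    rule {
      name     = "allow-secret-origin-header"
      priority = 1

      action {
        allow {}
      }

      statement {
        byte_match_statement {
          # shared secret injected by the API Gateway integration request and rotated out-of-band
          search_string         = var.origin_verify_secret
          positional_constraint = "EXACTLY"

          field_to_match {
            single_header {
              name = "x-origin-verify"
            }
          }

          text_transformation {
            priority = 0
            type     = "NONE"
          }
        }
      }

      visibility_config {
        cloudwatch_metrics_enabled = false
        metric_name                = "allow-secret-origin-header"
        sampled_requests_enabled   = false
      }
    }

    visibility_config {
      cloudwatch_metrics_enabled = false
      metric_name                = "alb-origin-check"
      sampled_requests_enabled   = false
    }
  }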

2022-07-06

yi gong avatar
yi gong

I hit an error when using terraform-aws-eks-cluster:

  Error: Invalid count argument
    on .terraform/modules/eks/main.tf line 34, in resource "aws_kms_key" "cluster":
    34: count = local.enabled && var.cluster_encryption_config_enabled && var.cluster_encryption_config_kms_key_id == "" ? 1 : 0

sohaibahmed98 avatar
sohaibahmed98
bridgecrewio/AirIAM

Least privilege AWS IAM Terraformer

tor avatar

Hey I was wondering where I could find documentation for all the arguments s3_replication_rules accepts for https://github.com/cloudposse/terraform-aws-s3-bucket#input_s3_replication_rules ?

RB avatar
  dynamic "rule" {
    for_each = local.s3_replication_rules == null ? [] : local.s3_replication_rules

    content {
      id       = rule.value.id
      priority = try(rule.value.priority, 0)

      # `prefix` at this level is a V1 feature, replaced in V2 with the filter block.
      # `prefix` conflicts with `filter`, and for multiple destinations, a filter block
      # is required even if it empty, so we always implement `prefix` as a filter.
      # OBSOLETE: prefix   = try(rule.value.prefix, null)
      status = try(rule.value.status, null)

      # This is only relevant when "filter" is used
      delete_marker_replication {
        status = try(rule.value.delete_marker_replication_status, "Disabled")
      }

      destination {
        # Prefer newer system of specifying bucket in rule, but maintain backward compatibility with
        # s3_replica_bucket_arn to specify single destination for all rules
        bucket        = try(length(rule.value.destination_bucket), 0) > 0 ? rule.value.destination_bucket : var.s3_replica_bucket_arn
        storage_class = try(rule.value.destination.storage_class, "STANDARD")

        dynamic "encryption_configuration" {
          for_each = try(rule.value.destination.replica_kms_key_id, null) != null ? [1] : []

          content {
            replica_kms_key_id = try(rule.value.destination.replica_kms_key_id, null)
          }
        }

        account = try(rule.value.destination.account_id, null)

        # https://docs.aws.amazon.com/AmazonS3/latest/userguide/replication-walkthrough-5.html
        dynamic "metrics" {
          for_each = try(rule.value.destination.metrics.status, "") == "Enabled" ? [1] : []

          content {
            status = "Enabled"
            event_threshold {
              # Minutes can only have 15 as a valid value.
              minutes = 15
            }
          }
        }

        # This block is required when replication metrics are enabled.
        dynamic "replication_time" {
          for_each = try(rule.value.destination.metrics.status, "") == "Enabled" ? [1] : []

          content {
            status = "Enabled"
            time {
              # Minutes can only have 15 as a valid value.
              minutes = 15
            }
          }
        }

        dynamic "access_control_translation" {
          for_each = try(rule.value.destination.access_control_translation.owner, null) == null ? [] : [rule.value.destination.access_control_translation.owner]

          content {
            owner = access_control_translation.value
          }
        }
      }

      dynamic "source_selection_criteria" {
        for_each = try(rule.value.source_selection_criteria.sse_kms_encrypted_objects.enabled, null) == null ? [] : [rule.value.source_selection_criteria.sse_kms_encrypted_objects.enabled]

        content {
          sse_kms_encrypted_objects {
            status = source_selection_criteria.value
          }
        }
      }

      # Replication to multiple destination buckets requires that priority is specified in the rules object.
      # If the corresponding rule requires no filter, an empty configuration block filter {} must be specified.
      # See https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/s3_bucket
      dynamic "filter" {
        for_each = try(rule.value.filter, null) == null ? [{ prefix = null, tags = {} }] : [rule.value.filter]

        content {
          prefix = try(filter.value.prefix, try(rule.value.prefix, null))
          dynamic "tag" {
            for_each = try(filter.value.tags, {})

            content {
              key   = tag.key
              value = tag.value
            }
          }
        }
      }
    }
  }
RB avatar

id, priority, status, etc are keys

RB avatar

basically anything in aws_s3_bucket_replication_configuration’s rule

RB avatar

The reason for list(any) is that Terraform doesn't support optional object attributes until Terraform 1.3, which is not out of beta yet
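
As an illustration only (not from the module's docs), an s3_replication_rules input shaped to match the rule block above might look roughly like this:

  s3_replication_rules = [
    {
      id       = "replicate-logs"
      priority = 1
      status   = "Enabled"

      # per-rule destination bucket; falls back to var.s3_replica_bucket_arn when omitted
      destination_bucket = "arn:aws:s3:::example-replica-bucket"

      destination = {
        storage_class = "STANDARD_IA"
      }

      # prefix is implemented as a filter, per the comments in the module source above
      filter = {
        prefix = "logs/"
        tags   = {}
      }

      delete_marker_replication_status = "Disabled"
    }
  ]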

tor avatar

Thanks for the pointer. I appreciate it.

RB avatar

but it could still be documented in the respective variable's description. Perhaps we need to get better at that

tor avatar

The documentation just says a list(any)

2022-07-07

Saleem Clarke avatar
Saleem Clarke

Anyone know how to have Session Manager sit behind OpenVPN, so a user is required to connect to OpenVPN before a session can be started?

2022-07-08

2022-07-09

Saleem Clarke avatar
Saleem Clarke

this should be via the AWS CLI, not via the console

2022-07-10

idan levi avatar
idan levi

Hey all! I'm trying to install the aws-ebs-csi-driver following this guide https://docs.aws.amazon.com/eks/latest/userguide/csi-iam-role.html and created all the roles and policies. Taking a quick look at the ebs-csi-node pod in my k8s env, I can see this error from the ebs-plugin container:

I0628 10:44:05.130666 1 metadata.go:85] retrieving instance data from ec2 metadata
I0628 10:44:05.135264 1 metadata.go:92] ec2 metadata is available
panic: could not get number of attached ENIs
goroutine 1 [running]:
github.com/kubernetes-sigs/aws-ebs-csi-driver/pkg/driver.newNodeService(0xc0000c6f00)
/go/src/github.com/kubernetes-sigs/aws-ebs-csi-driver/pkg/driver/node.go:86 +0x269
github.com/kubernetes-sigs/aws-ebs-csi-driver/pkg/driver.NewDriver({0xc000609f30, 0x8, 0x55})
/go/src/github.com/kubernetes-sigs/aws-ebs-csi-driver/pkg/driver/driver.go:95 +0x38e
main.main()
/go/src/github.com/kubernetes-sigs/aws-ebs-csi-driver/cmd/main.go:46 +0x365

I'm using driver version v1.7.0-eksbuild.0 and k8s version 1.20. Do you know how I can solve it? Thanks!

Creating the Amazon EBS CSI driver IAM role for service accounts - Amazon EKS

The Amazon EBS CSI plugin requires IAM permissions to make calls to AWS APIs on your behalf.

Alex Jurkiewicz avatar
Alex Jurkiewicz

this is not an official AWS support channel

Alex Jurkiewicz avatar
Alex Jurkiewicz

given “panic: could not get number of attached ENIs”, I suspect the permissions you’ve attached to the role are incomplete

RO avatar

Is this the right channel for questions regarding CloudFormation?

2022-07-11

Karim Benjelloun avatar
Karim Benjelloun

Question. Is it more common to do VPC Peering Connections with vendors of managed services such as Databases? Or is it more common to do VPC PrivateLink & Endpoints?

loren avatar

I’d have an architectural preference for private link, personally, I think. Not sure what would be more “common” though

jsreed avatar

VPC peering was more common before PrivateLink was available. PrivateLink is now much the preferred method, as VPC peering is more intended for cross-account comms within your own org

Taz avatar

I have been asked to move 2 .NET Core applications that are running as apps on Azure App Services to AWS. What is the best method to deploy these apps? Would the deployment need a Beanstalk per .NET Core app, or would this option be more suitable? I am after the quickest solution! https://docs.aws.amazon.com/toolkit-for-visual-studio/latest/user-guide/deployment-beanstalk-multiple-application.html

2022-07-12

Frank avatar

Hi all!

We have an Aurora Serverless PostgreSQL instance in a private subnet. Now our customer wants to connect an externally hosted application (on Azure) to it. As a temporary fix we have manually created a new “Regional Cluster” (non-serverless) based off a snapshot of the Serverless DB Cluster, gave it a public IP and firewalled it to the customer-provided subnets.

However, they need more up-to-date data. I would like to avoid deleting the old regional DB cluster and re-creating it on a daily/weekly basis, since that would give it a new IP every time.

For this I am currently leaning toward setting up a fresh (public, firewalled, SSL-enforced) DB and using DMS to sync the databases, so that the external party always has access to the most recent data without needing access to the actual DB instance. The snapshot alone is 76GiB and the sync should be done outside of office hours, which makes it a bit more tricky.

Would this be a good approach or are there better/easier alternatives? Thanks!

Alex Jurkiewicz avatar
Alex Jurkiewicz

consider:

  1. Moving the original database. This will be cheaper than running two copies of the DB.
  2. Peering the Azure VPC via a VPN.

Frank avatar

For performance (and security) reasons I would prefer having a separate instance for them though

Alanis Swanepoel avatar
Alanis Swanepoel

I don't disagree with having a separate instance, but would still advise having it VPC-locked and accessible over a VPN.

Frank avatar

Agreed. But the only way to properly automate migrating the data (without it changing IPs every time with a snapshot restore) would be using DMS? Or is there an easier approach?

Alex Jurkiewicz avatar
Alex Jurkiewicz

well… is a provisioned replica supported by aurora serverless?

Frank avatar
Using Amazon Aurora Serverless v1 - Amazon Aurora

Work with Amazon Aurora Serverless v1, an on-demand, autoscaling configuration for Amazon Aurora.

Rodrigo Rech avatar
Rodrigo Rech

I don’t have the full context here, but it seems you are trying to fix an architecture issue with an infrastructure solution. From an application/architecture perspective, this Azure application should directly access an API, not the DB. Each service/app should have its own database and expose its data using some API/integration mechanism. Sharing it across multiple services/apps will make your data governance, security, compliance, and operations chaotic in the long run.

2022-07-13

Kevin H avatar
Kevin H

A friend of mine is the CEO of this early-stage startup and asked that I share it around, in case anyone finds it interesting: https://www.usage.ai/

Usage AI | save 57% on AWS

Usage AI’s automated cloud management tools help companies save time and money in the cloud.

Alex Jurkiewicz avatar
Alex Jurkiewicz

truly a fulsome endorsement

Darren Cunningham avatar
Darren Cunningham

love that pricing model

Matt McLane avatar
Matt McLane

Very interesting

2022-07-14

nobodyreally needstoknow avatar
nobodyreally needstoknow

Hi, I am trying to integrate Cognito with an ALB, but I am getting this error when creating the listener rule with the "client_credentials" OAuth flow:

│ Error: Error creating LB Listener Rule: InvalidLoadBalancerAction: The authorization code grant OAuth flow must be enabled in the user pool client

I don’t understand why client credentials does not work with the ALB.

Alex Mills avatar
Alex Mills

Have you configured a hosted UI for the app client? It's a requirement for the OAuth flow

nobodyreally needstoknow avatar
nobodyreally needstoknow

yes I have

nobodyreally needstoknow avatar
nobodyreally needstoknow

When I click the hosted UI link, it says "An error was encountered with the requested page."

nobodyreally needstoknow avatar
nobodyreally needstoknow

“unauthorized client”

nobodyreally needstoknow avatar
nobodyreally needstoknow
Error: Error creating LB Listener Rule: InvalidLoadBalancerAction: The authorization code grant OAuth flow must be enabled in the user pool client
nobodyreally needstoknow avatar
nobodyreally needstoknow

If I set the Cognito user pool client's allowed OAuth flow to "Authorization code grant" and create the load balancer, it will get created. I can then change it to "client credentials" afterwards.

2022-07-15

Balazs Varga avatar
Balazs Varga

Hello all, is there a way to renew a cert where we have only a private hosted zone? I can't access the main public domain, so I can't do my usual trick of pointing it to a new public zone until the cert is renewed.

jsreed avatar

Not sure I understand your ask… you have a public domain cert that you want to renew via a private CA server?

jsreed avatar

If that is the case, no, that's not possible. Highly recommend using Let's Encrypt; it provides auto-renewing, free public domain certificates. https://letsencrypt.org/

Let's Encrypt

Let’s Encrypt is a free, automated, and open certificate authority brought to you by the nonprofit Internet Security Research Group (ISRG).

Eric Villa avatar
Eric Villa

Hi! Is anyone going to AWS re:Inforce?

2022-07-18

Desire BANSE avatar
Desire BANSE

Hello all. Is there a way to programmatically upgrade the Kubernetes version of an EKS cluster (on AWS)?
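
For what it's worth, the EKS API does expose this; a minimal, hedged AWS CLI sketch (cluster name and target version are placeholders, and minor versions can only be bumped one at a time):

  # request a control-plane upgrade
  aws eks update-cluster-version --name my-cluster --kubernetes-version 1.21

  # poll the update until its status is Successful
  aws eks describe-update --name my-cluster --update-id <id-returned-by-previous-call>

Managed node groups can then be moved with aws eks update-nodegroup-version.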

Desire BANSE avatar
Desire BANSE

Thank you! That is great.

Victor Grenu avatar
Victor Grenu

AWS Security Digest Newsletter #79 is out!

• Fourteen AWS Security Best Practices in IAM [VIDEO]
• Speeding Up AWS IAM Least Privileges
• Open-source proof-of-concept client for IAM Roles Anywhere

Read more: https://asd.zoph.io

AWS Security Digest

AWS Security Digest Weekly Newsletter. Curated by Victor GRENU.

2022-07-19

DaniC (he/him) avatar
DaniC (he/him)

Hi folks, I'm trying to get a feel for what sort of solutions/approaches you took to manage/adhere to the IAM least-privilege principle.

I find that the more granular we go, the higher the cost of managing it for various users. AWS managed policies are too "open", and when you combine that across various accounts it gets even harder.

If going with AWS SSO, we need to work on permission sets, and the main problem of managing them is still there imo.

carlos.clemente avatar
carlos.clemente

are you using RBAC or ABAC?

DaniC (he/him) avatar
DaniC (he/him)

I have a mix of:

• IAM users mapped to groups (custom/inline/managed policies)

• IAM users with directly attached managed policies

• AWS SSO users mapped to permission sets

However, the question is not what I have, as I know it's rubbish; the question is how other folks have managed to adhere to the least-privilege principle in a "sane" way without too much operational overhead, especially when not everyone has the same privileges in prod accounts.

jsreed avatar

I would say the most "typical" or common approach you will see is user management via Active Directory/Azure AD -> AWS SSO (or another identity provider, à la Centrify) -> Control Tower -> IAM roles -> services. To that end, you have your users in AD and create groups for those users; you then map those groups to roles in IAM, and the roles define access to services. Use Control Tower to centrally manage the roles/policies for your account access. Getting more into the grit: what I have done, for simplicity's sake, is to have basically 3 roles per service that your org will be using: an Admin role, a User role, and a Read-Only role.

jsreed avatar

Then it becomes easy to assign matching groups of users from AD to one of the 3 roles in any given service. The advantage is that you keep all your user management inside AD, and it makes it easy to assign users or groups of users to services, and take away that access just as easily, without having to go in and check every AWS service and role you have created otherwise.

jsreed avatar

You also get the advantage of using nested grouping in AD to assign a single person to multiple roles: admin for one service, RO for another.

jsreed avatar

Also a really good idea to leverage Service Catalog along with this, so based on group/role assignment users are presented with a list of services they have access to, and nothing they don't.

DaniC (he/him) avatar
DaniC (he/him)

Thanks for the detailed info @jsreed!
"Use Control Tower to centrally manage the roles/policies for your account access."
Is this version controlled somehow, or is it UI driven?

DaniC (he/him) avatar
DaniC (he/him)

For audit reasons, folks where I currently work had to log every change via Jira; then I came and added TF, but that is getting out of control, as you end up providing a "managed" policy or a role that covers "a lot" so as not to spend tons of time on these requests. And the whole process is also odd: after the Jira ticket is raised and approved, someone (or "a monkey") needs to raise a PR to update the tfvars file. Not the smoothest process.

2022-07-21

Adnan avatar

Did anybody experience latency with one service calling a service in another subnet/AZ?

I have an issue where an app/pod running in EKS responds much faster to requests when running in a specific subnet/AZ compared to running in the other subnets/AZs.

The only obvious characteristic of the "fast" subnet/AZ is that the ElastiCache/Redis the app is heavily using runs in it.

Any ideas about how to debug this?

Alex Jurkiewicz avatar
Alex Jurkiewicz

inter-AZ latency (~1-2ms) is higher than intra-AZ latency (<1ms). Is that your question?

Alex Jurkiewicz avatar
Alex Jurkiewicz

if one service talks to another service, that will be faster if they are colocated in the same AZ

Adnan avatar

To be more specific: in the AZ where Redis is colocated, the app responds to a request 50%+ faster than in the other AZs

example:

eu-central-1b

0m0.546s
0m0.537s
0m0.567s

other AZs

0m1.141s
0m1.312s
0m1.299s
Alex Jurkiewicz avatar
Alex Jurkiewicz

You’re going to need some more specific numbers than overall latency from an internal app

Tsu Wei Quan avatar
Tsu Wei Quan

Hello team! I require some advice/help on this.

I just deployed terraform-aws-elasticsearch (7.10) from this module https://github.com/cloudposse/terraform-aws-elasticsearch via Terraform. Then, via the AWS console, I updated my ES cluster to OpenSearch v1.2.

Now I wonder if my Terraform code will be synced to the changes? I believe it will not be in sync. Can I still use this same module for OpenSearch?

cloudposse/terraform-aws-elasticsearch

Terraform module to provision an Elasticsearch cluster with built-in integrations with Kibana and Logstash.

Alex Jurkiewicz avatar
Alex Jurkiewicz

With aws_elasticsearch_domain, you can specify OpenSearch versions: https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/elasticsearch_domain

Milan Simonovic avatar
Milan Simonovic

I did the same; it should work fine if you change the TF to:

  elasticsearch_version = "OpenSearch_1.2"
Milan Simonovic avatar
Milan Simonovic

and then just refresh state
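
In other words, something along these lines (hedged; exact flags depend on your Terraform version):

  # after setting elasticsearch_version = "OpenSearch_1.2" in the module block
  terraform apply -refresh-only   # Terraform >= 0.15.4; older versions can use `terraform refresh`
  terraform plan                  # should now show no changes for the upgrade done in the console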

Tsu Wei Quan avatar
Tsu Wei Quan

yup it works! thank you guys!!!

2022-07-22

2022-07-27

Jeremy (UnderGrid Network Services) avatar
Jeremy (UnderGrid Network Services)

I'd be interested to hear if anyone else has tried to have private hosted zones for services within a given region while also having a public hosted zone that can point to the active regional resource. I'm trying to figure out if there's an automatic way to go about this or if I need to look at crafting something external

Jim Park avatar
Jim Park

Could it be broken out as two independent concerns? For example, you could have private service discovery using Cloud Map and App Mesh, which is in effect private DNS. On the backend, whatever services present themselves with the appropriate Envoy container get the traffic. On the frontend, regional load balancers can represent their geographic regions using geolocation routing in Route 53.

2022-07-28

Yordon Smith avatar
Yordon Smith

Hey everyone, wondering about an easy yet effective way to do a bulk deletion of EBS snapshots. Got a list (thousands) of older snapshots to be cleaned. Appreciate your input.

Josh B. avatar
Josh B.

I think I saw some lambda functions that did this; also, you can use the EC2 Lifecycle Manager. I am not sure if it will only clean up going forward or if it’s retro. Maybe something like this https://medium.com/nerd-for-tech/ebs-snapshot-management-using-aws-lambda-and-cloudwatch-d961fdbe3772

Yordon Smith avatar
Yordon Smith

Ya, the life cycle manager doesn’t seem to be useful for the existing snapshots.

Denis avatar

Were these snapshots created manually?

Balazs Varga avatar
Balazs Varga
- name: "Delete snapshots for {{ domain }} cluster"
  block:
    - name: Set the retry count
      set_fact:
        retry_count: "{{ 0 if retry_count is undefined else retry_count|int + 1 }}"

    - name: "Get all remaining snapshots"
      raw: "aws ec2 describe-snapshots --cli-read-timeout 300 --filters Name=tag:kubernetes.io/cluster/{{ domain }},Values=owned --region {{ region }} | jq -r '.Snapshots | .[] | .SnapshotId'"
      register: snapshots_to_delete

    - name: Delete snapshots
      raw: "aws ec2 delete-snapshot --snapshot-id {{ item }} --region {{ region }}"
      loop: "{{ snapshots_to_delete.stdout_lines }}"
  rescue:
    - fail:
        msg: Ended after 5 retries
      when: retry_count|int == 5

    - include_tasks: delete_snapshots.yaml
  tags:
    - delete-cluster
    - delete-snapshots
    - skip-delete-cluster

I use this with ansible…

Yordon Smith avatar
Yordon Smith

@Denis Unfortunately yes, it seems so. Not sure if they were using any other tool, but they certainly weren't created through AWS Backup or AWS DLM.

Denis avatar

If you have a tag you can filter on to decide exactly which to delete, then a simple AWS CLI call or a small SDK app can do the trick.

Yordon Smith avatar
Yordon Smith

@Denis Now, that's another challenge, as none of them have tags. I am getting tags added and segregating a list (which needs to be cleaned up and which doesn't) using AWS-Tagger.

But I don't see a way of deleting snapshots by passing a file as input to read and delete in bulk using the CLI?

Denis avatar

No, you'll need a for loop, like this one for example

Denis avatar

Or you can run one command that lists the snapshot IDs based on the tag (or whatever), and pipe that output into that for loop
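
Something along these lines, as a hedged sketch (the tag key/value is a placeholder for whatever AWS-Tagger ends up applying):

  # list snapshot IDs owned by this account that carry the cleanup tag, then delete them one by one
  aws ec2 describe-snapshots \
    --owner-ids self \
    --filters "Name=tag:cleanup,Values=true" \
    --query 'Snapshots[].SnapshotId' \
    --output text \
  | tr '\t' '\n' \
  | while read -r snapshot_id; do
      aws ec2 delete-snapshot --snapshot-id "$snapshot_id"
    done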

Yordon Smith avatar
Yordon Smith

@Denis @Balazs Varga Great. Thanks for your inputs. Let me try that and will let you know how it goes.

Balazs Varga avatar
Balazs Varga

Do you know anything about the current issue? Any ETA to resolve it? All clusters in Ohio are down

Josh B. avatar
Josh B.

Mine seems to have come back up for now, lol. Datadog ddos’d my voicemail though.

Balazs Varga avatar
Balazs Varga

it comes back and goes down… flaky…

Josh B. avatar
Josh B.

Yeah, for sure.

Balazs Varga avatar
Balazs Varga

They wrote only 1 AZ was affected, but I saw errors in all AZs

Josh B. avatar
Josh B.

Yeah, it was def all of them, even if it was brief. I literally saw all of my AZs go down lol

2022-07-29

karandeep singh avatar
karandeep singh

Hello, team! https://github.com/cloudposse/terraform-aws-emr-cluster Does this module support instance fleets?

cloudposse/terraform-aws-emr-cluster

Terraform module to provision an Elastic MapReduce (EMR) cluster on AWS
