#aws (2022-02)

aws Discussion related to Amazon Web Services (AWS)

Archive: https://archive.sweetops.com/aws/

2022-02-01

Maxim Mironenko (Cloud Posse) avatar
Maxim Mironenko (Cloud Posse)
09:33:14 PM

@Maxim Mironenko (Cloud Posse) has joined the channel

jonjitsu avatar
jonjitsu

Anyone have advice on setting up a Windows EC2 where the server name is changed and then the real bootstrap script is run? Windows needs a restart after changing the server name, which is a pain.

loren avatar

Good advice? No… Switch to Linux…

2022-02-02

emem avatar

I checked online for how I can update an OS on my own when it is marked as required on the maintenance tab, and tried doing this, but it fails to start the update. Has anyone done a new operating system update for RDS manually, before the maintenance window?

RB (Ronak) (Cloud Posse) avatar
RB (Ronak) (Cloud Posse)

@emem you can upgrade it outside of the maintenance window by modifying the RDS cluster/instance: set it to the desired version and click "apply immediately"
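A rough sketch of the CLI call this describes, built as an argv list so it can be inspected before running; the instance identifier and version are hypothetical placeholders:

```python
import shlex

def rds_upgrade_command(instance_id, engine_version, major=False):
    """Build the `aws rds modify-db-instance` argv for an engine upgrade.

    With --apply-immediately the change starts right away instead of
    waiting for the next maintenance window (expect downtime).
    """
    argv = [
        "aws", "rds", "modify-db-instance",
        "--db-instance-identifier", instance_id,
        "--engine-version", engine_version,
        "--apply-immediately",
    ]
    if major:  # required when crossing a major version boundary
        argv.append("--allow-major-version-upgrade")
    return argv

# Hypothetical identifier and target version:
cmd = rds_upgrade_command("my-db", "13.5", major=True)
print(shlex.join(cmd))
```

For an OS-level ("system-update") maintenance item specifically, the closer CLI analogue is aws rds apply-pending-maintenance-action with --apply-action system-update --opt-in-type immediate.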

emem avatar

Thanks

emem avatar

this OS upgrade

DaniC (he/him) avatar
DaniC (he/him)

hi folks, does anyone know how I can continue troubleshooting an error I get with sts decode?

An error occurred (InvalidAuthorizationMessageException) when calling the DecodeAuthorizationMessage operation: Message is not valid, returned by aws sts decode-authorization-message --encoded-message  ?

I'm only trying to copy an AMI from one region to another (it works except when copying to Bahrain). Opened a ticket with AWS support but no progress, "it works for me". And sadly I don't see it in CloudTrail either, hence… stuck. Note I'm an Admin so I shouldn't have any permission issues, and there's no SCP either.

emem avatar

hi there. Has anyone seen this error after adding Datadog as a sidecar to EKS Fargate?

Your pod's cpu/memory requirements exceed the max Fargate configuration
jose.amengual avatar
jose.amengual

Fargate has a limit of 16 GB of RAM

emem avatar

so I'm using Fargate in EKS

jose.amengual avatar
jose.amengual

not sure if EKS Fargate has the same limit

emem avatar

do I need to increase this memory?

jose.amengual avatar
jose.amengual
Fargate pod configuration - Amazon EKS

This section describes some of the unique pod configuration details for running Kubernetes pods on AWS Fargate.

jose.amengual avatar
jose.amengual

Since Amazon EKS Fargate runs only one pod per node, the scenario of evicting pods in case of fewer resources doesn’t occur. All Amazon EKS Fargate pods run with guaranteed priority, so the requested CPU and memory must be equal to the limit for all of the containers. For more information, see Configure Quality of Service for Pods in the Kubernetes documentation.
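The requests-equal-limits rule quoted above (the Guaranteed QoS class) can be checked mechanically. A minimal sketch, using plain dicts in place of real Kubernetes container specs:

```python
def is_guaranteed(containers):
    """True if every container sets requests equal to limits for both
    cpu and memory -- the Guaranteed QoS class that EKS Fargate expects."""
    for c in containers:
        req, lim = c.get("requests", {}), c.get("limits", {})
        for resource in ("cpu", "memory"):
            if resource not in req or req.get(resource) != lim.get(resource):
                return False
    return True

# App + Datadog sidecar, as in the thread (values are illustrative):
pod = [
    {"requests": {"cpu": "500m", "memory": "1Gi"},
     "limits":   {"cpu": "500m", "memory": "1Gi"}},   # app
    {"requests": {"cpu": "200m", "memory": "256Mi"},
     "limits":   {"cpu": "200m", "memory": "256Mi"}}, # datadog agent
]
print(is_guaranteed(pod))
```

Fargate sizes the node from the pod's aggregate resource requests, rounded up to the nearest supported CPU/memory combination, which is how an app plus a sidecar can exceed the maximum Fargate configuration.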

emem avatar

oh ok, thanks a lot.

emem avatar

so in my case I have two containers in the pod, one for my app and one for Datadog, and the node cannot manage them both. Not sure if there is a way to request more CPU on Fargate

jose.amengual avatar
jose.amengual

what if you set both containers to a lower setting than the maximum node setting?

jose.amengual avatar
jose.amengual

you always need to account for OS memory usage, the EKS API controller running, other networking/virtualization memory usage, etc., and what is left is what you can use

jose.amengual avatar
jose.amengual

there is always an overhead with any type of virtualization

jose.amengual avatar
jose.amengual

so you can’t just overcommit

emem avatar

true.

emem avatar

what I did as a stop gap was to remove the CPU and memory config from both containers so each can provision the amount it needs, until I can fix the issue


2022-02-04

Wédney Yuri avatar
Wédney Yuri

Hi there, do you know what’s OCB Cloudfront in Cost Explorer? The usage type is UE-OCB and the Service is OCBCloudfront.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

That’s an odd one. I am not sure what OCB stands for, and googling for it didn’t turn up anything either. Others have asked about it too, but no answer. I would ask your AWS account manager.

Shaun Wang avatar
Shaun Wang

Is that the name of your distribution ?

matt avatar

I found a document (written mostly in Chinese) that seems to indicate that this is Out of Cycle Billing. I noticed here that they refer to “out-of-cycle” charges which are any subscription charges that aren’t upfront reservation charges or support charges.

Filtering the data that you want to view - AWS Cost Management

With Cost Explorer, you can filter how you view your AWS costs by one or more of the following values:

Wédney Yuri avatar
Wédney Yuri

Thanks, I’m still waiting for a reply from the AWS Account Manager. Inside the bill it’s described as “Missed CloudFront Term Commit Charge - April 2020 - March 2021”. I’ll let you know when I have more info

Mike Crowe avatar
Mike Crowe

Anybody using Leapp and in-browser? I started SSO using Brave, switched my default browser to Firefox and Leapp keeps sending me thru Brave to re-authenticate

RB (Ronak) (Cloud Posse) avatar
RB (Ronak) (Cloud Posse)

i use leapp and in browser i login to my identity account using saml login via gsuite idp

RB (Ronak) (Cloud Posse) avatar
RB (Ronak) (Cloud Posse)

then i switch roles using aws extend iam roles switcher firefox addon

2022-02-05

Nick Kocharhook avatar
Nick Kocharhook

I’m on AWS and using a bunch of cloudposse modules. I have gotten CodePipeline working with ecs-web-app and Fargate, but it only has 3 stages. I want a fourth stage where I can use the new code before approving and sending it live.

I see several threads here from the past pointing users toward Codefresh instead of CodePipeline for anything beyond the standard 3-stage pipeline, but nothing for the past few years. It being 2022 now, I wanted to check and see if that’s still the recommended approach. It looks like this is an example app using this setup.

GitHub - cloudposse/example-app: Example application for CI/CD demonstrations of Codefresh

Example application for CI/CD demonstrations of Codefresh - GitHub - cloudposse/example-app: Example application for CI/CD demonstrations of Codefresh

Nick Kocharhook avatar
Nick Kocharhook

FTR, I did end up switching to Codefresh. It’s got several nice features, including the “wait for approval” step I wanted. If anyone else wants to use it, I recommend starting here:

https://codefresh.io/docs/docs/yaml-examples/examples/amazon-ecs/#create-a-cicd-pipeline-for-ecsfargate

NB: the “ecs-deploy” step from the marketplace doesn’t let you provide an image tag! So use the step provided in the link above instead.

You can begin by coding your pipeline yaml in their inline editor, and then switch to a file checked into your Git repo.

Approval

How to Pause Pipelines for Manual Approval

Amazon ECS/Fargate

How to use Codefresh to deploy Docker containers to ECS/Fargate

Creating Pipelines

How to define Pipelines in Codefresh

Nick Kocharhook avatar
Nick Kocharhook

One thing to be aware of: Codefresh does some behind-the-scenes ECR integration for you. If you change your access key credentials, this can be a problem. Read up on it in their docs.

Amazon EC2 Container Registry

Learn how to use the Amazon Docker Registry in Codefresh

Nick Kocharhook avatar
Nick Kocharhook

I am now using Codefresh along with ecs-web-app for deployment. codepipeline_enabled is set to false. Codefresh works well, and my task definition gets updated and service re-deployed when I push to GitHub.

However, now when I make a change to Terraform and then plan and apply, the task definition gets changed. The new TD assumes that the image will be tagged with its sha, which is different from the tag I’m using on Codefresh (the short commit rev).

I haven’t set ignore_changes_task_definition, but that’s true by default. Shouldn’t this prevent the task definition from changing? If not, how do I achieve that? I am happy for the service to be managed by ecs-web-app, but I need Codefresh to be in charge of ECR and the task definition.

Nick Kocharhook avatar
Nick Kocharhook

I suppose an alternative approach would be to have Codefresh tag the images with the sha256 as well. Feels pretty hacky, though.

Nick Kocharhook avatar
Nick Kocharhook

I did a search and discovered another thread from a few months back on this same issue. The advice was succinct. https://sweetops.slack.com/archives/CB6GHNLG0/p1631503151175000?thread_ts=1631287236.173300&cid=CB6GHNLG0

I would use the web app module more as a reference for how to tie all the other modules together

2022-02-06

Robert avatar

Hi there. Does anyone have any experience yet with deploying AWS KMS “multi-region kms key” with EKS clusters? In other words could a multi-region kms key be used for a cluster in us-east-1 and also for another cluster in us-west-2? I am tasked with some research into whether or not this a) even works and b) is an attractive solution. We are currently running a “live” cluster in us-east-1 and a “backup” cluster in us-west-2. Could we run both using the multi-region kms key as the cluster encryption key? Has CP covered this yet with a module, by chance?

Constantine Kurianoff avatar
Constantine Kurianoff

My answer is neither EKS- nor Cloud Posse-specific: the multi-region KMS key works in a way that

• you create a “master” key (aws_kms_key) in us-east-1 and set the “multi_region” parameter to “true”.

• you then create a KMS key replica (aws_kms_key_replica) in another region and point it to master_key_arn (or id, please check the terraform docs). The key material is copied from the master key to the replica, and you can use the master in the first region and the replica in the second. The advantage is that data encrypted by one key can be decrypted by the other, because they use the same key material. If that is what you need, then master/replica is your solution. There’s no other way that I know of to use the same key across regions. But I may be wrong.

Robert avatar

Thanks Constantine. All your background info is totally correct per my additional research. In our case it turned out that MRKs don’t make sense at this point because we have a very predictable “active/passive” design with respect to regions. MRKs, it seems, would have much greater appeal if we had live applications in both regions with data and client apps present in both. In our case we replicate data to the non-active region, but the client apps only ever fail over when we are reacting to AWS outages or other DR/BC activities.


2022-02-07

Almondovar avatar
Almondovar

Hi colleagues, does anyone know how we can configure our s3 buckets so we can avoid the rate exceeded error?

ThrottlingException: Rate exceeded
        status code: 400, request id
Zach avatar

S3 throttles based on prefix, so you can restructure your bucket storage to distribute the objects across smaller prefixes.
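Since the request-rate limit is per prefix, a common workaround is to fan hot objects out across many prefixes with a short deterministic hash. A minimal sketch; the two-hex-character fan-out (256 prefixes) is an arbitrary choice:

```python
import hashlib

def spread_key(key, width=2):
    """Prepend a short, deterministic hash prefix so requests fan out
    across many S3 prefixes, each of which gets its own request-rate
    budget (roughly 3,500 writes / 5,500 reads per second)."""
    h = hashlib.md5(key.encode()).hexdigest()[:width]
    return f"{h}/{key}"

print(spread_key("env/prod/terraform.tfstate"))
```

The tradeoff is that listing keys by their original logical prefix gets harder, which is why this only helps when the naming scheme is under your control.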

Almondovar avatar
Almondovar

Thank you for your response. I am afraid I can’t do that easily; I am actually using driftctl to scan terraform states that reside in these S3 buckets

Zach avatar

uhhhh

Zach avatar

you may have an application issue in that case

Zach avatar

you’re allowed 3,500 PUT and 5,500 GET per second, per prefix

Zach avatar

might be a KMS error instead? That’s still like 1000 per second

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

can you configure driftctl to skip iterating the bucket? with that many objects there’s a 99.9% chance that you will have 99.9% drift

2022-02-08

Almondovar avatar
Almondovar

Hi colleagues, we have an issue where two REST API services cannot establish a connection. Our REST API is deployed in AWS API Gateway; it was created as an HTTPS endpoint with an API key. When I try to connect via Postman (only the https address and x-api-key are set) from the company’s intranet (and also tested by tethering with our mobile’s 4G internet), it works. But when the NB-IoT supplier tries to connect, they receive “403 Forbidden”. CloudWatch logs show only our connections, whether correct or not (e.g. if we try to connect without the key). We don’t have any WAF configured to protect the API Gateway. Can someone point me to the proper troubleshooting steps please?

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

@matt

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)
GitHub - cloudposse/terraform-aws-api-gateway

Contribute to cloudposse/terraform-aws-api-gateway development by creating an account on GitHub.

Grummfy avatar
Grummfy

if the logs of API Gateway show only your connection, try to see if your supplier is in the same area as you, or try using a VPN to see if it’s not something else.

2022-02-09

greg n avatar

Is it possible that these metrics should be different for ECS Fargate tasks, i.e. ECS/ContainerInsights vs the AWS/ECS namespace? https://github.com/cloudposse/terraform-aws-ecs-cloudwatch-sns-alarms/blob/master/main.tf#L57-L58

  ~ resource "aws_cloudwatch_metric_alarm" "cpu_utilization_high" {
      - datapoints_to_alarm       = 2 -> null
        id                        = "linius-services-qa-assembly-cpu-utilization-high"
      ~ metric_name               = "CpuUtilized" -> "CPUUtilization"
      ~ namespace                 = "ECS/ContainerInsights" -> "AWS/ECS"
        tags                      = {}
        # (15 unchanged attributes hidden)
    } 

I’m having a hard time finding / making sense of the difference but alarms aren’t firing as I might have expected with AWS/ECS?

terraform-aws-ecs-cloudwatch-sns-alarms/main.tf at master · cloudposse/terraform-aws-ecs-cloudwatch-sns-alarms

Terraform module to create CloudWatch Alarms on ECS Service level metrics. - terraform-aws-ecs-cloudwatch-sns-alarms/main.tf at master · cloudposse/terraform-aws-ecs-cloudwatch-sns-alarms

Balazs Varga avatar
Balazs Varga

hello all. We have a blackbox exporter to monitor our apps behind AWS ELB (classic currently). Sometimes I see an i/o timeout on the remote app side and no logs on our ELB side. Any idea?

2022-02-10

David avatar

I am seeing quite a bit of InsufficientInstanceCapacity when I try to launch EC2s into a randomly chosen subnet of a VPC. Is there a way I can tell AWS during a RunInstances call (or similar) that I want to allow AWS to use any subnet within the VPC? It seems like my only option to put the EC2 into a VPC is to specify the SubnetId param, but ideally I’d like to pass a list of acceptable subnets, not just one
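RunInstances only accepts a single SubnetId (EC2 Fleet and ASGs can span subnets natively), so one client-side approach is simply retrying across the candidate subnets. A sketch with the launch call injected as a function, so the retry logic runs without touching AWS:

```python
import random

class InsufficientCapacity(Exception):
    """Stand-in for the InsufficientInstanceCapacity API error."""

def launch_in_any_subnet(subnet_ids, launch_fn, rng=random):
    """Try each subnet (in random order) until one launch succeeds.

    `launch_fn(subnet_id)` wraps the real RunInstances call and must
    raise InsufficientCapacity when that AZ is out of capacity.
    """
    subnets = list(subnet_ids)
    rng.shuffle(subnets)
    last_err = None
    for subnet in subnets:
        try:
            return launch_fn(subnet)
        except InsufficientCapacity as err:
            last_err = err
    raise last_err or InsufficientCapacity("no subnets supplied")

# Fake launcher for illustration: only subnet-c has capacity.
def fake_launch(subnet_id):
    if subnet_id != "subnet-c":
        raise InsufficientCapacity(subnet_id)
    return {"InstanceId": "i-123", "SubnetId": subnet_id}

print(launch_in_any_subnet(["subnet-a", "subnet-b", "subnet-c"], fake_launch))
```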

z0rc3r avatar

I’d recommend running instances via Auto Scaling Groups, so finding available capacity happens automatically.

z0rc3r avatar

Also additional handling is required; maybe you just hit an account limit and need to raise it via support.

Wédney Yuri avatar
Wédney Yuri

+1 for ASG. I also recommend you enable more than a single instance type.

David avatar

That makes sense, thank you both. These are single-use CI runners, but I guess I can try launching an autoscaling group with 1 max instance just for the better support

awl avatar

Has anyone found a tool that can facilitate mass migration of data from one tier of Glacier to the other?

awl avatar

Sort of. We have some files in legacy Glacier (now known as Flex) and we want to move to the new Glacier tier called Instant Retrieval. They were/are being sent to Glacier via those policies now, but in this case we are looking to move existing files between glacier tiers.

Our AWS contacts have explained that it isn’t as easy as a simple transition. But I figured with the new Glacier tiers, we were probably not the only ones who want to move out of flex.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Aha interesting! Good one for #office-hours

jose.amengual avatar
jose.amengual

I remember looking into this a while back and at the time there was no way to move between tiers

jose.amengual avatar
jose.amengual

what I was supposed to do was pull EVERYTHING from Glacier and start over, pushing it to the right tier

loren avatar

Geez, it is really difficult to completely block public ingress! Any instance with a public IP is reachable, even with no IGW, and if that instance does have a route out an IGW/NAT gateway, the return traffic will succeed as well… https://twitter.com/__steele/status/1490566494517825547?t=WAlMvmEq8zOXSdRL8HCiAQ&s=19

attachment image

In the following AWS VPC architectures, can you send data from:

A -> B? B -> A?

A -> C? C -> A?

What path do the packets (attempt to) take? What about the return path?

The answers genuinely surprised me and has got me questioning my mental model of VPCs. https://pbs.twimg.com/media/FK-FhvmVgAAk56q.jpg

jose.amengual avatar
jose.amengual

well, if you have an SCP that does not allow public IPs and you deploy only private subnets without the 0.0.0.0/0 route, it should work, no?


loren avatar

And what is that SCP, please tell?!?

jose.amengual avatar
jose.amengual
Is there AWS SCP to block users from creating elastic IP EC2?

Can an AWS organization account stop or block child AWS accounts from launching public AWS EC2 instances having elastic ip? Would like those users within those accounts to launch only private ec2(s…

jose.amengual avatar
jose.amengual
Example SCPs for Amazon Virtual Private Cloud (Amazon VPC) - AWS Organizations

Examples in this category This SCP prevents users or roles in any affected account from deleting Amazon Elastic Compute Cloud (Amazon EC2) flow logs or CloudWatch log groups or log streams. This SCP prevents users or roles in any affected account from changing the configuration of your Amazon EC2 virtual private clouds (VPCs) to grant them direct access to the internet. It doesn’t block existing direct access or any access that routes through your on-premises network environment.

jose.amengual avatar
jose.amengual
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Deny",
      "Action": [
        "ec2:AttachInternetGateway",
        "ec2:CreateInternetGateway",
        "ec2:CreateEgressOnlyInternetGateway",
        "ec2:CreateVpcPeeringConnection",
        "ec2:AcceptVpcPeeringConnection",
        "globalaccelerator:Create*",
        "globalaccelerator:Update*"
      ],
      "Resource": "*"
    }
  ]
}
jose.amengual avatar
jose.amengual

that will apply to new resources

loren avatar

Is the first answer in that stackoverflow a joke? What is an aws config SCP?

jose.amengual avatar
jose.amengual

but if it is a new account it will work

jose.amengual avatar
jose.amengual

yes I think that is a new product that person invented

jose.amengual avatar
jose.amengual

it is an alpha

jose.amengual avatar
jose.amengual

lol

loren avatar

The second answer gets closer. I’ll try it tomorrow. Previously I had only seen that condition on ec2:RunInstances, but that’s not nearly enough

jose.amengual avatar
jose.amengual

I’m working on an environment that it is exactly like that

jose.amengual avatar
jose.amengual

no internet at all

jose.amengual avatar
jose.amengual

vpc endpoints for everything

loren avatar

Yeah, same. Just not greenfield. And when you miss something like this, and think it’s ok cuz there is no IGW anyway, going back through hundreds of accounts and reviewing network captures to understand what may or may not have transpired is really not fun

loren avatar

I was really hoping the ingress nacl option on the subnets would work, but that broke all sorts of things in really fun and exciting and ticket-creating ways

jose.amengual avatar
jose.amengual
Amazon Web Services - Automated Evidence for SOC 2, HIPAA and more!

Tugboat Logic integrates with AWS to automatically collect security evidence for your security audits like SOC 2, ISO 27001 and more!

loren avatar

Played with this some more, and still cannot find a good IAM policy (or SCP)… the second stackoverflow answer suggested a statement on the actions:

ec2:AllocateAddress
ec2:AssociateAddress
ec2:CreateNetworkInterface

using the condition key:

ec2:AssociatePublicIpAddress

but! none of those actions support that condition key. these are the actions that support the condition key:

CreateFleet
CreateNetworkInsightsPath
CreateTags (???)
DeleteNetworkInterfacePermission
ModifyFleet
ReplaceRoute
RequestSpotInstances
RunInstances
RunScheduledInstances

but you can’t use it on:

StartInstances
CreateLaunchTemplate
CreateLaunchTemplateVersion

and this is just ec2. there are plenty of other services you can deploy into a vpc and that could get a public IP…

jose.amengual avatar
jose.amengual

that sucks


2022-02-11

Don avatar

Hi All, I’m trying to use https://github.com/cloudposse/terraform-aws-s3-bucket/releases/tag/0.47.0 but when I run terraform with s3_replication_enabled = false , I get the following error: 168: for_each = local.s3_replication_rules == null ? [] : local.s3_replication_rules , I don’t understand what the module is expecting

Release v0.47.0 · cloudposse/terraform-aws-s3-bucket

Enhancements Feat: Enable Replication Metrics @max-lobur (#116) what Enable replication metrics by default Allow override via variables AWS provider requirements update due to: hashicorp/ter…

RB (Ronak) (Cloud Posse) avatar
RB (Ronak) (Cloud Posse)

this is due to the 4.x provider


RB (Ronak) (Cloud Posse) avatar
RB (Ronak) (Cloud Posse)

add a new file locally to pin it

RB (Ronak) (Cloud Posse) avatar
RB (Ronak) (Cloud Posse)

You can pin the provider version by updating versions.tf in the component to:

terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 3.0"
    }
  }
}
DaniC (he/him) avatar
DaniC (he/him)

anyone here has switched from the default 1:1 lambda <> cw log group mapping to N lambdas -> 1 single CW log group -> 1 subscription filter ?

the use case is v simple: consolidate all the logs in one place and then ship them out to 3rd party aggregation instead of having 1 filter per CW group (which goes messy if you have loads of lambdas, not to mention $$)
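Lambda’s log group name is fixed at /aws/lambda/&lt;function&gt; (there was no way to point multiple functions at one group at the time), so one pattern is to keep the 1:1 groups but generate the per-group subscription filters toward a single destination programmatically. A sketch with hypothetical names; the dict keys match CloudWatch Logs’ put_subscription_filter parameters:

```python
def subscription_filters(function_names, destination_arn, role_arn):
    """One subscription filter per Lambda log group, all pointing at a
    single destination (e.g. a Kinesis/Firehose stream that forwards to
    the third-party aggregator). Returns kwargs dicts suitable for
    CloudWatch Logs' put_subscription_filter call."""
    return [
        {
            "logGroupName": f"/aws/lambda/{name}",
            "filterName": "ship-to-aggregator",
            "filterPattern": "",  # empty pattern == ship everything
            "destinationArn": destination_arn,
            "roleArn": role_arn,
        }
        for name in function_names
    ]

# Hypothetical function names and ARNs:
filters = subscription_filters(
    ["svc-a", "svc-b"],
    "arn:aws:kinesis:us-east-1:111111111111:stream/logs",
    "arn:aws:iam::111111111111:role/cwlogs-to-kinesis",
)
print(len(filters))
```

This doesn’t reduce the filter count, but it does make the sprawl manageable and keeps a single downstream shipper instead of one per group.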

Don avatar

Hi All, what is the recommended way for EKS pods to CRUD on S3 buckets ? Is this https://github.com/cloudposse/terraform-aws-iam-chamber-s3-role/tree/0.7.5 still being supported ?

2022-02-12

Nishant avatar
Nishant
How to Get Free AWS Credits

AWS is generous to offer credits to cloud engineers and startups. Experiment or bootstrap your startup with AWS credits.

2022-02-14

Almondovar avatar
Almondovar

Hi colleagues, has anyone applied the Fail2ban plugin for traefik on EKS? any issues with it? is it production ready? something that we need to have in mind during its implementation? thanks!

Shreyank Sharma avatar
Shreyank Sharma

Hi all, we have an Elastic Beanstalk instance; the platform is Tomcat 8.5 with Java 8 running on 64-bit Amazon Linux.

Everything was working fine, but recently if we try to access the endpoint we randomly get Service Unavailable.

When I downloaded the logs by clicking on request logs under elasticbeanstalk-error_log, I could see the following:

[Mon Feb 14 10:00:58.338035 2022] [proxy:error] [pid 14882:tid 139757313533696] (13)Permission denied: AH02454: HTTP: attempt to connect to Unix domain socket /var/run/httpd/ (localhost) failed
[Mon Feb 14 10:00:58.338078 2022] [proxy_http:error] [pid 14882:tid 139757313533696] [client <private-ip-here>:12566] AH01114: HTTP: failed to make connection to backend: httpd-UDS, referer: http://<custom-end-point>/1/<name.jsp>?s=sec$$4P!&refresh=300
[Mon Feb 14 10:43:40.663468 2022] [proxy:error] [pid 14882:tid 139757120071424] (13)Permission denied: AH02454: HTTP: attempt to connect to Unix domain socket /var/run/httpd/ (localhost) failed
[Mon Feb 14 10:43:40.663518 2022] [proxy_http:error] [pid 14882:tid 139757120071424] [client <private-ip-here>:21136] AH01114: HTTP: failed to make connection to backend: httpd-UDS

What more can I check to understand and solve this issue? Any help is appreciated, thank you.

He Qing avatar
He Qing

Error: Unsupported attribute
  on main.tf line 16, in locals:
  16: info.stage
This object does not have an attribute named "stage".

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Best to use atmos

He Qing avatar
He Qing

atmos terraform apply account --stack gbl-root

He Qing avatar
He Qing

atmos terraform deploy account-map --stack gbl-root

He Qing avatar
He Qing

in terraform-aws-components, it looks like we should declare ‘stage’ in account, which is needed by account-map.

He Qing avatar
He Qing

maybe, when we don’t declare ‘stage’ in account, it should default to the account name.

2022-02-15

2022-02-16

Stephen Bennett avatar
Stephen Bennett

Hi, I’m trying to use the cloudposse modules to deploy CloudTrail (with org enabled) but I’m getting errors when I use a custom KMS key (also deployed via the cloudposse module). It looks to be an issue with IAM needing extra permissions. Am I missing something in the module to resolve this, or do I need to create a custom IAM policy and attach it?

Error: Error creating CloudTrail: InsufficientEncryptionPolicyException: Insufficient permissions to access S3 bucket organizationalcloudtrail or KMS key

I’d like to log to CloudWatch as well, but again I’m not sure if I should be creating my own IAM role to attach for cloud_watch_logs_role_arn

Thanks for any help

2022-02-17

Frank avatar

Does anyone know whether it is possible to use a central OIDC provider within member accounts? The aws sts assume-role-with-web-identity call supports a --provider-id flag, but it seems to be only for OAuth 2.0. Or should each member account have its own OIDC provider configured?

2022-02-22

Shreyank Sharma avatar
Shreyank Sharma

Hi all, we have been using a webserver with Elastic Beanstalk since 2019; the platform is Tomcat 8.5 with Java 8 running on 64-bit Amazon Linux, with Apache as the proxy. Recently (from Jan 30th) we started getting Service Unavailable issues at the endpoint from time to time, and if we refresh 2-3 times it resolves on its own.

Then I downloaded the full logs. Under elasticbeanstalk-error_log I can see:

[Mon Feb 21 10:00:58.338035 2022] [proxy:error] [pid 14882:tid 139757313533696] (13)Permission denied: AH02454: HTTP: attempt to connect to Unix domain socket /var/run/httpd/ (localhost) failed
[Mon Feb 21 10:00:58.338078 2022] [proxy_http:error] [pid 14882:tid 139757313533696] [client <private-ip-here>:12566] AH01114: HTTP: failed to make connection to backend: httpd-UDS, referer: http://<custom-end-point>/1/<name.jsp>?s=sec$$4P!&refresh=300
[Mon Feb 21 10:43:40.663468 2022] [proxy:error] [pid 14882:tid 139757120071424] (13)Permission denied: AH02454: HTTP: attempt to connect to Unix domain socket /var/run/httpd/ (localhost) failed
[Mon Feb 21 10:43:40.663518 2022] [proxy_http:error] [pid 14882:tid 139757120071424] [client <private-ip-here>:21136] AH01114: HTTP: failed to make connection to backend: httpd-UDS

repeated multiple times from Jan 30th. When I look at access.log, I can see a 503 logged at exactly the same time as the permission-denied errors in elasticbeanstalk-error_log. I also looked at the running processes using ps -aux | grep httpd and ps -aux | grep tomcat; both have been running since 2019 with no restarts. What more can I do to troubleshoot this?

thanks

venkata.mutyala avatar
venkata.mutyala

Has the application version changed or anything else about the environment?

It sounds like the app is crashing periodically and since the app is crashing the health check may be failing and then the load balancer returns a 503 since you have no healthy nodes.

Shreyank Sharma avatar
Shreyank Sharma

Thanks for the reply @venkata.mutyala. If I run ps -aux, both the httpd and tomcat processes show their START as 2019.

venkata.mutyala avatar
venkata.mutyala

Any recent changes to the environment? New code? New settings?

Shreyank Sharma avatar
Shreyank Sharma

No changes to environment.

venkata.mutyala avatar
venkata.mutyala

I assume you went through all the log files that beanstalk provided in the console?

venkata.mutyala avatar
venkata.mutyala

Have you considered replacing a node and seeing if that resolves the issue? Not saying it prevents it from happening again but given the immutable nature of beanstalk I would be curious to know if it helps.

Marcelo avatar
Marcelo

see your HD usage: df -h

see your free memory: free -m

see your CPU usage: htop or top

Nishant avatar
Nishant

We all know AWS regions have disparity in service hosting. As of today, us-east-1 (N. Virginia) hosts a max of 306 services while ap-northeast-3 (Osaka) has only 127 services hosted. I needed to answer, for any given AWS service which regions host the service, what services are hosted in a given region. Found a few ways. https://www.cloudyali.io/blogs/how-to-find-all-regions-an-aws-service-is-available

How to find all regions an AWS Service is available

Not all AWS Services are available in all AWS regions. Simple ways you can find all AWS regions AWS services are available.

kevcube avatar
kevcube

Is anyone using AWS SSM Session manager to enable devs to connect to a staging RDS instance, and NOT using ssh keys/connections managed through SSM?

kevcube avatar
kevcube

Every blog post I see recommends using ec2-instance-connect to put SSH keys onto the instance, then opening an SSH tunnel to the instance to access RDS

I would ideally like to just use Session Manager to establish the tunnel if that’s possible

Oriol Fabregas avatar
Oriol Fabregas

Yes, we do. We use SSM, but it is still an SSH document being executed, temporarily, to upload your public SSH key to the remote host. Then the connection is established with SSM.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Hrmm… the whole point IMO with session manager is not to use SSH keys, but IAM.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)
AWS SSM Session Manager Port Forwarding to RDS without SSH

How-to use Session Manager to establish a secure connection to an RDS running in a private subnet

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

There are cli’s that make it even easier

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)
claranet/sshm

Easy connect on EC2 instances thanks to AWS System Manager Agent. Just use your ~/.aws/profile to easily select the instance you want to connect on.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Search slack for “sshm” and you’ll find related conversations
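For the RDS case kevcube asked about, the approach in the linked article reduces to a single start-session call against an SSM-managed instance, using the AWS-StartPortForwardingSessionToRemoteHost document. A sketch building that command; the instance ID and endpoint are placeholders:

```python
import json
import shlex

def ssm_rds_tunnel(instance_id, rds_endpoint, remote_port=5432, local_port=5432):
    """Build the `aws ssm start-session` command that forwards
    localhost:<local_port> through an SSM-managed instance to RDS --
    IAM auth only, no SSH keys involved."""
    params = {
        "host": [rds_endpoint],
        "portNumber": [str(remote_port)],
        "localPortNumber": [str(local_port)],
    }
    return shlex.join([
        "aws", "ssm", "start-session",
        "--target", instance_id,
        "--document-name", "AWS-StartPortForwardingSessionToRemoteHost",
        "--parameters", json.dumps(params),
    ])

# Placeholder identifiers:
cmd = ssm_rds_tunnel("i-0123456789abcdef0",
                     "mydb.xxxx.us-east-1.rds.amazonaws.com")
print(cmd)
```

Once the session is up, any SQL client can connect to localhost:5432 as if the database were local.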

Eyal avatar

Hi, I have a question about VPC peering using multiple state files. We have two VPCs (vpc1 and vpc2) that we want to create peering. The peering was created successfully from vpc2 and I can resolve R53, connect to instances on both VPCs, etc. The problem is that when I run terraform plan from vpc1 directory, it doesn’t recognize the route tables added by vpc2 and wants to remove them. How can I get this to work? thanks!

venkata.mutyala avatar
venkata.mutyala

Have you used remote states?

Eyal avatar

yes, I used remote state in vpc2 to create the peering.. do I need to use something similar in vpc1?

venkata.mutyala avatar
venkata.mutyala

So the way I have done it in the past is to create the VPCs first and then run the peering/route-table creation. Basically have the VPCs live in one or more of their own states, and then do all the peering/route tables independent of any VPC creation. Does this make sense? VPC1/VPC2 first, then VPC peering/routes.

Eyal avatar

I am using the cloudposse vpc_peering module, which already creates the routes in both VPCs (both VPCs are in the same AWS account). Should I create the vpc_peering in VPC2 and then create the routes manually in VPC1?

(should I ask this in the terraform-aws-modules channel?)
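
As a quick way to see which routes actually reference the peering connection in each VPC (and so which state should own them), you can filter route tables from the CLI. The IDs below are placeholders, and the command is shown behind an `echo` as a dry run:

```shell
# Placeholders: substitute your VPC ID and peering connection ID.
VPC_ID="vpc-11111111"
PCX_ID="pcx-aaaaaaaa"

# Dry run: remove the leading echo to query for real.
echo aws ec2 describe-route-tables \
  --filters "Name=vpc-id,Values=$VPC_ID" "Name=route.vpc-peering-connection-id,Values=$PCX_ID" \
  --query "RouteTables[].RouteTableId"
```

Running it against both VPCs shows whether the module created routes on each side, which helps explain why the plan in the other state wants to remove them.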

1
sjl2024 avatar
sjl2024

Hello all! At the risk of sounding obvious, why is it a best practice from a compliance/ops standpoint to put all S3 buckets into their own AWS account? I ask because it sort of breaks the Terraform mold of working with app-specific buckets (where normally you’d put them under the “dev” or “prod” account, as opposed to in “artifacts”)

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)
Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

We all know AWS regions have disparity in service hosting. As of today, us-east-1 (N. Virginia) hosts a max of 306 services, while ap-northeast-3 (Osaka) hosts only 127. I needed to answer, for any given AWS service, which regions host it, and for any given region, which services it hosts. Found a few ways. https://www.cloudyali.io/blogs/how-to-find-all-regions-an-aws-service-is-available
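
One of the approaches in that post reads the public SSM parameter tree, which exposes per-service region availability. A sketch, with `eks` chosen as an example service and the command shown behind an `echo` as a dry run:

```shell
# The global-infrastructure parameter tree lists the regions each service is in.
SERVICE="eks"   # any service short name, e.g. eks, rds, lambda
SSM_PATH="/aws/service/global-infrastructure/services/$SERVICE/regions"

# Dry run: remove the leading echo to query for real.
echo aws ssm get-parameters-by-path \
  --path "$SSM_PATH" \
  --query "Parameters[].Value" --output text
```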

2022-02-23

2022-02-24

Andrea Cavagna avatar
Andrea Cavagna

Hi Everyone! We just released the brand-new UX/UI of Leapp, the local desktop app that helps you manage AWS credentials:

https://twitter.com/a_cava94/status/1496846237722632196

check it out and let me know what you think!

Define way #developers manage Cloud credentials locally.

This is why we create Leapp.

This is why we make it #opensource.

Today with a brand new UX/UI

A thread

4
venkata.mutyala avatar
venkata.mutyala

Cool. I’ll check this out!

Andrea Cavagna avatar
Andrea Cavagna

Thanks awesome! let me know

Jeremy (UnderGrid Network Services) avatar
Jeremy (UnderGrid Network Services)

like the fresh new look

Andrea Cavagna avatar
Andrea Cavagna

Thank you! We tried to make it feel more like a desktop app, with right-click and double-click actions

RB (Ronak) (Cloud Posse) avatar
RB (Ronak) (Cloud Posse)

cool can’t wait to try it

Jeremy (UnderGrid Network Services) avatar
Jeremy (UnderGrid Network Services)

I’ve got it set up for both IAM User and SSO for 10 accounts

RB (Ronak) (Cloud Posse) avatar
RB (Ronak) (Cloud Posse)

the changelog for 0.9.0 looks promising https://www.leapp.cloud/releases

Leapp - Releases

Leapp grants to the users the generation of temporary credentials only for accessing the Cloud programmatically.

Jeremy (UnderGrid Network Services) avatar
Jeremy (UnderGrid Network Services)

It has been making working with multiple AWS accounts so much easier

1
Andrea Cavagna avatar
Andrea Cavagna

That’s great

RB (Ronak) (Cloud Posse) avatar
RB (Ronak) (Cloud Posse)

hmm, it now takes 2 clicks to log in

RB (Ronak) (Cloud Posse) avatar
RB (Ronak) (Cloud Posse)

also is dark mode on the roadmap for the UI ? it’s very white on my dark screen

Andrea Cavagna avatar
Andrea Cavagna

but with a double click it starts straight away, haha

Andrea Cavagna avatar
Andrea Cavagna

I’m a big fan of dark mode. I think themes will be added soon, in a weekend of contributing

RB (Ronak) (Cloud Posse) avatar
RB (Ronak) (Cloud Posse)

that weekend of contributing would be a month for me

RB (Ronak) (Cloud Posse) avatar
RB (Ronak) (Cloud Posse)

i did create a dark mode ticket though for those weekend contributors

Andrea Cavagna avatar
Andrea Cavagna

Great idea

Balazs Varga avatar
Balazs Varga

does anyone know when Aurora Serverless v2 will go live?

2022-02-25

Wilson Mar avatar
Wilson Mar

This may not be the place to bring up the issue, but on https://registry.terraform.io/modules/cloudposse/kms-key/aws/latest we see a yellow tag in:

RB (Ronak) (Cloud Posse) avatar
RB (Ronak) (Cloud Posse)

most likely because enable rotation isn’t true by default
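
If the flag is indeed about rotation, enabling it on an existing customer-managed key is a single call; the key ID below is a placeholder, and the command is shown behind an `echo` as a dry run:

```shell
# Placeholder: substitute your CMK's key ID or ARN.
KEY_ID="1234abcd-12ab-34cd-56ef-1234567890ab"

# Dry run: remove the leading echo to enable automatic rotation for real.
echo aws kms enable-key-rotation --key-id "$KEY_ID"
```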

RB (Ronak) (Cloud Posse) avatar
RB (Ronak) (Cloud Posse)
cloudposse/terraform-aws-kms-key

Terraform module to provision a KMS key with alias

RB (Ronak) (Cloud Posse) avatar
RB (Ronak) (Cloud Posse)

cc: @Yonatan Koren (Codefresh) I can never load Bridgecrew properly; it shows up empty for me. Can you see the error?

Yonatan Koren (Codefresh) avatar
Yonatan Koren (Codefresh)

@RB (Ronak) (Cloud Posse) no it’s blank for me as well. I’ve had some issues with the BC UI before.

Wilson Mar avatar
Wilson Mar

Who should see the above?

Eric Berg avatar
Eric Berg

better to ask this in #kubernetes

2022-02-26

Brij S avatar

Hi all, I’m attempting to use eksctl within GitLab pipelines. Currently I’m using the alpine/k8s image, but I’m running into the following error:

Error: checking AWS STS access – cannot get role ARN for current session: RequestError: send request failed
caused by: Post "<https://sts.None.amazonaws.com/>": dial tcp: lookup sts.None.amazonaws.com on 172.20.0.10:53: no such host 

I’m just using regular IAM user keys as env vars in the project. I don’t understand why it’s looking for a role ARN here. This post details the problem, but the solutions all seem like one-offs. https://github.com/weaveworks/eksctl/issues/1408

I’m getting started with EKS and I’m following Amazon guide
https://docs.aws.amazon.com/en_pv/eks/latest/userguide/getting-started-eksctl.html

I’ve defined an Administrator user under the Administrators group with the following policy

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "*",
      "Resource": "*"
    }
  ]
}

and with it I’ve set

$ cat ~/.aws/credentials                                                                                                                               
[default]
aws_access_key_id = ___
aws_secret_access_key = ___

$ cat ~/.aws/config                                                                                                                                    
[default]
region = us-west-2
output = json

I’m able to

$ aws eks list-clusters
{
    "clusters": []
}

$ aws sts get-caller-identity
{
    "UserId": "___",
    "Account": "___",
    "Arn": "arn:aws:iam::___:user/Administrator"
}

but the following fails

$ eksctl create cluster \
    --name prod \
    --version 1.14 \
    --nodegroup-name standard-workers \
    --node-type t3.medium \
    --nodes 3 \
    --nodes-min 1 \
    --nodes-max 4 \
    --node-ami auto
[ℹ]  using region us-west-2
[✖]  checking AWS STS access – cannot get role ARN for current session: RequestError: send request failed
caused by: Post <https://sts.amazonaws.com/>: net/http: TLS handshake timeout

$ eksctl get cluster
2019-10-06T22:58:51-07:00 [✖]  checking AWS STS access – cannot get role ARN for current session: RequestError: send request failed
caused by: Post <https://sts.amazonaws.com/>: net/http: TLS handshake timeout

What am I missing?

managedkaos avatar
managedkaos

I will admit, I haven’t used eksctl, nor have I followed the link to see the proposed solutions. So take that as you will :sweat_smile:

With that said, when I see None in something like this:

<https://sts.None.amazonaws.com/>

I start thinking that the AWS_DEFAULT_REGION environment variable is not set.

Most AWS services are tied to a regional endpoint, and I would assume the CLIs build the URL based on the AWS_DEFAULT_REGION env var.

I would recommend checking your configuration to make sure you are setting AWS_DEFAULT_REGION correctly for where your services are located.
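
A minimal check along those lines, runnable at the top of the CI job (us-east-1 is an assumed fallback; use whichever region the cluster actually lives in):

```shell
# Fall back to an explicit region if the environment didn't provide one;
# eksctl and the AWS CLI both honor AWS_DEFAULT_REGION / AWS_REGION.
export AWS_DEFAULT_REGION="${AWS_DEFAULT_REGION:-us-east-1}"
echo "Using region: $AWS_DEFAULT_REGION"
```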

Brij S avatar
eksctl:create:
  stage: deploy
  script:
    - aws configure set default.region us-east-1; aws configure set aws_access_key_id $AWS_ACCESS_KEY_ID ; aws configure set aws_secret_access_key $AWS_SECRET_ACCESS_KEY
    - if ! eksctl get cluster -n $CLUSTER_NAME ;then eksctl create cluster -f config.yaml ; else eksctl update cluster -f  config.yaml ; fi
  variables:
    CLUSTER_NAME: npd
    AWS_DEFAULT_REGION: us-east-1

I have tried the default region - no luck

2022-02-28

Steven Kalt avatar
Steven Kalt

Hi, does anyone know how to list the values of aws:PrincipalTag/* associated with an identity? I’m trying to debug a POC, non-load-bearing AWS SSO connection with JumpCloud and would love some more visibility into what’s going on.

loren avatar

i think they’re visible in CloudTrail events?

loren avatar

otherwise, i’ve also snagged the saml assertion using browser dev tools and decoded it to see what is sent…
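
The decode step itself is just base64. A self-contained sketch, with a placeholder value standing in for the real SAMLResponse form field captured from the browser's network tab:

```shell
# Placeholder value; paste the real SAMLResponse here.
SAML_RESPONSE="PHNhbWw+ZGVtbzwvc2FtbD4="
# Decoding reveals the assertion XML, including any attribute statements sent.
DECODED=$(echo "$SAML_RESPONSE" | base64 -d)
echo "$DECODED"
```

In a real assertion, the session tags appear as attributes named https://aws.amazon.com/SAML/Attributes/PrincipalTag:TagKey, so this shows exactly what the IdP sent.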

loren avatar

(if you’re using saml!)

Steven Kalt avatar
Steven Kalt

I am, and I do see my expected assertions in the SAML that JumpCloud’s sending. I also see the principalId in CloudTrail along with a list of sessionContext:{attributes:{...}} , but I don’t see the properties I was expecting.

Steven Kalt avatar
Steven Kalt

Which may very well be as good as I’m going to get.

Chandler Forrest avatar
Chandler Forrest

I second saml tracer (firefox) for viewing the attributes

loren avatar

i assume your trust policy on the iam role allows sts:TagSession? if not, i don’t think it would work at all… at least, not if the tags were set properly on the JumpCloud side…

Steven Kalt avatar
Steven Kalt

That’s correct, the AWS SSO-managed role has sts:TagSession . Looking up the docs on sts:TagSession eventually got me to the hint I needed, which is to filter CloudTrail events to Event Name: AssumeRoleWithSaml which does have the list of principalTags written out.

Thanks for your help (and the tip on saml tracer)!
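
For reference, that CloudTrail filter can also be applied from the CLI; the command is shown behind an `echo` as a dry run:

```shell
# Dry run: remove the leading echo to pull the last few SAML federation events.
EVENT_NAME="AssumeRoleWithSAML"
echo aws cloudtrail lookup-events \
  --lookup-attributes "AttributeKey=EventName,AttributeValue=$EVENT_NAME" \
  --max-results 5
```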

1
1
loren avatar

woot!
