#aws (2021-12)

aws Discussion related to Amazon Web Services (AWS)

Archive: https://archive.sweetops.com/aws/

2021-12-01

Balazs Varga avatar
Balazs Varga

If we don’t use tags on CloudWatch, can I still use StringLike to limit log access? I see the Describe part is global, but for getting the logs I would like to limit users to a specified log stream name (one that contains dev)
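For what it’s worth, without tags one option is to skip conditions entirely and scope the resource ARN itself, since CloudWatch Logs stream ARNs embed the stream name and accept wildcards. A minimal sketch; the account id, region, and the `*dev*` naming convention are all assumptions:

```python
import json

# Account id and region are placeholders; "*dev*" assumes stream names
# that contain "dev", per the naming convention in the question.
policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "DescribeIsBroad",
            "Effect": "Allow",
            "Action": ["logs:DescribeLogGroups", "logs:DescribeLogStreams"],
            "Resource": "*",
        },
        {
            "Sid": "ReadOnlyDevStreams",
            "Effect": "Allow",
            "Action": ["logs:GetLogEvents"],
            # Log stream ARNs embed the stream name, so a wildcard here
            # restricts reads to streams whose name contains "dev".
            "Resource": "arn:aws:logs:eu-west-1:111122223333:log-group:*:log-stream:*dev*",
        },
    ],
}

print(json.dumps(policy, indent=2))
```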

Henry Carter avatar
Henry Carter

I can use the SSM aws:domainJoin document, but it joins with the AWS hostname. I tried using user data with PowerShell Rename-Computer, but that stops the domain join… Does anyone know if there’s a simple way to rename a Windows instance and join it to AWS Active Directory?
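A hedged sketch of one workaround: PowerShell’s Add-Computer cmdlet has a -NewName parameter that renames the machine as part of the same join operation, so the rename can’t race the join. This bypasses the SSM aws:domainJoin document and joins directly, which should work for AWS Managed Microsoft AD as long as the instance can resolve the domain’s DNS. The domain, hostname, and secret id below are placeholders, and fetching join credentials from Secrets Manager via the AWS Tools for PowerShell is an assumption — adapt to however you store them:

```powershell
<powershell>
# All names here (domain, secret id, hostname) are hypothetical placeholders.
# Assumes the AWS Tools for PowerShell (SecretsManager module) are installed.
$json     = (Get-SECSecretValue -SecretId "ad-join-secret").SecretString | ConvertFrom-Json
$password = ConvertTo-SecureString $json.password -AsPlainText -Force
$cred     = New-Object System.Management.Automation.PSCredential("$($json.username)@example.corp", $password)

# -NewName renames the computer and joins the domain in one operation,
# avoiding the rename-then-join ordering problem.
Add-Computer -DomainName "example.corp" -NewName "APP-SERVER-01" -Credential $cred -Restart
</powershell>
```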

2021-12-02

jimp avatar

anyone going to re:play at re:invent?

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

I will probably be going

Andrea Cavagna avatar
Andrea Cavagna

I will!

Ishan Sharma avatar
Ishan Sharma

Hello #terraform-aws-modules

I am trying to create an AWS Organizations account via Terraform, and then want to deploy resources into it using an Azure DevOps pipeline. I am still not that experienced with AWS IAM. The modules are done and the account is provisioned (using an IAM user with appropriate rights); now I’m just confused about role assumption and trust policies. What should I do, in Terraform terms, to deploy my resources using the IAM user?

Any help would be greatly appreciated. Thank you!
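One common pattern, sketched below with hypothetical account numbers: a member account created by Organizations already contains a role (OrganizationAccountAccessRole by default) that trusts the management account, so the pipeline’s IAM user only needs permission to assume it, and the Terraform provider does the rest:

```python
import json

MEMBER_ACCOUNT_ID = "222233334444"            # hypothetical new member account
ROLE_NAME = "OrganizationAccountAccessRole"   # default role Organizations creates

# Policy to attach to the pipeline's IAM user in the management account:
# it only needs permission to assume the role in the member account.
assume_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Action": "sts:AssumeRole",
        "Resource": f"arn:aws:iam::{MEMBER_ACCOUNT_ID}:role/{ROLE_NAME}",
    }],
}

print(json.dumps(assume_policy, indent=2))

# The Terraform provider then assumes that role, e.g.:
#   provider "aws" {
#     assume_role {
#       role_arn = "arn:aws:iam::222233334444:role/OrganizationAccountAccessRole"
#     }
#   }
```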

2021-12-03

Juan Soto avatar
Juan Soto

Hi, I am in the middle of migrating to AWS, but for licensing purposes we need to preserve the MAC address of a specific VM. Is there any way to migrate the VM to AWS and keep its MAC address?

DaniC (he/him) avatar
DaniC (he/him)

wave What you can do is, once inside AWS/EC2, allocate an ENI, use that ENI’s MAC for your license, and then if in the future you need to spin up a new EC2 instance or migrate, you can take the ENI with you.

That said, I don’t think you can “migrate/import” a MAC into AWS.

Stephen Tan avatar
Stephen Tan

You can’t assign a MAC to an ENI for sure - at least not via the AWS API - which means it’s just not possible for us mere mortals. @DaniC (he/him) is right - just create an ENI and use that MAC for your licence - surely the licence can be re-issued if need be? This is a very legacy app if it depends on MAC address assignment (which can be spoofed anyway, right?). Just don’t ever nuke the ENI! This use case has come up before with AWS: https://forums.aws.amazon.com/thread.jspa?threadID=227386

jose.amengual avatar
jose.amengual

no cloud provider will ever allow that

jose.amengual avatar
jose.amengual

but if the license is not tied to a specific interface name, then you can create a lo1 or any other interface that allows you to define the MAC

jose.amengual avatar
jose.amengual

usually, per-MAC licensed software does link the MAC to the interface name

Stephen Tan avatar
Stephen Tan

I like the idea of using a local interface but surely that won’t be allowed as the idea of this legacy bullshit is to limit the use of licensing… anyway, worth a go

2021-12-04

2021-12-05

Joe Niland avatar
Joe Niland

Anyone else notice that AWS Support requests more info on every SES production use request, no matter what is put in the original support ticket?

Markus Muehlberger avatar
Markus Muehlberger

They’ve been a right pain in the backside when we expanded our use of AWS accounts. They required more and more detail with every account, even though we were only shifting workloads between accounts.

We had to escalate to our Account Manager to get that sorted.

Joe Niland avatar
Joe Niland

Thanks for confirming it’s not just my observation!

Joe Niland avatar
Joe Niland

Definitely think they should have some way of indicating Staging/Test/UAT usage

2021-12-07

2021-12-08

Almondovar avatar
Almondovar

Hi all - although we have two AZs, we have only one NAT Gateway, and we now want to add a second one in AZ B. Do you know the best way to “predict” its cost? Thanks

Zach avatar
Logically Isolated Virtual Network - Amazon VPC Pricing - Amazon Web Services

Learn about pricing for Amazon VPC, a service that lets you launch AWS resources in a logically isolated virtual network that you define.

Zach avatar

NGW is a static hourly rate, per NGW deployed
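So a back-of-the-envelope estimate is straightforward. The rates below are assumptions (roughly the published us-east-1 prices at the time); check the VPC pricing page for your region:

```python
# Assumed us-east-1 rates at the time of writing -- verify against the
# VPC pricing page for your region before relying on these numbers.
HOURLY_RATE = 0.045       # USD per NAT-gateway-hour
DATA_RATE = 0.045         # USD per GB processed
HOURS_PER_MONTH = 730

def monthly_nat_cost(gateways: int, gb_processed: float) -> float:
    """Rough monthly estimate: fixed hourly charge per gateway plus data processing."""
    return gateways * HOURLY_RATE * HOURS_PER_MONTH + gb_processed * DATA_RATE

# Adding a second gateway adds the fixed hourly charge; the data-processing
# charge follows the traffic, which mostly shifts between AZs rather than doubling.
print(round(monthly_nat_cost(gateways=1, gb_processed=0), 2))
```

At these assumed rates, the marginal cost of the second gateway is roughly the fixed hourly charge (about $33/month), plus whatever the data-processing split works out to.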

Rafi Greenberg avatar
Rafi Greenberg

We switched to 1 NGW per AZ after there was an AZ outage a few months ago

julie avatar
AWS Pricing Calculator

AWS Pricing Calculator lets you explore AWS services, and create an estimate for the cost of your use cases on AWS.

julie avatar

Not perfect by any means, but can give you a ballpark

Almondovar avatar
Almondovar

thank you all!

2021-12-09

AugustasV avatar
AugustasV

How do I allow different groups to access different directories and commands on EC2 machines, using AWS Systems Manager?

nileshsharma.0311 avatar
nileshsharma.0311

Hi all, I’m getting an “unable to load credentials from service endpoint” error while running an ECS task. The task role ARN is set, and the IAM role has the permissions and has ecs-tasks in its trust policy.

nileshsharma.0311 avatar
nileshsharma.0311

Any help will be appreciated

nileshsharma.0311 avatar
nileshsharma.0311

Thanks

2021-12-10

mikesew avatar
mikesew

Question about the CloudWatch log groups that are created by enabling RDS log exports (i.e. alert, audit, listener, trace). Do those log groups get the same tags as your RDS instance? It seems like they don’t get tagged; we have to go back and manually tag them. The instances are provisioned with Terraform, so I absentmindedly assumed everything got tagged by default by virtue of a tags = var.tags param.

/aws/rds/instance/my-rds-db-01/alert
/aws/rds/instance/my-rds-db-01/audit
/aws/rds/instance/my-rds-db-01/listener
/aws/rds/instance/my-rds-db-01/trace
Alex Jurkiewicz avatar
Alex Jurkiewicz

They don’t, like a few other places where AWS creates log groups for you. And there’s no good way to manage them with Terraform. The best approach is probably to let AWS create the log group, then hardcode the log group name and import the resource into Terraform. Then TF can manage the tags (& retention) for you.

mikesew avatar
mikesew

thanks for confirming – the other thing I hear being done is implementing an auto-tagger Lambda function that scans for things like that. I’ll probably focus that only on things like CloudWatch log groups; that seems like the main thing I have to address. Even EC2 or S3, I think we’d handle via Terraform.
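For reference, the log group names RDS creates for exports are predictable, so an auto-tagger (or a one-off script) can derive them from the instance identifier rather than scanning. A small sketch; the boto3 call in the comment is illustrative, not run here:

```python
def rds_export_log_groups(instance_id: str, exports: list[str]) -> list[str]:
    """Names of the CloudWatch log groups RDS creates for enabled log exports."""
    return [f"/aws/rds/instance/{instance_id}/{log_type}" for log_type in exports]

groups = rds_export_log_groups("my-rds-db-01", ["alert", "audit", "listener", "trace"])
print(groups)

# A tagger could then copy the instance's tags over, e.g. (sketch, not run here):
#   logs = boto3.client("logs")
#   for name in groups:
#       logs.tag_log_group(logGroupName=name, tags=instance_tags)
```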

2021-12-12

2021-12-13

nileshsharma.0311 avatar
nileshsharma.0311

Hello everyone , any suggestions on this ? - https://sweetops.slack.com/archives/CCT1E7JJY/p1639116866191300

Hi all, I’m getting an “unable to load credentials from service endpoint” error while running an ECS task. The task role ARN is set, and the IAM role has the permissions and has ecs-tasks in its trust policy.
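For anyone hitting the same error: a minimal example of the trust policy the task role needs (this principal is the documented one for ECS task roles), plus a comment on a common diagnostic. The in-container curl check is a general technique, not specific to this setup:

```python
import json

# The task role's trust policy must let the ECS Tasks service assume it;
# a mismatch here is a common cause of this error.
trust_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Principal": {"Service": "ecs-tasks.amazonaws.com"},
        "Action": "sts:AssumeRole",
    }],
}

print(json.dumps(trust_policy, indent=2))

# If the policy looks right, check the credentials endpoint from inside
# the container:
#   curl 169.254.170.2$AWS_CONTAINER_CREDENTIALS_RELATIVE_URI
# It should return temporary credentials; if it times out, look at the
# container's network mode / proxy settings rather than IAM.
```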

tomkinson avatar
tomkinson

Hi all, can I post a websocket question? I will, but if this isn’t the place please let me know and I can remove it - it is on AWS, though.

I have a websocket in api gateway connected to a lambda that looks like this:

const AWS = require('aws-sdk');
const amqp = require('amqplib');

const api = new AWS.ApiGatewayManagementApi({ endpoint: 'MY_ENDPOINT' });

async function sendMsgToApp(response, connectionId) {
    console.log('=========== posting reply');
    const params = {
        ConnectionId: connectionId,
        Data: Buffer.from(response),
    };
    return api.postToConnection(params).promise();
}

let rmqServerUrl = 'MY_RMQ_SERVER_URL';
let rmqServerConn = null;

exports.handler = async event => {
    console.log('websocket event:', event);
    const { routeKey: route, connectionId } = event.requestContext;

    switch (route) {
        case '$connect':
            console.log('user connected');
            const creds = event.queryStringParameters.x;
            console.log('============ x.length:', creds.length);
            const decodedCreds = Buffer.from(creds, 'base64').toString('utf-8');
            try {
                const conn = await amqp.connect(
                    `amqps://${decodedCreds}@${rmqServerUrl}`
                );
                const channel = await conn.createChannel();
                console.log('============ created channel successfully:');
                rmqServerConn = conn;
                const [userId] = decodedCreds.split(':');
                const { queue } = await channel.assertQueue(userId, {
                    durable: true,
                    autoDelete: false,
                });
                console.log('============ userId:', userId, 'queue:', queue);
                channel.consume(queue, msg => {
                    console.log('========== msg:', msg);
                    const { content } = msg;
                    const msgString = content.toString('utf-8');
                    console.log('========== msgString:', msgString);
                    sendMsgToApp(msgString, connectionId)
                        .then(res => {
                            console.log(
                                '================= sent queued message to the app, will ack, outcome:',
                                res
                            );
                            try {
                                channel.ack(msg);
                            } catch (e) {
                                console.log(
                                    '================= error acking message:',
                                    e
                                );
                            }
                        })
                        .catch(e => {
                            console.log(
                                '================= error sending queued message to the app, will not ack, error:',
                                e
                            );
                        });
                });
            } catch (e) {
                console.log(
                    '=========== error initializing amqp connection',
                    e
                );
                if (rmqServerConn) {
                    await rmqServerConn.close();
                }
                const response = {
                    statusCode: 401,
                    body: JSON.stringify('failed auth!'),
                };
                return response;
            }
            break;
        case '$disconnect':
            console.log('user disconnected');
            if (rmqServerConn) {
                await rmqServerConn.close();
            }
            break;
        case 'message':
            console.log('message route');
            await sendMsgToApp('test', connectionId);
            break;
        default:
            console.log('unknown route', route);
            break;
    }

    const response = {
        statusCode: 200,
        body: JSON.stringify('Hello from websocket Lambda!'),
    };
    return response;
};

The amqp connection is for a RabbitMQ server that’s provisioned by Amazon MQ. The problem I have is that messages published to the queue either do not show up at all in the .consume callback, or they only show up after the websocket is disconnected and reconnected. Essentially they’re missing until a point much later, after which they show up unexpectedly. That’s within the websocket. Even when they do show up, they don’t get sent to the client (the app in this case) that’s connected to the websocket. I’ve seen 2 different errors, but neither of them has been reproducible. The first was Channel ended, no reply will be forthcoming and the second was write ECONNRESET, and it’s not clear how they would be causing this problem. What could be the problem here?

2021-12-14

DaniC (he/him) avatar
DaniC (he/him)

hi folks, not sure if this is the right channel to ask:

what sort of technique / pattern have you adopted to build golden AMIs (the easy phase) and roll it out/ upgrade when you are not in the world of stateless apps? And how does it dance with TF/ CFN + CD pipe ?

I’ve seen Packer + CodeBuild + CodePipeline + CFN, but I’m not very confident…

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Packer with GitHub Actions and self-hosted runners with service accounts

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

That works well, but in hindsight a bit curious about how ec2 image builder would work and if that would be turnkey https://aws.amazon.com/image-builder/

DaniC (he/him) avatar
DaniC (he/him)

right i see, i’ll have a quick look at that too

2021-12-15

Dave Hill avatar
Dave Hill

Hey everyone. I have an NLB that points to a private haproxy instance using port 443. It works fine, but when i enable the target group attribute “Preserve client IP addresses” my NLB starts timing out on port 443. Any idea why that would happen?

Dave Hill avatar
Dave Hill

I may have found the answer. Our HAProxy is in our office’s private IP space, and: “When client IP preservation is enabled, targets must be in the same VPC as the Network Load Balancer, and traffic must flow directly from the Network Load Balancer to the target.”

beaur97 avatar
beaur97

I feel like this should have a simple solution, but I’ve asked in multiple different places with no answer yet, so I’m getting desperate. We have our environments set up as Elastic Beanstalk applications. I was tasked with adding the Splunk universal forwarder to these applications through .ebextensions shell scripts. It hasn’t been an issue, except that when using the AWS CLI to pull down the credentials file from S3 (aws s3api get-object) I sometimes get an error of: Partial credentials found in env, missing: AWS_SECRET_ACCESS_KEY. It’s completely random which environments error and which work fine. The shell script is run by ec2-user, and on the servers that fail with that error, we can SSH in and run the command as ec2-user and it works without issue. Does anyone know what would cause this, or how to even look into what’s causing it?

venkata.mutyala avatar
venkata.mutyala

Try replacing the instances entirely when you change the ebextensions file

venkata.mutyala avatar
venkata.mutyala

Basically terminate the current instances and see what happens. Obviously do it one at a time to ensure your app doesn’t fall over.

beaur97 avatar
beaur97

@venkata.mutyala We tried this in one of the test environments that was failing and got the same error, sadly

venkata.mutyala avatar
venkata.mutyala

So after deleting all the instances some of them continue to work and some of them continue to throw an error?

beaur97 avatar
beaur97

@venkata.mutyala oh no, it’s application-based. So in one environment they will all work; in another environment they won’t. It’s not EC2 instance to instance.

venkata.mutyala avatar
venkata.mutyala

Are you deploying the resources via terraform?

venkata.mutyala avatar
venkata.mutyala

Yeah the error is unique. Feels like something is stepping on something somewhere

venkata.mutyala avatar
venkata.mutyala

I would suggest trying to update your Terraform to ensure things are fully separate, including the keys you use to deploy, so that one set cannot deploy to another environment

beaur97 avatar
beaur97

No we just build the project into a .war file with Jenkins and point the ELB application to it.

Also, for what it’s worth, these are ELB Classic instances. So they’re old, not even in VPCs

venkata.mutyala avatar
venkata.mutyala

Also, confirming the AMI and such are identical between environments would be a good idea too

venkata.mutyala avatar
venkata.mutyala

Have you tried deploying the war manually via AWS console?

venkata.mutyala avatar
venkata.mutyala

Have you verified the zip that beanstalk created with the artifacts looks valid/correct?

venkata.mutyala avatar
venkata.mutyala

Maybe take a current deployment and compare it to an old one and see if something jumps out on why it’s breaking?

beaur97 avatar
beaur97

Yes it does look correct

venkata.mutyala avatar
venkata.mutyala

Have you tried rolling back the deployment manually in the console to a deployment when it was working?

beaur97 avatar
beaur97

Well the deployment doesn’t fail, it just fails to install Splunk forwarder. So it technically never had a working deployment for that specific issue

venkata.mutyala avatar
venkata.mutyala

Huh. Have you poked Beanstalk support about this yet? They’re usually willing to get on a screen share. That’s how I learned ebextensions even existed

beaur97 avatar
beaur97

No I haven’t - I usually try first to see if someone who knows the issue will get back to me quickly, but I’ll reach out. Thanks for trying!

2021-12-16

2021-12-21

Nikola Milic avatar
Nikola Milic

Is there an option to remove/modify the task memory when using https://github.com/cloudposse/terraform-aws-ecs-container-definition ? I want to set only the container hard memory limit, not the task memory.

RB avatar

nope. the container definition module just returns json.

where are you defining the task definition? that’s where the task memory is set

https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/ecs_task_definition#memory

RB avatar

cc: @Nikola Milic

Nikola Milic avatar
Nikola Milic

That module is where i’m defining it. @RB

module "backend_container_definition" {
  source  = "cloudposse/ecs-container-definition/aws"
  version = "0.58"

  container_name  = "${var.project_name}-${terraform.workspace}-be-container"
  container_image = "${module.backend_ecr.repository_url}:latest"

  container_memory         = 1024
  container_cpu            = 256
  essential                = true
  readonly_root_filesystem = false
  privileged               = false

  environment = [
    ... removed vars ...
  ]

  port_mappings = [
    {
      containerPort = 3000
      hostPort      = 0
      protocol      = "tcp"
    }
  ]

  log_configuration = {
    "logDriver" : "awslogs",
    "options" : {
      "awslogs-group" : "awslogs-${var.project_name}-${terraform.workspace}-be-container",
      "awslogs-region" : "${var.aws_region}",
      "awslogs-stream-prefix" : "ecs"
    }
  }
}
RB avatar

that module only returns the container json which is a separate parameter from the task memory in the ecs task definition resource

RB avatar

there are two settings, container memory and memory. the latter is the task memory

RB avatar

you’re only setting the container memory and not the task memory

Nikola Milic avatar
Nikola Milic

Ah, I got it - I was looking at the wrong module for this setting. I’m using https://github.com/cloudposse/terraform-aws-ecs-alb-service-task as well, so that’s where I should set the task memory, up from the default 512. Thanks for the tips!

Nikola Milic avatar
Nikola Milic

The reason why I’m asking is that I’ve set container_memory = 1024 and I get this error: The 'memory' setting for container is greater than for the task. In the plan logs I can see that memory = 512, but I never explicitly set that in my TF files.
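For reference, a hedged sketch of where the task-level value lives. The input names task_cpu/task_memory and the json_map_encoded_list output are assumed from the Cloud Posse modules’ documented interfaces; verify against the version you pin:

```hcl
module "backend_service_task" {
  source = "cloudposse/ecs-alb-service-task/aws"
  # ...

  # Task-level size; the container_memory in the container definition
  # must fit inside this. Input names assumed from the module's README.
  task_cpu    = 256
  task_memory = 1024

  container_definition_json = module.backend_container_definition.json_map_encoded_list
}
```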

Victor Grenu avatar
Victor Grenu

Folks, please find below my collection of AWS Twitter bots:

  1. :robot_face: MAMIP: tweets every time a new or updated AWS IAM Managed Policy is detected, with an associated git repository for the history of policies: https://twitter.com/mamip_aws
  2. :mag: MASE: tweets every time a new AWS service or AWS region endpoint is detected in the botocore GitHub repository: https://twitter.com/mase_aws

Cheers, zoph

2021-12-22

Sean Holmes avatar
Sean Holmes

Does anyone use boto3 in their CI/CD pipelines?

venkata.mutyala avatar
venkata.mutyala

I use whatever it takes to get it done. :)

venkata.mutyala avatar
venkata.mutyala

Boto3 just calls the AWS API.

Sean Holmes avatar
Sean Holmes

Is the general consensus that Terraform, CDK, and other abstractions are more adoptable for enterprise? boto3 is often used to make your own tools that do the same thing as those more polished alternatives these days, right?

steenhoven avatar
steenhoven

For me it’s a last resort in a pipeline. Boto is more for Lambdas and stuff

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Great discussion point for #office-hours

Sean Holmes avatar
Sean Holmes

Ya, it seems like lower-level API access than Terraform for sure, so you may have a specific scenario where you want to customize and create your own tool using lower-level API calls like boto3. I think in general you want to use Terraform or CDK like you would cut the majority of a big yard with a riding lawn mower, not a weed whacker. There are a few edge cases around the side of your house or a tree where the weed whacker is more beneficial though.

You are spot on that if you are already building lambdas with python, boto3 makes it very easy to integrate other AWS services and API calls to them in the existing python framework.

It seems like another nice tool to have in your toolbox if you are working with AWS a lot. You can query existing infra and do analysis as well to validate conditionals before passing a build perhaps.

2021-12-23

2021-12-27

Antarr Byrd avatar
Antarr Byrd

I’m getting an error saying sandbox-uw-questions already exists in stack when deploying my CloudFormation template. I even get this error after going into the console and deleting the bucket before deploying.

AWSTemplateFormatVersion: "2010-09-09"
Transform: AWS::Serverless-2016-10-31
Parameters:
  Environment:
    Type: String
  S3BucketName:
    Type: String
Resources:
  QuestionsBucket:
    Type: AWS::S3::Bucket
    Properties:
      BucketName: !Ref S3BucketName
  QuestionsFunction:
    Type: AWS::Serverless::Function
    Properties:
      Runtime: nodejs14.x
      Handler: index.handler
      Policies:
        - Version: "2012-10-17"
          Statement:
            - Effect: Allow
              Action:
                - s3:Get*
                - s3:List*
                - s3-object-lambda:Get*
                - s3-object-lambda:List*
              Resource:
                - !Sub "arn:aws:s3:::${S3BucketName}/*"
      InlineCode: !Sub |
        exports.handler = function(event, context) {
            console.log(event);
            return "${Environment}-uw-questions";
          };
      Events:
        S3Bucket:
          Type: S3
          Properties:
            Bucket: !Ref QuestionsBucket
            Events: s3:ObjectCreated:*
RB avatar

check in the aws console if that stack exists

Antarr Byrd avatar
Antarr Byrd

The stack exists. The state is UPDATE_ROLLBACK_COMPLETE

Antarr Byrd avatar
Antarr Byrd

I’ve deleted the bucket

RB avatar

Perhaps the S3 bucket is deleted from the console but there is a delay with the API. I’d just wait 20 minutes and try again

Antarr Byrd avatar
Antarr Byrd

I don’t understand why that would matter. If the bucket exists, why would it try to create it again?

RB avatar

idk. I don’t think CloudFormation has a concept of importing the bucket the way Terraform does

RB avatar

Can you try omitting the S3 bucket resource reference and just passing the existing bucket in as a parameter?

Antarr Byrd avatar
Antarr Byrd

The bucket is only defined by this template. It didn’t exist before deploying this template. I could define it manually but I have a few dozen environments this may be deployed in

RB avatar

then i would create the bucket in a separate stack and then have this stack take an input of the existing bucket, no?

Antarr Byrd avatar
Antarr Byrd

Don’t see why that would be necessary but I’ll give it a shot.

Rohit S avatar
Rohit S

If sandbox-uw-questions is an S3 bucket, the name has to be unique across all of AWS, not just within your account. This is S3 saying no to CloudFormation calling the S3 CreateBucket API.
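One common way around the global-uniqueness constraint, sketched against the template above, is to suffix the bucket name with the account id (an assumption about naming preferences, not a requirement):

```yaml
Resources:
  QuestionsBucket:
    Type: AWS::S3::Bucket
    Properties:
      # Suffixing with the account id keeps the name globally unique
      # while still deriving it from the same parameter.
      BucketName: !Sub "${S3BucketName}-${AWS::AccountId}"
```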
