#aws (2024-07)

aws Discussion related to Amazon Web Services (AWS)

Archive: https://archive.sweetops.com/aws/

2024-07-04

Dhamodharan avatar
Dhamodharan

Hi All, seeking suggestions for an AWS POC.

Setting up a small AWS POC: planning to set up one UAT machine, one prod machine, and one Jenkins machine to build and deploy to both UAT and prod.

To ensure security, planning to go with AWS Organizations and keep three accounts, one for each of the three servers. Is that a good approach, or is there a better one in terms of security and cost-effectiveness?

Thanks in advance.

theherk avatar
theherk

An account per machine seems like more overhead than required for a POC, but in general this seems like a good separation. Using Jenkins seems like a bummer.

Dhamodharan avatar
Dhamodharan

We may be moving the same setup to live, so I am thinking this way..

Dhamodharan avatar
Dhamodharan

Also, I'm not sure about the costing with AWS. Do AWS accounts cost extra?

theherk avatar
theherk

No. You pay per resource used.

theherk avatar
theherk

Even Organizations doesn't cost extra; you pay only for the resources within the accounts attached to the organization.

Dhamodharan avatar
Dhamodharan

thanks for the info @theherk, I will implement the same approach then…

managedkaos avatar
managedkaos

For a POC… multiple accounts would be overkill.

However if your POC is to demonstrate account separation for a larger project, then yes, go for it. The org and the accounts are free.

I would think your breakout would be:

  1. Production account for all production resources
  2. UAT account for all non-production resources
  3. Deployment account for automation. One thing that would be really great to achieve with this setup is only allowing deployments into Production or UAT via the services in the deployment account. That is, no manual changes unless absolutely necessary.

Using the UAT account resources as a deployment target, you would also discover everything you would need to do to allow access to the production account resources (VPCs, Security Groups, Systems Manager connections, etc.) from the deployment account.

However, if your POC is to only demonstrate deploying from Jenkins into two “environments” (not accounts) then the multi-account approach is overkill.
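The three-account breakout above can be stood up with a handful of AWS Organizations CLI calls from the management account. A minimal sketch (the email aliases are hypothetical); the org and the member accounts themselves add nothing to the bill:

```shell
# One-time: turn the management account into an organization.
aws organizations create-organization --feature-set ALL

# One member account per environment; the accounts are free, the resources are not.
for name in production uat deployment; do
  aws organizations create-account \
    --account-name "$name" \
    --email "aws+${name}@example.com"
done
```

`create-account` is asynchronous; `aws organizations describe-create-account-status` reports when each account is ready.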

Dhamodharan avatar
Dhamodharan

hi @managedkaos, thanks for the response. We would move the same setup to live if everything is good, so I chose this approach with the long run in mind. Keeping that in mind, is this approach good, or are you suggesting some other option?

managedkaos avatar
managedkaos

Your approach is good, indeed! Not suggesting another option.

2024-07-05

Sairam avatar

Hi everyone, I need help with a Python runtime upgrade in AWS Lambda. A while ago I deployed Datadog as an AWS Lambda application with the Python 3.7 runtime, and it has a lot of env vars. How do we upgrade the application to the Python 3.11 runtime? Thanks in advance.

I did try manually upgrading the Lambda function runtime to python3.11, but it breaks.
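For reference, the manual runtime bump is a one-liner with the AWS CLI (the function name here is hypothetical). Settings you don't pass, including env vars and the code itself, are left untouched, which is why only runtime incompatibilities in the code surface afterwards:

```shell
# Bump only the runtime; code, env vars, and other configuration are unchanged.
aws lambda update-function-configuration \
  --function-name datadog-forwarder \
  --runtime python3.11
```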

theherk avatar
theherk

When you say it breaks, what do you mean? It might be that your code needs some changes to work with 3.11.

Sairam avatar

thanks for the reply. I get the below error. I followed https://github.com/DataDog/datadog-serverless-functions/tree/master/aws/logs_monitoring for installation.

[ERROR] Runtime.ImportModuleError: Unable to import module 'lambda_function': cannot import name 'formatargspec' from 'inspect' (/var/lang/lib/python3.11/inspect.py)
Traceback (most recent call last):
theherk avatar
theherk

See What’s new in Python 3.11. With respect to formatargspec:

The formatargspec() function was removed; it had been deprecated since Python 3.5. Use the inspect.signature() function or the inspect.Signature object directly. And since the guide you shared says to use Python 3.10, perhaps you should; that version predates the removal of the feature it is using. Once the import passes, it will maybe (probably) succeed at importing lambda_function, which I presume is your entry point.

What’s New In Python 3.11

Editor: Pablo Galindo Salgado. This article explains the new features in Python 3.11, compared to 3.10. Python 3.11 was released on October 24, 2022. For full details, see the changelog.
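The migration theherk describes is mechanical. A minimal sketch (the greet function is hypothetical) of the pre-3.11 call and its replacement:

```python
import inspect

def greet(name, punctuation="!"):
    return f"Hello, {name}{punctuation}"

# Python <= 3.10 only -- formatargspec() was removed in 3.11:
#   spec = inspect.getfullargspec(greet)
#   inspect.formatargspec(*spec)

# Replacement, available since Python 3.3 and still present in 3.11:
sig = inspect.signature(greet)
print(sig)  # (name, punctuation='!')
```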

theherk avatar
theherk

I just stumbled across that tab again and noticed it lists varying runtime requirements based on the Forwarder version you’re running. So while it says “Create a Python 3.10 Lambda function”, the supported runtime is actually based on the Forwarder version. If you run version 3.107.0 or later, it would support Python 3.11, meaning it probably won’t try to import formatargspec.

Sairam avatar

Thanks. I also think upgrading the Datadog application itself will help, rather than upgrading only the Python runtime.

Sairam avatar

I will keep you posted

Sairam avatar

Hi, after upgrading the Datadog application, I get this error:

[ERROR]	2024-07-11T15:51:44.602Z	2d1a12b4-8337-4fec-af18-7aebda4d3a58	[dd.trace_id=597347809750705622 dd.span_id=5039695315866320206]	Failed to get log group tags from cache
Traceback (most recent call last):
  File "/opt/python/caching/cloudwatch_log_group_cache.py", line 125, in _get_log_group_tags_from_cache
    response = self.s3_client.get_object(
               ^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/var/lang/lib/python3.11/site-packages/botocore/client.py", line 553, in _api_call
    return self._make_api_call(operation_name, kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/var/lang/lib/python3.11/site-packages/botocore/client.py", line 1009, in _make_api_call
    raise error_class(parsed_response, operation_name)
botocore.errorfactory.NoSuchKey: An error occurred (NoSuchKey) when calling the GetObject operation: The specified key does not exist.

Please suggest what this is. Thanks in advance.

theherk avatar
theherk

Looks like you’re going to need to troubleshoot why that key isn’t there or why your lambda can’t see it.

Sairam avatar

This is part of the baseline code of the Datadog Forwarder.

According to that method, it handles that exception. I’m not sure how to add the key… before upgrading there was no issue.

2024-07-15

Prasad avatar

I have an ALB in a source account routing to an NLB in a target account at the moment. We have a use case to set up PrivateLink from another source account. Can the endpoint link be set up with the same NLB in the target account by creating an endpoint service? I want both routes to work.

Gabriela Campana (Cloud Posse) avatar
Gabriela Campana (Cloud Posse)

@Jeremy White (Cloud Posse)

2024-07-19

Sean Turner avatar
Sean Turner

Hey all, curious what you all think.

Jupyterhub Notebooks on EKS have a worst-case cold start where a Data Scientist needs to wait for a Node to spin up and for the large Docker Image to pull.

The thinking is that we can largely eliminate (or at least reduce) the Docker Image pull time by creating AMIs with the Docker Image baked in (pulled as ec2-user during the Image Builder run). Jupyterhub would then launch workloads (notebook servers) onto these AMIs as Nodes managed by Karpenter with Taints/Tolerations and Node Affinity.

However, it seems like ec2-user and the kubelet (or containerd?) have different image stores (there’s only one EBS volume attached). This causes EKS to pull images that should already be available to it, because the image was previously pulled by ec2-user.

Running a docker images command on the node (via SSH as ec2-user) shows a couple of images, including our latest tag, which was pulled while building the AMI. Launching a Notebook with a specific tag foo caused a docker pull to occur. When it finished, running docker images via SSH again did not show foo in the output.

Conversely, pulling a different tag bar as ec2-user and then launching a Notebook Server with bar caused EKS to pull the image again.

Any ideas?

Sean Turner avatar
Sean Turner

Interesting, looks like the images are in the output of ctr -n k8s.io images list. Seems like I’ll need to get Image Builder to pull my image into that namespace with ctr.
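The two image stores can be compared side by side on the node (a sketch, assuming the EKS AMI layout where the kubelet talks to containerd rather than dockerd):

```shell
# Images pulled with the Docker CLI live in dockerd's own store:
docker images

# ...while the kubelet reads containerd's k8s.io namespace,
# which is a completely separate image store:
sudo ctr --namespace k8s.io images list
```

An image present in the first listing but absent from the second will be re-pulled by EKS, which matches the behavior described above.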

Sean Turner avatar
Sean Turner

This is the solution I came up with. I haven’t tested it yet (as in, launched a notebook), but I think it works (it’s pulling my image successfully into the same namespace that EKS uses):

phases:
  - name: build
    steps:
      - name: pull-machine-prospector
        action: ExecuteBash
        inputs:
          commands:
            - password=$(aws ecr get-login-password --region us-west-2)
            # Redirecting stdout because the pull creates thousands of log lines.
            - sudo ctr --namespace k8s.io images pull --user AWS:$password acct.dkr.ecr.us-west-2.amazonaws.com/app:latest > /dev/null
            - sudo ctr --namespace k8s.io images list
  - name: test
    steps:
      - name: confirm-image-pulled
        action: ExecuteBash
        inputs:
          commands:
            - set -e
            - sudo ctr --namespace k8s.io images list | grep app

Sean Turner avatar
Sean Turner

Didn’t seem to work; the image still needed to pull.
