#aws (2019-09)

aws Discussion related to Amazon Web Services (AWS) Archive: https://archive.sweetops.com/aws/

2019-09-26

imiltchman avatar
imiltchman

@IvanM I don’t think that’s possible

2019-09-25

Nikola Velkovski avatar
Nikola Velkovski

Hi people, anyone having issues with lambdas and setting reserved-concurrent-executions ? it seems that it doesn’t work but there’s no mention on any outages on AWS

Nikola Velkovski avatar
Nikola Velkovski

they’ve just published it

IvanM avatar
IvanM

Does anyone know whether it’s possible to extract part of the host in an AWS ALB listener custom redirect rule?

My use case is that I will have a host foo.bar.com and inside the listener redirect rule I want to extract foo from #{host}

2019-09-20

rohit avatar
rohit

Does anyone know if there is a way to copy all versions of S3 objects from one bucket to another S3 bucket?

maarten avatar
maarten

@rohit have you tried it with the awscli ? aws s3 sync

rohit avatar
rohit

AFAIK aws s3 sync does not copy all the versions

AgustínGonzalezNicolini avatar
AgustínGonzalezNicolini

according to Stack Overflow:

AgustínGonzalezNicolini avatar
AgustínGonzalezNicolini
Copy S3 Bucket including versions

Is there a way to copy an S3 bucket including the versions of objects? I read that a way to copy a bucket is by using the command line tool with aws s3 sync s3://<source> s3://<dest>

AgustínGonzalezNicolini avatar
AgustínGonzalezNicolini

There is no direct way to do so, but you can do it via the AWS copy-object API: https://docs.aws.amazon.com/cli/latest/reference/s3api/copy-object.html

Iterate over every object version, capture its version ID, and copy it to the destination bucket.

Hope this helps
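The iterate-and-copy approach above can be sketched with boto3. This is a hedged sketch, not a tested tool: the bucket names are placeholders, delete markers are skipped, and the destination bucket (which must have versioning enabled) will assign brand-new version IDs on copy.

```python
def iter_versions_oldest_first(pages):
    """Flatten list-object-versions pages and yield (key, version_id),
    oldest first, so the destination bucket records versions in the
    same order (it still assigns its own new version IDs)."""
    versions = []
    for page in pages:
        versions.extend(page.get("Versions", []))  # delete markers not handled
    versions.sort(key=lambda v: v["LastModified"])
    for v in versions:
        yield v["Key"], v["VersionId"]


def copy_all_versions(src_bucket, dst_bucket):
    import boto3  # deferred so the helper above is usable without boto3
    s3 = boto3.client("s3")
    pages = s3.get_paginator("list_object_versions").paginate(Bucket=src_bucket)
    for key, version_id in iter_versions_oldest_first(pages):
        s3.copy_object(
            Bucket=dst_bucket,
            Key=key,
            CopySource={"Bucket": src_bucket, "Key": key, "VersionId": version_id},
        )
```

Note this loses the original version IDs, since S3 mints new ones on copy.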

maarten avatar
maarten

the version IDs will get lost, but where have you read that it does not copy all versions?

rohit avatar
rohit

i tried the sync command before posting my question and it did not copy all the versions

maarten avatar
maarten

That’s interesting @rohit, were versions missing, or was it only copying exactly one version?

rohit avatar
rohit

yes it is only copying one version

rohit avatar
rohit

maybe the latest version

maarten avatar
maarten

and have you set versioning to enabled on the destination bucket ?

maarten avatar
maarten

probably, otherwise you wouldn’t see different versions.

rohit avatar
rohit

correct

2019-09-19

pablo avatar
pablo

Hi, anyone know what’s the minimum set of permissions needed to run an EMR “jobflow”? This action creates an EMR cluster, executes whatever, then terminates the cluster. I’m setting this up from an EC2 instance running Airflow, and I’m reluctant to give it full admin access

davidvasandani avatar
davidvasandani

@pablo you could always give it admin access, run it once, and then check CloudTrail to see what IAM actions it used.
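One hedged way to mine CloudTrail for that, assuming boto3 and `cloudtrail:LookupEvents` access. Caveat: CloudTrail event names mostly, but not always, map 1:1 to IAM action names, so treat the output as a starting point for a least-privilege policy, not the final answer.

```python
def used_actions(events):
    """Collapse CloudTrail events into a sorted, de-duplicated list of
    service:Action strings (EventSource like 'ec2.amazonaws.com' becomes
    the 'ec2' service prefix)."""
    actions = {
        "{}:{}".format(e["EventSource"].split(".")[0], e["EventName"])
        for e in events
    }
    return sorted(actions)


def actions_for_identity(user_name):
    import boto3  # deferred import; needs AWS credentials at call time
    ct = boto3.client("cloudtrail")
    events = []
    for page in ct.get_paginator("lookup_events").paginate(
        LookupAttributes=[{"AttributeKey": "Username", "AttributeValue": user_name}]
    ):
        events.extend(page["Events"])
    return used_actions(events)
```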

2019-09-16

dalekurt avatar
dalekurt

Has anyone seen or used AWS Landing Zone solution?

oscar avatar
oscar

What is the recommended EBS size for Kafka?

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Can that be generalized? I think it relates directly to the amount of data you will be storing and the retention period

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

The number of replicas you will have

vluck avatar
vluck

Hiya! Has anyone gotten RDS clusters to work with micro size? (db.t2.micro or db.t3.micro)? Although it’s listed in the documentation, I keep getting

InvalidParameterCombination: RDS does not support creating a DB instance with the following combination: DBInstanceClass=db.t3.micro, Engine=aurora-postgresql, EngineVersion=10.7

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)
Aurora Support for db.t3

Aurora MySQL supports the db.t3.medium and db.t3.small instance classes for Aurora MySQL 1.15 and higher, and all Aurora MySQL 2.x versions. These instance classes are available for Aurora MySQL in all Aurora regions except AWS GovCloud (US-West), AWS GovCloud (US-East), and China (Beijing).

Aurora PostgreSQL supports only the db.t3.medium instance class for versions compatible with PostgreSQL 10.7 or later. These instance classes are available for Aurora PostgreSQL in all Aurora regions except China (Ningxia).

vluck avatar
vluck


supports only the db.t3.medium instance class

vluck avatar
vluck

wow

vluck avatar
vluck

then it goes on to say

vluck avatar
vluck


These instance classes are available for Aurora PostgreSQL

vluck avatar
vluck

like, it should say “instance class”, not plural. There is one.

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

db.r4.large is the smallest instance type supported by Aurora Postgres 9.6. Postgres 10.6 and later can use db.r5.large; Postgres 10.7 and later can use db.t3.medium

vluck avatar
vluck

thanks @Andriy Knysh (Cloud Posse)

2019-09-13

2019-09-12

Abel Luck avatar
Abel Luck

since July, Session Manager should support SSH tunnels

Abel Luck avatar
Abel Luck

has anyone actually got it working?

asmito avatar
asmito

for me no, please share your experience with it when you experiment with it

Abel Luck avatar
Abel Luck

well first, I discovered that the ssm-agent wasn’t up to date on my Ubuntu and Debian hosts, because they don’t release the snap package to the stable channel often

Abel Luck avatar
Abel Luck

so if you’re running the ssm-agent via snap, then you need to switch to the candidate channel

Abel Luck avatar
Abel Luck
Snap packages update? · Issue #196 · aws/amazon-ssm-agent

Hello, Is this possible to get last version 2.3.672.0 in snap repository? Last published version is 2.3.662 : https://snapcraft.io/amazon-ssm-agent Thanks!

Abel Luck avatar
Abel Luck

that’ll get you the latest ssm-agent version

Abel Luck avatar
Abel Luck
Step 7: (Optional) Enable SSH Connections Through Session Manager - AWS Systems Manager

You can enable users in your AWS account to use the AWS CLI to establish Secure Shell (SSH) connections to instances using Session Manager. Users who connect using SSH can also copy files between their local machines and managed instances using Secure Copy Protocol (SCP). You can use this functionality to connect to instances without opening inbound ports or maintaining bastion hosts. You can also choose to explicitly disable SSH connections to your instances through Session Manager.

Abel Luck avatar
Abel Luck

however they aren’t working for me, when I execute the ssh command I get the error

Abel Luck avatar
Abel Luck
debug1: ssh_exchange_identification: ----------ERROR-------
debug1: ssh_exchange_identification: Encountered error while initiating handshake. SessionType failed on client with status 2 error: Failed to process action SessionType: Unknown session type Port

asmito avatar
asmito

thanks for sharing your experience, will check it with you

Abel Luck avatar
Abel Luck

Got it working

Abel Luck avatar
Abel Luck

My local session manager plugin was out of date

Abel Luck avatar
Abel Luck

so make sure you update the ssm-agent on the ec2 instance, and also your local session manager plugin

Abel Luck avatar
Abel Luck

you need session manager plugin version 1.1.23.0 or later, and on the ec2 instance amazon-ssm-agent 2.3.672.0 or later
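For reference, the client-side wiring from the linked doc is a ProxyCommand entry in ~/.ssh/config; the instance ID in the usage line below is a placeholder:

```
# ~/.ssh/config
Host i-* mi-*
    ProxyCommand sh -c "aws ssm start-session --target %h --document-name AWS-StartSSHSession --parameters 'portNumber=%p'"
```

After that, a plain `ssh ec2-user@i-0123456789abcdef0` tunnels over Session Manager instead of requiring an open port 22.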

asmito avatar
asmito

Cool, congrats then

Abel Luck avatar
Abel Luck

“SessionType failed on client” was the clue

asmito avatar
asmito

Team, a question regarding costs please:

  • I have created a CNAME record in Cloudflare that points to an internal Nginx load balancer. The record will be used only inside the VPC, same region, different AZs. Normally I would be charged $0.01/GB for data transfer plus load balancing costs, but by using Cloudflare, will the traffic go out of the VPC to Cloudflare and back in again?
maarten avatar
maarten

Resolving is not VPC traffic, but apart from that:

1 cent for approximately 2 billion DNS requests is not something to worry about. If you were doing that number of HTTP requests on a daily basis, you would have other costs to worry about.

Phuc avatar

Hi guys, has anyone dealt with the SSM module in Terraform? We have a case where we want to use that TF module to read the contents of a file and create a list of key-value pairs to put into AWS SSM. It’s a fairly long list in this format:

```
VARA="value"
VARB="value1"
```

PePe avatar

we are using it

PePe avatar

what is your question ?
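For what it’s worth, a hedged Terraform 0.12 sketch of that pattern (resource `for_each` needs 0.12.6+; the file name app.env and the /app/ parameter prefix are made up, and the parsing is deliberately naive, one VAR="value" per line with no `=` or quotes inside values):

```hcl
locals {
  # One VAR="value" per line; blank lines are dropped.
  lines = compact(split("\n", file("${path.module}/app.env")))

  pairs = {
    for line in local.lines :
    trimspace(split("=", line)[0]) => replace(split("=", line)[1], "\"", "")
  }
}

resource "aws_ssm_parameter" "app" {
  for_each = local.pairs

  name  = "/app/${each.key}"
  type  = "String"
  value = each.value
}
```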

2019-09-10

mk avatar

hi

mk avatar

Anyone here experienced with setting up a health check between AWS and Aliyun?

2019-09-09

Brij S avatar
Brij S

for Python Lambda packages, do you need to include os and json in the deployment package?

antonbabenko avatar
antonbabenko

No, I don’t

Maciek Strömich avatar
Maciek Strömich

only if you use them. os comes in handy when you want to read some settings from env variables

myvar = os.getenv('SOME_VAR', 'default')
Maciek Strömich avatar
Maciek Strömich

json if you’re working with reading/writing json objects
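Both are in the standard library that ships with the Lambda Python runtime, so neither needs to be vendored into the deployment zip. A minimal handler sketch (the SOME_VAR name is made up):

```python
import json
import os


def handler(event, context):
    # SOME_VAR is a hypothetical setting; os reads it from the function's
    # environment variables, json serializes the response body.
    setting = os.getenv("SOME_VAR", "default")
    return {"statusCode": 200, "body": json.dumps({"setting": setting})}
```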

2019-09-07

2019-09-05

joshmyers avatar
joshmyers

Anyone seen it look like a CloudWatch rule triggered, i.e. you can see the rule metrics for “invoked”, where the rule invokes a Lambda function… but checking the Lambda function that is invoked doesn’t show any invocations or logs? I can “test” the Lambda function manually and it works fine…

Maciek Strömich avatar
Maciek Strömich

maybe CloudWatch Events doesn’t have permission to trigger the Lambda?

Maciek Strömich avatar
Maciek Strömich

we had similar issues in the beginning while triggering lambdas behind api-gateway
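One hedged way to check the permission theory is to inspect the function’s resource policy, which must allow the invoking service principal; a missing grant produces exactly this “publisher reports success, function never runs” symptom. Sketch assuming boto3, with the pure parsing split out so it can be eyeballed:

```python
import json


def invoke_principals(policy_json):
    """Return the service principals a Lambda resource policy allows to
    call lambda:InvokeFunction (e.g. sns.amazonaws.com, events.amazonaws.com)."""
    principals = set()
    for stmt in json.loads(policy_json).get("Statement", []):
        actions = stmt.get("Action", [])
        if isinstance(actions, str):
            actions = [actions]
        if stmt.get("Effect") == "Allow" and "lambda:InvokeFunction" in actions:
            principal = stmt.get("Principal", {})
            if isinstance(principal, dict) and principal.get("Service"):
                principals.add(principal["Service"])
    return sorted(principals)


def check_function(function_name):
    import boto3  # deferred so the parser above works without boto3
    lam = boto3.client("lambda")
    # get_policy returns the resource policy as a JSON string
    return invoke_principals(lam.get_policy(FunctionName=function_name)["Policy"])
```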

joshmyers avatar
joshmyers

Also seeing this for a different but similar setup, whereby CloudWatch alert > SNS > Lambda: it looks like the thing is working, but the Lambda isn’t actually invoked and isn’t showing anything in the logs either… but enabling SNS delivery notifications…

joshmyers avatar
joshmyers

SNS thinks it successfully triggered that lambda function…

joshmyers avatar
joshmyers

lambda function thinks otherwise…

joshmyers avatar
joshmyers

Can’t see anything in Cloudtrail and this def has worked in the past..

Maciek Strömich avatar
Maciek Strömich

maybe lambda isn’t able to save logs ;D

joshmyers avatar
joshmyers

It can when I invoke it manually with a test event

joshmyers avatar
joshmyers

and the code hasn’t changed that does this, and it was working yesterday… >_<

joshmyers avatar
joshmyers

Lambda doesn’t even look like it has been invoked looking at the metrics for it…

joshmyers avatar
joshmyers

Also seeing 2 different AWS Elasticsearch clusters returning 504s… while another is OK… all in eu-west-1

joshmyers avatar
joshmyers

nvm, swamped ENIs

Brij S avatar
Brij S

has anyone used GitHub Actions with the awscli? The repo has the HCL syntax and I’m trying it with YAML and running into some issues

davidvasandani avatar
davidvasandani

not I, but share whatever you find, as I plan on jumping on a similar project in the next week or two.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

don’t use the HCL syntax

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

that’s already deprecated and will stop working very soon

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

use the YAML format

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

nvm, i see you’re trying the yaml

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

anyways, I can help take a look if you post something

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

fwiw, here are my notes https://github.com/cloudposse/build-harness/pull/165 (see description)

Add GitHub Actions by osterman · Pull Request #165 · cloudposse/build-harness

what Add action to automatically rebuild readme and push upstream Respond to /readme command. Note: issue_comment workflows use master workflow for security. See https://developer.github.com/actio

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

I found those links most helpful when working on my first actions

Brij S avatar
Brij S

@Erik Osterman (Cloud Posse) have you been able to get github actions to work when a PR is closed and/or merged?

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

That’s a good question! TBH I’ve only tried these simple examples.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

(E.g. haven’t tried to deploy on merge with github actions)

Marcio Rodrigues avatar
Marcio Rodrigues

Hello, I’m curious about how you guys are doing disaster recovery tasks in your company

Marcio Rodrigues avatar
Marcio Rodrigues

Do you guys fully automate it? Run it via CI? Do the heavy tasks with Terraform? What is your plan?

Marcio Rodrigues avatar
Marcio Rodrigues

ps: due to limited money, I am tasked not to keep a fully replicated environment in another AWS region, but I should have a plan to recreate my infrastructure in another region if needed

2019-09-04

Shannon Dunn avatar
Shannon Dunn

for anyone using OUs and SCPs with lots of accounts inside, how do you manage changes to the SCPs and the OU structure itself?

Shannon Dunn avatar
Shannon Dunn

is it wise to replicate the entirety of the OU structure into dev, qa, and prod, each with its own root etc…?

Shannon Dunn avatar
Shannon Dunn

especially for hub-and-spoke modules, things like logging, transit, and shared services would get their own dev/qa/prod as well

Shannon Dunn avatar
Shannon Dunn

or is anyone using IaC to manage SCPs and OUs?

curious deviant avatar
curious deviant

Hey Shannon.. I just started a spike around AWS Control Tower (CT) for multi-account governance, and one of the first questions that came to my mind was similar to yours. I think a lot depends on how we organize our OUs. So far I can only speak to the SCP change. How I think I’ll go about it: I’ll create 3 OUs, say DEV, QA and PROD, and SCP changes would be rolled out first to DEV and then to higher environments. So basically I would not replicate the OU structure, but create separate OUs and move changes through them. This will give me an opportunity to identify and address any breaking changes. For my particular case (using CT), IaC is not an option since CT doesn’t expose any APIs yet; everything is pretty much manual/done by CT (logging bucket creation etc.). I will be spending some time on CT further.

curious deviant avatar
curious deviant

Let me know if you have more questions/suggestions.. will help me plan my OU structure better

2019-09-03

Maciek Strömich avatar
Maciek Strömich
Announcing improved VPC networking for AWS Lambda functions | Amazon Web Services attachment image

We’re excited to announce a major improvement to how AWS Lambda functions work with your Amazon VPC networks. With today’s launch, you will see dramatic improvements to function startup performance and more efficient usage of elastic network interfaces. These improvements are rolling out to all existing and new VPC functions, at no additional cost. Roll […]

davidvasandani avatar
davidvasandani

This is awsome!!


Alejandro Rivera avatar
Alejandro Rivera

Is there a way to use wildcards for Principals in S3 bucket policies? e.g. we have:

arn:aws:iam::999999999:role/role-name123456789

would like to do something like:

arn:aws:iam::999999999:role/role-name*
Andy avatar

@Alejandro Rivera Take a look at this: https://stackoverflow.com/a/56678945/10846194 Ref: https://docs.aws.amazon.com/IAM/latest/UserGuide/reference_policies_elements_principal.html

You cannot use it as a wildcard to match part of a name or an ARN. We also strongly recommend that you do not use a wildcard in the Principal element in a role’s trust policy unless you otherwise restrict access through a Condition element in the policy. Otherwise, any IAM user in any account can access the role.

Wildcard at end of principal for s3 bucket

I want to allow roles within an account that have a shared prefix to be able to read from an S3 bucket. For example, we have a number of roles named RolePrefix1, RolePrefix2, etc, and may create mo…
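Building on that answer: the wildcard can’t go in Principal itself, but it can go in an ArnLike condition on aws:PrincipalArn. A hedged sketch (the bucket name and action are placeholders; the account ID and role prefix are from the question above):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": "*",
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::example-bucket/*",
      "Condition": {
        "ArnLike": {
          "aws:PrincipalArn": "arn:aws:iam::999999999:role/role-name*"
        }
      }
    }
  ]
}
```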

Alejandro Rivera avatar
Alejandro Rivera

Thank you sir!



2019-09-01
