#aws (2019-09)
Discussion related to Amazon Web Services (AWS)
Archive: https://archive.sweetops.com/aws/
2019-09-03
We’re excited to announce a major improvement to how AWS Lambda functions work with your Amazon VPC networks. With today’s launch, you will see dramatic improvements to function startup performance and more efficient usage of elastic network interfaces. These improvements are rolling out to all existing and new VPC functions, at no additional cost. Roll […]
This is awesome!!
Is there a way to use wildcards for Principals in S3 bucket policies? e.g. we have:
arn:aws:iam::999999999:role/role-name123456789
would like to do something like:
arn:aws:iam::999999999:role/role-name*
@Alejandro Rivera Take a look at this:
https://stackoverflow.com/a/56678945/10846194
Ref: https://docs.aws.amazon.com/IAM/latest/UserGuide/reference_policies_elements_principal.html
You cannot use it as a wildcard to match part of a name or an ARN. We also strongly recommend that you do not use a wildcard in the Principal element in a role's trust policy unless you otherwise restrict access through a Condition element in the policy. Otherwise, any IAM user in any account can access the role.
I want to allow roles within an account that have a shared prefix to be able to read from an S3 bucket. For example, we have a number of roles named RolePrefix1, RolePrefix2, etc, and may create mo…
Thank you sir!
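For the archive: one common workaround (along the lines of the linked answer) is to leave the Principal open and restrict access with a Condition on aws:PrincipalArn, which, unlike Principal, does accept wildcards. A hedged sketch with boto3; the bucket name is a placeholder, and the ARN pattern is taken from the question above:
```
import json
import boto3

# Sketch: Principal can't be wildcarded, but aws:PrincipalArn in a
# Condition can, so any role matching the prefix may read objects.
policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Principal": "*",
        "Action": "s3:GetObject",
        "Resource": "arn:aws:s3:::my-example-bucket/*",
        "Condition": {
            "ArnLike": {"aws:PrincipalArn": "arn:aws:iam::999999999:role/role-name*"}
        },
    }],
}

s3 = boto3.client("s3")
s3.put_bucket_policy(Bucket="my-example-bucket", Policy=json.dumps(policy))
```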
2019-09-04
for anyone using OUs and SCPs with lots of accounts inside, how do you manage changes to the SCPs and to the OU structure itself?
is it wise to replicate the entirety of the OU structure into dev, QA, and prod, each with its own root etc…
especially for hub-and-spoke modules, things like logging, transit, and shared services would get their own dev/qa/prod as well
or is anyone using IaC to manage SCPs and OUs?
Hey Shannon.. I just started a spike around AWS Control Tower (CT) for multi-account governance, and one of the first questions that came to my mind was similar to yours. I think a lot depends on how we organize our OUs. So far I can only speak to the SCP changes: how I think I'll go about it is I'll create three OUs, say DEV, QA, and PROD, and SCP changes would be rolled out first to DEV and then to the higher environments. So basically I would not replicate the OU structure, but create separate OUs and move changes through them. This will give me an opportunity to identify and address any breaking changes. For my particular case (using CT), IaC is not an option since CT doesn't expose any APIs yet.. everything is pretty much manual/done by CT (logging bucket creation etc). I will be spending some time on CT further.
Let me know if you have more questions/suggestions.. will help me plan my OU structure better
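To make that staged rollout concrete, here is a hedged sketch of scripting it against the Organizations API (Control Tower aside, which wasn't scriptable at the time). All IDs are hypothetical:
```
import boto3

org = boto3.client("organizations")

# Hypothetical IDs: one SCP promoted through per-stage OUs.
SCP_ID = "p-examplescp"
STAGE_OUS = {"dev": "ou-ex00-dev00000", "qa": "ou-ex00-qa000000", "prod": "ou-ex00-prod0000"}

def promote(stage: str) -> None:
    """Attach the SCP to the OU for the given stage (dev, then qa, then prod)."""
    org.attach_policy(PolicyId=SCP_ID, TargetId=STAGE_OUS[stage])

promote("dev")  # re-run with "qa"/"prod" once the change soaks without breakage
```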
2019-09-05
Has anyone seen a CloudWatch rule look like it triggered, i.e. you can see the rule metrics for “invoked”, where the rule invokes a Lambda function, but checking the Lambda function that is invoked doesn’t show any invocations or logs? I can “test” the Lambda function manually and it works fine…
maybe cloudwatch events doesn’t have ability to trigger lambda?
we had similar issues in the beginning while triggering lambdas behind api-gateway
Also seeing this for a different thing with similar symptoms, whereby CloudWatch alert > SNS > Lambda: it looks like the thing is working, but Lambda isn’t actually invoked and isn’t showing anything in the logs either… but after enabling SNS delivery notifications…
SNS thinks it successfully triggered that lambda function…
lambda function thinks otherwise…
Can’t see anything in Cloudtrail and this def has worked in the past..
maybe lambda isn’t able to save logs ;D
It can when I invoke it manually with a test event
and the code hasn’t changed that does this, and it was working yesterday… >_<
Lambda doesn’t even look like it has been invoked looking at the metrics for it…
Also seeing 2 different AWS Elasticsearch clusters returning 504s… while another is OK… all in eu-west-1
nvm, swamped ENIs
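(For anyone finding this later: two hedged checks that help narrow down this class of silent non-invocation, namely whether the trigger actually has permission on the function's resource policy, and whether VPC Lambdas have exhausted ENIs, as turned out to be the case here. The function name and description filter are assumptions:)
```
import boto3

# 1) Does the trigger (CloudWatch Events / SNS) actually have permission to
#    invoke the function? A missing statement here fails silently upstream.
lam = boto3.client("lambda")
print(lam.get_policy(FunctionName="my-function")["Policy"])  # placeholder name

# 2) Are VPC Lambdas exhausting ENIs? Count the Lambda-managed interfaces.
ec2 = boto3.client("ec2")
enis = ec2.describe_network_interfaces(
    Filters=[{"Name": "description", "Values": ["AWS Lambda VPC ENI*"]}]
)["NetworkInterfaces"]
print(f"{len(enis)} Lambda-managed ENIs in use")
```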
has anyone used GitHub Actions with the awscli? The repo has the HCL syntax and I’m trying it with YAML and running into some issues
not I but share whatever you find as I plan on jumping on a similar project in the next week or two.
don’t use the HCL syntax
that’s already deprecated and will stop working very soon
use the YAML format
nvm, i see you’re trying the yaml
anyways, i can help take a look if you post something
fwiw, here are my notes https://github.com/cloudposse/build-harness/pull/165 (see description)
what Add action to automatically rebuild readme and push upstream Respond to /readme command. Note: issue_comment workflows use master workflow for security. See https://developer.github.com/actio…
i found those links most helpful when working on my first actions
@Erik Osterman (Cloud Posse) have you been able to get github actions to work when a PR is closed and/or merged?
That’s a good question! TBH I’ve only tried these simple examples.
(E.g. haven’t tried to deploy on merge with github actions)
Hello, I’m curious about how you guys are doing disaster recovery tasks in your company
Do you guys fully automate it? Run it via CI? Do the heavy tasks with terraform? What does your plan look like?
ps: due to a limited budget, I am tasked with not keeping a fully replicated environment in another AWS region, but I should have a plan to recreate my infrastructure in another region if needed
2019-09-09
for python lambda packages: do you need to include os and json in the deployment package?
No, I don’t
only if you use them. os gets handy when you want to read some settings from env variables
myvar = os.getenv('SOME_VAR', 'default')
json, if you’re working with reading/writing json objects
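For the archive: os and json are both part of the Python standard library, which the Lambda runtime already ships, so neither ever needs to go into the deployment package. A minimal handler using both might look like:
```
import json
import os

def handler(event, context):
    # Read a setting from the function's environment variables.
    myvar = os.getenv("SOME_VAR", "default")
    # Serialize a JSON response body.
    return {"statusCode": 200, "body": json.dumps({"some_var": myvar})}
```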
2019-09-10
hi
Anyone here experienced with setting up a health check between AWS and Aliyun?
2019-09-12
since July, Session Manager is supposed to support SSH tunnels
has anyone actually got it working?
for me, no. Please share your experience with it when you experiment with it
well first, I discovered that the ssm-agent wasn’t up to date on my Ubuntu and Debian hosts, because they don’t release the snap package to the stable channel often
so if you’re running the ssm-agent via snap, you need to switch to the candidate channel
Hello, Is this possible to get last version 2.3.672.0 in snap repository? Last published version is 2.3.662 : https://snapcraft.io/amazon-ssm-agent Thanks!
that’ll get you the latest ssm-agent version
the docs regarding the new feature are https://docs.aws.amazon.com/systems-manager/latest/userguide/session-manager-getting-started-enable-ssh-connections.html
You can enable users in your AWS account to use the AWS CLI to establish Secure Shell (SSH) connections to instances using Session Manager. Users who connect using SSH can also copy files between their local machines and managed instances using Secure Copy Protocol (SCP). You can use this functionality to connect to instances without opening inbound ports or maintaining bastion hosts. You can also choose to explicitly disable SSH connections to your instances through Session Manager.
however they aren’t working for me, when I execute the ssh command I get the error
debug1: ssh_exchange_identification: ----------ERROR-------
debug1: ssh_exchange_identification: Encountered error while initiating handshake. SessionType failed on client with status 2 error: Failed to process action SessionType: Unknown session type Port
thanks for sharing your experience, will check it out with you
Got it working
My local session manager plugin was out of date
so make sure you update the ssm-agent on the ec2 instance, and also your local session manager plugin
you need session manager plugin version 1.1.23.0 or later, and on the ec2 instance amazon-ssm-agent 2.3.672.0 or later
Cool, congrats then
“SessionType failed on client” was the clue
Team, question regarding costs please:
- I have created a CNAME record in Cloudflare that points to an internal Nginx load balancer. The record will be used only inside the VPC, same region, different AZs. Normally I would be charged $0.01/GB for data transfer plus load-balancing costs, but using Cloudflare, does the traffic go out of the VPC to Cloudflare and back in again?
Resolving is not VPC traffic, but apart from that:
1 cent for approximately 2 billion DNS requests is not something to worry about. If you were doing that number of HTTP requests on a daily basis, you’d have other costs to worry about.
Anyone done multi region EKS?
Hi guys, has anyone dealt with the SSM module in Terraform? We have a case where we want to use that TF module to read the content of a file and create a list of key-value pairs to put into AWS SSM. Kinda long list, in this format: VARA="value" VARB="value1" …
we are using it
what is your question ?
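(Not the Terraform answer, but for the archive: the parsing half of that task is small enough to sketch in Python with boto3, assuming a file of VAR="value" lines and a hypothetical /myapp/ parameter prefix:)
```
import re
import boto3

ssm = boto3.client("ssm")
pattern = re.compile(r'^(\w+)="(.*)"$')  # matches lines like VARA="value"

with open("vars.env") as fh:  # hypothetical input file
    for line in fh:
        match = pattern.match(line.strip())
        if not match:
            continue  # skip blanks and anything that isn't KEY="value"
        key, value = match.groups()
        # Store each pair as an SSM parameter under a common prefix.
        ssm.put_parameter(
            Name=f"/myapp/{key}", Value=value, Type="String", Overwrite=True
        )
```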
2019-09-16
Has anyone seen or used AWS Landing Zone solution?
What is the recommended EBS size for Kafka?
Can that be generalized? I think it relates directly to the amount of data you will be storing and the retention
And the number of replicas you will have
Hiya! Has anyone gotten RDS clusters to work with a micro size (db.t2.micro or db.t3.micro)? Although it’s listed in the documentation, I keep getting
InvalidParameterCombination: RDS does not support creating a DB instance with the following combination: DBInstanceClass=db.t3.micro, Engine=aurora-postgresql, EngineVersion=10.7,
Aurora Support for db.t3
Aurora MySQL supports the db.t3.medium and db.t3.small instance classes for Aurora MySQL 1.15 and higher, and all Aurora MySQL 2.x versions. These instance classes are available for Aurora MySQL in all Aurora regions except AWS GovCloud (US-West), AWS GovCloud (US-East), and China (Beijing).
Aurora PostgreSQL supports only the db.t3.medium instance class for versions compatible with PostgreSQL 10.7 or later. These instance classes are available for Aurora PostgreSQL in all Aurora regions except China (Ningxia).
supports only the db.t3.medium instance class
wow
then it goes on to say
These instance classes are available for Aurora PostgreSQL
like, it should say “instance class”, not plural. There is one.
db.r4.large is the smallest instance type supported by Aurora Postgres 9.6. Postgres 10.6 and later can use db.r5.large. Postgres 10.7 and later can use db.t3.medium.
thanks @Andriy Knysh (Cloud Posse)
2019-09-19
Hi, anyone know whats the minimum set of permissions needed to run an EMR “jobflow”? This action creates an emr cluster, executes whatever, then terminates the cluster. I’m setting this up from an ec2 instance running Airflow, and I’m reluctant to give it full admin access
@pablo you could always give it admin access, run it once, and then check CloudTrail to see what IAM actions it used.
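For the archive, a hedged sketch of that CloudTrail check with boto3; the username is a placeholder for whatever principal ran the jobflow, and CloudTrail records management events only, so data-plane calls won’t show up:
```
import boto3

ct = boto3.client("cloudtrail")

# Placeholder: whatever principal/role session ran the EMR jobflow.
events = ct.lookup_events(
    LookupAttributes=[{"AttributeKey": "Username", "AttributeValue": "airflow-emr-run"}],
    MaxResults=50,
)["Events"]

# Distinct API actions the run actually used, as candidates for the policy.
actions = sorted({f"{e['EventSource']}:{e['EventName']}" for e in events})
print("\n".join(actions))
```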
2019-09-20
Does anyone know if there is a way to copy all versions of S3 objects from one bucket to another S3 bucket?
@rohit have you tried it with the awscli ?
aws s3 sync
AFAIK aws s3 sync does not copy all the versions
according to stackoverflow
Is there a way to copy an S3 bucket including the versions of objects? I read that a way to copy a bucket is by using the command line tool with aws s3 sync s3://<source> s3://<dest>
There is no direct way to do so, but you can do it via the AWS CopyObject API, ref https://docs.aws.amazon.com/cli/latest/reference/s3api/copy-object.html
Iterate over every object version, capture the version ID, and copy it to the destination bucket.
Hope this helps
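For the archive, a minimal sketch of that iterate-and-copy approach with boto3; bucket names are placeholders, the destination assigns brand-new version IDs, and delete markers are skipped:
```
import boto3

s3 = boto3.client("s3")
src, dst = "source-bucket", "dest-bucket"  # placeholders

# List every object version; delete markers come back separately and are
# skipped here. The destination bucket should have versioning enabled.
versions = []
for page in s3.get_paginator("list_object_versions").paginate(Bucket=src):
    versions.extend(page.get("Versions", []))

# Copy oldest first so relative version order is preserved. Note that
# CopyObject caps out at 5 GB per object.
for v in sorted(versions, key=lambda v: v["LastModified"]):
    s3.copy_object(
        Bucket=dst,
        Key=v["Key"],
        CopySource={"Bucket": src, "Key": v["Key"], "VersionId": v["VersionId"]},
    )
```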
the version IDs will get lost, but where have you read that it does not copy all versions?
i tried the sync command before posting my question and it did not copy all the versions
That’s interesting @rohit: versions missing, or was it only copying exactly one version?
yes it is only copying one version
maybe the latest version
and have you set versioning to enabled on the destination bucket ?
probably, otherwise you wouldn’t see different versions.
correct
2019-09-25
Hi people, anyone having issues with lambdas and setting reserved-concurrent-executions?
it seems that it doesn’t work, but there’s no mention of any outages on AWS
they’ve just published it
Guys, do you know whether it’s possible to extract parts of the host in AWS ALB listener custom redirects?
My use case is that I will have a host foo.bar.com
and inside the listener redirect rule I want to extract foo from #{host}