#aws (2021-10)
Discussion related to Amazon Web Services (AWS)
Archive: https://archive.sweetops.com/aws/
2021-10-01
anyone deployed AWS Inspector for EC2 instances in a private subnet?
RDS Event Subscriptions: I’m trying to send RDS event subscriptions to an SNS topic, then have a Lambda send those as a webhook to MS Teams. But following the AWS blog post about SNS-to-Teams webhooks, the message I see in my Teams channel is:
{"Event Source":"db-snapshot","Event Time":"2021-10-01 02:09:04.371","Identifier Link":"<https://console.aws.amazon.com/rds/home?region=us-west-2#snapshot:id=rds:dev-mpa-spa-db-01-2021-10-01-02-09>","Source ID":"rds:dev-mpa-spa-db-01-2021-10-01-02-09","Source ARN":"arn:aws:rds:us-west-2:730458288754:snapshot:rds:dev-mpa-spa-db-01-2021-10-01-02-09","Event ID":"<http://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_Events.html#RDS-EVENT-0090>","Event Message":"Creating automated snapshot"}
.. I want to customize this to something more readable. The Python code in the Lambda looks like below, and it basically just blindly sends out the contents of the Message object. Has anybody customized their event notifications before?
import json
import urllib3

http = urllib3.PoolManager()

def lambda_handler(event, context):
    url = "https://outlook.office.com/webhook/xxxxxxx"
    msg = {
        "text": event['Records'][0]['Sns']['Message']
    }
    encoded_msg = json.dumps(msg).encode('utf-8')
    resp = http.request('POST', url, body=encoded_msg)
    print({
        "message": event['Records'][0]['Sns']['Message'],
        "status_code": resp.status,
        "response": resp.data
    })
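One way to make the Teams message more readable is to parse the JSON carried in the SNS Message field and build a short summary string before posting it. A minimal sketch (the field names match the RDS event payload shown above; the function name and summary format are illustrative, not the AWS blog's code):

```python
import json

def format_rds_event(sns_message: str) -> str:
    """Summarize the raw RDS event JSON carried in the SNS Message field."""
    event = json.loads(sns_message)
    return "RDS event on {}: {} (at {})".format(
        event.get("Source ID", "unknown source"),
        event.get("Event Message", "no message"),
        event.get("Event Time", "unknown time"),
    )
```

In the handler above, `msg` would then become `{"text": format_rds_event(event['Records'][0]['Sns']['Message'])}`.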
2021-10-04
How can I test connectivity to a newly created Kafka service? It seems apps cannot use it. Here is the config I used to create the AWS MSK cluster:
module "kafka" {
source = "../../external_modules/cloudposse/terraform-aws-msk-apache-kafka-cluster"
# version = "0.6.3"
namespace = "testnamesapce"
stage = "dev"
name = "msk"
vpc_id = module.vpc.vpc_id
security_groups = ["sg-XXXXXXXXXXXX", "sg-XXXXXXXXXXXX"]
subnet_ids = ["subnet-XXXXXXXXXXXX", "subnet-XXXXXXXXXXXX"]
kafka_version = "2.8.0"
number_of_broker_nodes = 2 # this has to be a multiple of the # of subnet_ids
broker_instance_type = "kafka.t3.small"
broker_volume_size = "100"
}
When I use netcat from EKS pods, I can reach the Zookeeper nodes. Error log from the apps:
2021-10-04 10:10:13 WARN o.apache.kafka.clients.NetworkClient - [Producer clientId=producer-1] Bootstrap broker z-2.dev.mjd92j.c17.kafka.us-east-1.amazonaws.com:2182 (id: -1 rack: null) disconnected
2021-10-04 10:10:13 WARN o.apache.kafka.clients.NetworkClient - [Producer clientId=producer-1] Bootstrap broker z-3.dev.mjd92j.c17.kafka.us-east-1.amazonaws.com:2182 (id: -2 rack: null) disconnected
2021-10-04 10:10:14 WARN o.apache.kafka.clients.NetworkClient - [Producer clientId=producer-1] Bootstrap broker z-1.dev.mjd92j.c17.kafka.us-east-1.amazonaws.com:2182 (id: -3 rack: null) disconnected
./kafka-topics.sh --create --bootstrap-server z-2.msk.xxxxxxx.c17.kafka.us-east-1.amazonaws.com:2181 --topic test-topic --partitions 3 --replication-factor 3 --if-not-exists
Error while executing topic command : Timed out waiting for a node assignment. Call: createTopics
[2021-10-04 11:55:27,380] ERROR org.apache.kafka.common.errors.TimeoutException: Timed out waiting for a node assignment. Call: createTopics
(kafka.admin.TopicCommand$)
looks like either your security groups or route tables aren’t allowing traffic
nc -vz z-2.msk.xxxxxxx.c17.kafka.us-east-1.amazonaws.com 2181
Connection to z-2.msk.xxxxxxx.c17.kafka.us-east-1.amazonaws.com 2181 port [tcp/*] succeeded!
network looks fine
yeah it does
well…silly question…are you running the netcat from outside in or from an instance within the VPC?
any chance the IAM policy associated doesn’t have the appropriate perms?
honestly though I’m just throwing out random guesses in hopes that I help you stumble upon something
yep, checking everything, just wondering what might be the root cause as in overall it looks fine
typically for me when I’ve seen unexpected timeouts it’s either networking (usually security groups) or IAM…usually with the latter though it’s not perms but rather an expired session
the problem was that, by default, TLS in-transit encryption was used, which didn’t allow troubleshooting etc… changed TLS -> TLS_PLAINTEXT and the issue was solved (we don’t use certificates so far)
thanks for sharing the fix!
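One thing worth noting from the thread above: the producer logs and the kafka-topics command point at z-* (ZooKeeper) endpoints on 2181/2182, while MSK brokers (the b-* hostnames) listen on 9092 (plaintext) or 9094 (TLS), and --bootstrap-server expects broker endpoints. A small sketch for splitting a bootstrap string and probing each endpoint over TCP (hostnames are illustrative):

```python
import socket

def parse_bootstrap(bootstrap: str):
    """Split 'host1:9092,host2:9092' into (host, port) tuples."""
    pairs = []
    for entry in bootstrap.split(","):
        host, _, port = entry.strip().rpartition(":")
        pairs.append((host, int(port)))
    return pairs

def probe(host: str, port: int, timeout: float = 3.0) -> bool:
    """Return True if a plain TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False
```

A TCP connect succeeding (like the nc check above) only proves routing and security groups; a TLS-only listener will still accept the connection and then fail at the protocol layer, which matches the TLS -> TLS_PLAINTEXT fix.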
Hi all, hope you are doing well. Currently I am using sticky sessions to cache user sessions for a WordPress site, but now I am facing some performance issues with that and I am planning to move to ElastiCache Redis. How do I achieve this? Any reference, or even a basic idea, would be really helpful.
Does anyone know if there is a Cloud Posse Terraform module for CI/CD on AWS that uses CodeCommit? We have a requirement for CodeCommit, but all the modules seem to rely on GitHub. Thanks for any help.
Hey Eric,
This is my personal solution - https://github.com/msharma24/multi-env-aws-terraform and I think it can easily work in any CICD tool.
Multi environment AWS Terraform demo. Contribute to msharma24/multi-env-aws-terraform development by creating an account on GitHub.
i’m partial to this one… https://github.com/plus3it/terraform-aws-codecommit-flow-ci
Implement an event-based CI workflow on a CodeCommit repository - GitHub - plus3it/terraform-aws-codecommit-flow-ci: Implement an event-based CI workflow on a CodeCommit repository
2021-10-06
Is it possible to set these parameters for AWS MSK (Kafka)?
KAFKA_ADVERTISED_LISTENERS=PLAINTEXT://kafka-server:9092,PLAINTEXT_HOST://localhost:29092
Faced with this error but for AWS MSK https://stackoverflow.com/questions/35788697/leader-not-available-kafka-in-console-producer
I am trying to use Kafka. All configurations are done properly but when I try to produce message from console I keep getting the following error WARN Error while fetching metadata with correlation…
solved by adding auto.create.topics.enable=true
thank you for sharing the solution. I see a few people mentioned auto creating but it doesn’t look like anybody explicitly used that – might want to share it with the community on that SO post.
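For anyone wondering how to apply that broker property on MSK: broker overrides go into a custom MSK configuration, which then has to be attached to the cluster. A hedged boto3 sketch (the configuration name and the extra replication-factor line are illustrative; the function is defined but not invoked here):

```python
# Broker properties for the custom MSK configuration;
# only auto.create.topics.enable comes from the thread above.
SERVER_PROPERTIES = b"""auto.create.topics.enable=true
default.replication.factor=2
"""

def create_msk_configuration(name="dev-msk-config"):
    """Create a custom MSK configuration carrying the broker overrides."""
    import boto3  # imported lazily; only needed when actually calling AWS
    client = boto3.client("kafka")
    return client.create_configuration(
        Name=name,
        KafkaVersions=["2.8.0"],
        ServerProperties=SERVER_PROPERTIES,
    )
```

The returned configuration ARN is then referenced from the cluster (e.g. via the module's configuration inputs or `update_cluster_configuration`).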
Good morning. I am not able to get AWS codepipeline to work with the “cloudposse/ecs-codepipeline/aws” module. I get an error:
Error: POST https://api.github.com/repos/<name>/<app>/hooks: 404 Not Found []
│
│ with module.ecs_push_pipeline.module.github_webhooks.github_repository_webhook.default[0],
│ on .terraform/modules/ecs_push_pipeline.github_webhooks/main.tf line 7, in resource "github_repository_webhook" "default":
│ 7: resource "github_repository_webhook" "default" {
Here is the main.tf for the pipeline:
module "ecs_push_pipeline" {
source = "cloudposse/ecs-codepipeline/aws"
version = "0.28.1"
name = var.name
namespace = var.namespace
stage = "stage"
image_repo_name = var.imgRepoName
region = var.aws_region
github_oauth_token = "<secure_token_from_github_org_oauth_creation>"
github_webhooks_token = "<github_repo_webhook_secret>"
repo_owner = var.owner
repo_name = var.repo
branch = "master"
service_name = "test-app-service"
ecs_cluster_name = "${var.name}-ecs-cluster"
privileged_mode = true
cache_bucket_suffix_enabled = false # important: see <https://github.com/cloudposse/terraform-aws-codebuild/issues/91>
}
We have to use the deprecated-but-still-supported AWS OAuth access; CodeStar Connections is not an option as we are multi-region and it is not supported in AP or EU regions. Any help greatly appreciated.
I’m seeing this same error with the same module setup. Were you able to resolve it?
For anyone who runs across this error in the future, it’s a permissions problem. See here for more.
2021-10-09
Hi everyone. I have a problem with CloudFormation. The current status is UPDATE_ROLLBACK_FAILED for an ECS service. I want to update the parameter that takes the Docker image to latest, but I’m unable to create a change set or update the template. Any help would be great.
2021-10-11
Hey everyone, so I have a simple question that I’m probably searching poorly for an answer for, but can’t find. So even just posting a link to an article about it is enough for me .
How do you request an internal ELB from say a node application?
For reference what I’m trying to do: My company has always worked in a monolith architecture, which is becoming a huge pain for us, so we want to split off some stuff into micro services. I’m trying to start with the most basic setup possible that allows us to easily add on to/move to better architecture in the future. We run a grails (Groovy/Java based back end) on elastic beanstalk instances. I want to launch the first service in a few EC2 instances (1 per environment) to start.
What I’m trying to figure out right now is the routing to keep the request from having to go out to the internet and back in to the ELB we are currently using. I know I can use a new ELB that’s internal to route requests dynamically based on URI so that we don’t have to hard code IPs/change per environment. Is there a specific internal IP/url the load balancer is always launched to? Or how can I consistently request it from a grails/node application. Am I overthinking it?
Unless you are going to be sending terabytes of data, don’t worry about “routing to keep the request from having to go out to the internet”
@Alex Jurkiewicz it’s more so that this is an auth service that has zero need to even be exposed to the public
Each ALB (you said ELB but I assume you aren’t using that old school thing) has a hostname, this hostname is fixed for the life of the LB.
You can CNAME a prettier name to that. For example, my-service.company.com CNAMEs to alb-1234567890abcdef.amazonaws.com.
Then, add a security group to the load balancer which states “only allow incoming requests on tcp/443 from security group app-servers”
The security group will provide protection against public exposure. And you can also deploy the ALB into a private subnet, so it has no routable public address people can connect to.
just adding that when you create your ALB, configure it as internal
https://i.stack.imgur.com/uenlO.png
Setup We have an ECS cluster with 2 services (called portal-ECS-service and graph-ECS-service). Each have an ALB (portal-ALB and graph-ALB respectively). The setup is this: End user <-> porta…
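To double-check the scheme after the fact, something like this boto3 sketch could help (the predicate is a pure function; the lookup function and ALB name are illustrative and not invoked here):

```python
def is_internal(lb: dict) -> bool:
    """True when a describe-load-balancers entry is scheme 'internal'."""
    return lb.get("Scheme") == "internal"

def lookup_alb(name: str) -> dict:
    """Fetch one load balancer's description by name."""
    import boto3  # lazy import so the predicate above stays dependency-free
    elbv2 = boto3.client("elbv2")
    return elbv2.describe_load_balancers(Names=[name])["LoadBalancers"][0]
```

An internal ALB also resolves only to private IPs, so `nslookup` on its DNS name from inside the VPC is a quick manual check.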
2021-10-12
Anyone noticing issues logging into AWS using the built-in SSO? Just started?
Returning 504.
Its down
Real-time AWS (Amazon Web Services) status. Is AWS down or suffering an outages? Here you see what is going on.
AWS Dashboard seems to be offline for the last ~30 minutes.
Seems to be limited to us-east; I am able to log in to the Canada region specifically.
This is why you deploy SSO in any other region
is it us-east-1 or all of us-east!?
2021-10-13
*New customers can access two Availability Zones in US West (Northern California).
is it a support ticket to get this unlocked
Nope.
$ aws ec2 describe-availability-zones
{
"AvailabilityZones": [
{
"State": "available",
"OptInStatus": "opt-in-not-required",
"Messages": [],
"RegionName": "us-west-1",
"ZoneName": "us-west-1b",
"ZoneId": "usw1-az3",
"GroupName": "us-west-1",
"NetworkBorderGroup": "us-west-1",
"ZoneType": "availability-zone"
},
{
"State": "available",
"OptInStatus": "opt-in-not-required",
"Messages": [],
"RegionName": "us-west-1",
"ZoneName": "us-west-1c",
"ZoneId": "usw1-az1",
"GroupName": "us-west-1",
"NetworkBorderGroup": "us-west-1",
"ZoneType": "availability-zone"
}
]
}
thanks @bradym
Does anyone have experience implementing centralized security logging in AWS? I’ve created an Organization-level CloudTrail, but I need to clean up the resources in the sub-accounts.
2021-10-14
has anyone seen this before … https://twitter.com/swade1987/status/1448584648771133441?s=20
Private Hosted Zone? If so, then yes.
same but its a public zone, which confuses me
2021-10-16
Hi everyone, I’ve seen big swings in the creation times of RDS MySQL instances created from a snapshot. Sometimes it’s 30 min, other times more than 2 h, in which case Terraform times out. Have any of you had a similar experience?
yup - depends on the size of the snapshot, the size of the instance you’re restoring and the queue/availability in that AZ…which TMK is a black hole
thanks. it’s most likely the black hole because everything else stays the same
2021-10-17
Hello everyone! I’m looking at this module to create a website and host it in AWS S3. I see that it also stores all the traffic (and whatnot) logs in another S3 bucket. I wonder how people usually go about analyzing (mostly reviewing) them :thinking_face:. Is it possible to store/send them to CloudWatch also/instead? So far, I can only download them to see what’s going on, but I’m sure that’s far from the trend.
@x80486 This would be a good starting point: https://aws.amazon.com/blogs/big-data/analyzing-amazon-s3-server-access-logs-using-amazon-es/
September 8, 2021: Amazon Elasticsearch Service has been renamed to Amazon OpenSearch Service. See details. When you use Amazon Simple Storage Service (Amazon S3) to store corporate data and host websites, you need additional logging to monitor access to your data and the performance of your application. An effective logging solution enhances security and improves […]
You don’t have to actually use ElasticSearch and Kibana, but it is just so you can get an idea of how people analyze S3 access logs
Thanks @Constantine Kurianoff! I’ll check that out! I’m used to seeing all the logs in CloudWatch, so I was wondering if this could be the same, but probably the trend is to just ingest them into Elasticsearch for further use.
Yes, if you look for rich data visualization, you go with either ElasticSearch, or Grafana, or anything of that kind. I don’t have much experience with CloudWatch, but I think you can build dashboards and some good visualizations there as well
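If standing up OpenSearch/Kibana feels heavy, S3 server access logs can also be parsed directly. The sample line below is abridged and illustrative, but the tokenization (bracketed timestamps and quoted fields alongside bare tokens) follows the documented log format:

```python
import re

# A field is either [bracketed], "quoted", or a bare whitespace-separated token.
TOKEN = re.compile(r'\[[^\]]*\]|"[^"]*"|\S+')

def parse_access_log_line(line: str):
    """Split one S3 server access log line into its fields."""
    return TOKEN.findall(line)

# Abridged, illustrative sample line following the documented field order
sample = ('79a5 my-bucket [06/Feb/2019:00:00:38 +0000] 192.0.2.3 79a5 '
          '3E57427F3EXAMPLE REST.GET.OBJECT my-image.jpg '
          '"GET /my-image.jpg HTTP/1.1" 200 - 368296 368296 70 10 "-" "curl/7.15" -')
fields = parse_access_log_line(sample)
```

From there the fields can be aggregated with plain Python, or the same logs queried in place with Athena.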
2021-10-18
Hi Everyone- question about RDS/postgres and schema permissions. We have an RDS instance that was set up with a bastion host that forwards traffic from port 8887 to 5432 on the RDS instance. I’ve been accessing the instance with SSH tunneling and have successfully created and populated tables. I wanted to POC a data viz program and changed the instance to publicly accessible but only open to traffic on port 5432 for a couple of IP addresses. I can successfully connect to the instance now, but all the schemas I created while using SSH tunneling disappear. I’ve checked out permissions for the user but nothing stands out.
Is there a setting somewhere in RDS/postgres that connects port to schema privileges?
did you use the same db user?
are you looking in the same database? Different programs can offer a different default
I connected to the same database as the same user. The only thing I changed was the port.
2021-10-19
Hi all, we have an EKS cluster that holds our web app.
When we try to upload even a small 250 KB file, it throws a 413 error response, which indicates the file size limit was exceeded.
The php.ini settings seem to be OK:
• post_max_size = 12M
• upload_max_filesize = 10M
• memory_limit = 128M
Any ideas what else we should check?
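If the app sits behind an ingress controller on EKS, the 413 may come from a proxy layer in front of PHP rather than php.ini; the NGINX ingress controller, for example, enforces its own request-body limit. A hedged example, assuming the NGINX ingress controller and an Ingress named web-app (both names illustrative):

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: web-app
  annotations:
    # Raise the request-body limit enforced by the NGINX ingress controller
    nginx.ingress.kubernetes.io/proxy-body-size: "20m"
```

Checking the ingress controller's logs for the 413 would confirm which layer is rejecting the upload.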
2021-10-25
Folks,
Issue #42 of my low-volume (once a week) newsletter “AWS Security Digest” is out.
What you will find:
- Highlight of the week
- Change since last week on AWS Managed IAM Policies
- Curated Cloud Security Newsletters
- AWS API changes
- IAM Permissions changes
- Most upvoted posts on r/AWS
- Top shared links on Twitter (by cloudsec folks)
- Most engaged Tweets from the community
Adopt a slow-tech approach by reading only an essential digest summary of what is going on in the AWS security landscape.
With 300+ subscribers already, including folks from @netflix and @amazon, you can’t go wrong :)
https://app.mailbrew.com/zoph/aws-security-digest-HrkhwqNrwBBk
AWS Security Digest Weekly Newsletter. Curated by Victor GRENU.
2021-10-27
:wave: Is there a way to get the number of bytes written to S3 per day for a given AWS account (bonus if you can wildcard the bucket name, e.g. foo-dev-*)?
Per bucket, use access logs
Overall, you can probably read the billing report data
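For per-bucket daily numbers without parsing logs, the CloudWatch BytesUploaded request metric can be summed per bucket, though it's only emitted when S3 request metrics are enabled for the bucket. A hedged boto3 sketch (the filter ID and function names are illustrative; the wildcard matching is just fnmatch, and the AWS call is not invoked here):

```python
import fnmatch
from datetime import datetime, timedelta

def matching_buckets(names, pattern):
    """Wildcard-filter bucket names, e.g. pattern='foo-dev-*'."""
    return [n for n in names if fnmatch.fnmatch(n, pattern)]

def bytes_uploaded_last_day(bucket: str, filter_id: str = "EntireBucket"):
    """Sum of BytesUploaded over the last 24h; needs request metrics enabled."""
    import boto3  # lazy import; only needed when actually calling AWS
    cw = boto3.client("cloudwatch")
    now = datetime.utcnow()
    resp = cw.get_metric_statistics(
        Namespace="AWS/S3",
        MetricName="BytesUploaded",
        Dimensions=[
            {"Name": "BucketName", "Value": bucket},
            {"Name": "FilterId", "Value": filter_id},
        ],
        StartTime=now - timedelta(days=1),
        EndTime=now,
        Period=86400,
        Statistics=["Sum"],
    )
    return sum(p["Sum"] for p in resp["Datapoints"])
```

Looping `bytes_uploaded_last_day` over `matching_buckets(all_bucket_names, "foo-dev-*")` covers the wildcard part of the question.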
2021-10-28
FYI, new feature for containers on AWS launches in less than 3 hours on Twitch: https://twitter.com/iamvlaaaaaaad/status/1453754371880230918
AWS pre-announcing a new feature is… interesting.
Uuu, a new feature for containers on AWS launches in 3-ish hours
I love the idea of doing this over Twitch! In addition to the awesome presenters, we get live interaction, Q&A, long-form demos, and thoughts from the actual devs! https://twitter.com/realadamjkeller/status/1453446212955242497
We have a special #ContainersFromTheCouch feature launch episode tomorrow @ 12pm PST. You won’t want to miss it! Links below, and get in line because from what I hear the crowds are going to be crazy!
https://www.youtube.com/watch?v=CiINJxFeNVg https://www.twitch.tv/aws
2021-10-29
I made an aws profile switcher in GO based on awsp if anyone wants to check it out https://github.com/pjaudiomv/awsd
AWS Profile Switcher in Go. Contribute to pjaudiomv/awsd development by creating an account on GitHub.
Looks great, I’ll check it out. I actually submitted this sort of functionality to the AWS plugin for OhMyZsh; there’s the ‘asp’ alias if you have the plugin enabled. Just type ‘asp’ and the Tab key to cycle through the available profiles.
Oh no way, I’ll have to check that out. I didn’t know there was a aws plug-in. Thanks
Question: does this approach work with profiles updated by SAML logins?
I have several profiles, but they also have the AWS_SESSION_EXPIRATION and AWS_SESSION_TOKEN values set after logging in using an SSO app.
All it does is set or unset the AWS_PROFILE var
ok. got it.
@managedkaos I think Leapp is better for solving issues with SAML authentication: https://github.com/Noovolari/leapp
Leapp is the DevTool to access your cloud. Contribute to Noovolari/leapp development by creating an account on GitHub.
Yeah, Leapp is still a little green, but it has great promise. If you’re using SAML for auth, though, it’s definitely the best solution I’ve found right now.
ok! In the meantime, I am using Python virtual envs to switch AWS accounts. It works well since I can set other things in the environment that are project/account specific.
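For the curious, listing profiles the way these switchers do is mostly just parsing ~/.aws/config. A minimal sketch (the sample config text is illustrative):

```python
import configparser

def list_profiles(config_text: str):
    """Extract profile names from ~/.aws/config-style content."""
    parser = configparser.ConfigParser()
    parser.read_string(config_text)
    profiles = []
    for section in parser.sections():
        # Sections look like [default] or [profile my-dev]
        if section == "default":
            profiles.append("default")
        elif section.startswith("profile "):
            profiles.append(section[len("profile "):])
    return profiles

SAMPLE = """
[default]
region = us-east-1

[profile staging]
region = us-west-2
"""
```

Switching is then just exporting AWS_PROFILE with one of the returned names, which is essentially what awsd and the OhMyZsh `asp` alias do.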