#aws (2024-05)
Discussion related to Amazon Web Services (AWS)
Archive: https://archive.sweetops.com/aws/
2024-05-04
Hi everyone, quick question - does anyone have experience with serverless applications and vector DBs?
2024-05-07
2024-05-08
Anyone using microsecond time accuracy on AWS EC2? Availability is still limited, at least based on this https://aws.amazon.com/about-aws/whats-new/2024/04/amazon-time-sync-service-microsecond-accurate-time-additonal-ec2-instance-types/ which is two weeks old. How does one make chrony use it?
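For context, the microsecond-accurate option is delivered through the instance's local PTP hardware clock rather than the usual 169.254.169.123 NTP endpoint, so chrony has to be pointed at the PHC device. A minimal sketch, assuming a supported instance type with an ENA driver that exposes /dev/ptp0 (per the EC2 user guide; the file path and fallback server line are illustrative, check your own setup):
# /etc/chrony.d/aws-phc.conf (path is illustrative)
# Prefer the local PTP hardware clock exposed by the ENA driver
refclock PHC /dev/ptp0 poll 0 delay 0.000010 prefer
# Keep the standard Amazon Time Sync NTP endpoint as a fallback
server 169.254.169.123 iburst minpoll 4 maxpoll 4
After restarting chronyd, chronyc sources should show the PHC refclock being selected.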
2024-05-16
2024-05-20
Why is it showing unauthorized? I tried changing the outbound configuration, but it didn't work. Any suggestions?
Is your instance using IMDSv2? If so, you need a token.
https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/instance-metadata-v2-how-it-works.html
IMDSv2 uses session-oriented requests. With session-oriented requests, you create a session token that defines the session duration, which can be a minimum of one second and a maximum of six hours. During the specified duration, you can use the same session token for subsequent requests. After the specified duration expires, you must create a new session token to use for future requests.
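For reference, the token flow from that page boils down to one PUT to mint the session token and then passing it as a header on every metadata request; something like this (the TTL value and the meta-data path are just examples):
# Request a session token valid for 6 hours
TOKEN=$(curl -s -X PUT "http://169.254.169.254/latest/api/token" \
  -H "X-aws-ec2-metadata-token-ttl-seconds: 21600")
# Use the token on subsequent metadata requests
curl -s -H "X-aws-ec2-metadata-token: $TOKEN" \
  http://169.254.169.254/latest/meta-data/iam/security-credentials/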
Thank you so much for the solution, it worked.
2024-05-21
This message was deleted.
2024-05-22
Has anyone tried to use GCP Datastream here? I'm trying to connect AWS RDS Aurora MySQL in a Datastream connection profile. I tried VPC peering, but only the instances are connecting. I've already set up a VPN for this and also a transit gateway in AWS.
Interesting… didn’t know that was possible
2024-05-23
Getting this error during api-gateway-deployment creation:
{"@level":"error","@message":"Error: creating API Gateway Deployment: operation error API Gateway: CreateDeployment, https response error StatusCode: 400, RequestID: dac22dcb-084c-41ff-8d0c-28b8006fa136, BadRequestException: AWS ARN for integration contains invalid action","@module":"terraform.ui","@timestamp":"2024-05-24T03:53:49.371354Z","diagnostic":{"severity":"error","summary":"creating API Gateway Deployment: operation error API Gateway: CreateDeployment, https response error StatusCode: 400, RequestID: dac22dcb-084c-41ff-8d0c-28b8006fa136, BadRequestException: AWS ARN for integration contains invalid action","detail":"","address":"module.fna-publisher-management.module.apigateway.aws_api_gateway_deployment.alertutils-deployment-1","range":{"filename":"../../../../projects/supply-experience/fna-publisher-management/prod/apigateway/main.tf","start":{"line":143,"column":65,"byte":5061},"end":{"line":143,"column":66,"byte":5062}},"snippet":{"context":"resource \"aws_api_gateway_deployment\" \"alertutils-deployment-1\"","code":"resource \"aws_api_gateway_deployment\" \"alertutils-deployment-1\" {","start_line":143,"highlight_start_offset":64,"highlight_end_offset":65,"values":[]}},"type":"diagnostic"}
This is the API Gateway main.tf, and it is failing on the deployment part:
# Define the REST API
resource "aws_api_gateway_rest_api" "alertutils" {
  api_key_source               = "HEADER"
  description                  = "alert utils"
  disable_execute_api_endpoint = false
  endpoint_configuration {
    types = ["EDGE"]
  }
  minimum_compression_size = -1
  name                     = "alertutils"
}
# Define the resource
resource "aws_api_gateway_resource" "alertutils-resource-1" {
  parent_id   = aws_api_gateway_rest_api.alertutils.root_resource_id
  path_part   = "preview"
  rest_api_id = aws_api_gateway_rest_api.alertutils.id
}
# Define the GET method
resource "aws_api_gateway_method" "alertutils-method-get" {
  api_key_required = false
  authorization    = "NONE"
  http_method      = "GET"
  resource_id      = aws_api_gateway_resource.alertutils-resource-1.id
  rest_api_id      = aws_api_gateway_rest_api.alertutils.id
}
# Define the POST method
resource "aws_api_gateway_method" "alertutils-method-post" {
  api_key_required = false
  authorization    = "NONE"
  http_method      = "POST"
  resource_id      = aws_api_gateway_resource.alertutils-resource-1.id
  rest_api_id      = aws_api_gateway_rest_api.alertutils.id
}
# Define the GET method response
resource "aws_api_gateway_method_response" "alertutils-method-response-get" {
  http_method = aws_api_gateway_method.alertutils-method-get.http_method
  resource_id = aws_api_gateway_resource.alertutils-resource-1.id
  response_models = {
    "application/json" = "Empty"
  }
  response_parameters = {
    "method.response.header.Access-Control-Allow-Headers" = false
    "method.response.header.Access-Control-Allow-Methods" = false
    "method.response.header.Access-Control-Allow-Origin"  = false
  }
  rest_api_id = aws_api_gateway_rest_api.alertutils.id
  status_code = "200"
}
# Define the POST method response
resource "aws_api_gateway_method_response" "alertutils-method-response-post" {
  http_method = aws_api_gateway_method.alertutils-method-post.http_method
  resource_id = aws_api_gateway_resource.alertutils-resource-1.id
  response_models = {
    "application/json" = "Empty"
  }
  response_parameters = {
    "method.response.header.Access-Control-Allow-Headers" = false
    "method.response.header.Access-Control-Allow-Methods" = false
    "method.response.header.Access-Control-Allow-Origin"  = false
  }
  rest_api_id = aws_api_gateway_rest_api.alertutils.id
  status_code = "200"
}
# Define the GET integration
resource "aws_api_gateway_integration" "alertutils-integration-get" {
  cache_namespace      = aws_api_gateway_resource.alertutils-resource-1.id
  connection_type      = "INTERNET"
  http_method          = aws_api_gateway_method.alertutils-method-get.http_method
  passthrough_behavior = "WHEN_NO_MATCH"
  request_templates = {
    "application/json" = "{\"statusCode\": 200}"
  }
  resource_id          = aws_api_gateway_resource.alertutils-resource-1.id
  rest_api_id          = aws_api_gateway_rest_api.alertutils.id
  timeout_milliseconds = 29000
  type                 = "MOCK"
}
# Define the POST integration
resource "aws_api_gateway_integration" "alertutils-integration-post" {
  cache_namespace      = aws_api_gateway_resource.alertutils-resource-1.id
  connection_type      = "INTERNET"
  http_method          = aws_api_gateway_method.alertutils-method-post.http_method
  passthrough_behavior = "WHEN_NO_MATCH"
  request_templates = {
    "application/json" = "{\"statusCode\": 200}"
  }
  resource_id          = aws_api_gateway_resource.alertutils-resource-1.id
  rest_api_id          = aws_api_gateway_rest_api.alertutils.id
  timeout_milliseconds = 29000
  type                 = "MOCK"
}
# Define the GET integration response
resource "aws_api_gateway_integration_response" "alertutils-integration-response-get" {
  http_method = aws_api_gateway_integration.alertutils-integration-get.http_method
  resource_id = aws_api_gateway_resource.alertutils-resource-1.id
  response_parameters = {
    "method.response.header.Access-Control-Allow-Headers" = "'Content-Type,X-Amz-Date,Authorization,X-Api-Key,X-Amz-Security-Token'"
    "method.response.header.Access-Control-Allow-Methods" = "'GET,POST'"
    "method.response.header.Access-Control-Allow-Origin"  = "'*'"
  }
  rest_api_id = aws_api_gateway_rest_api.alertutils.id
  status_code = "200"
}
# Define the POST integration response
resource "aws_api_gateway_integration_response" "alertutils-integration-response-post" {
  http_method = aws_api_gateway_integration.alertutils-integration-post.http_method
  resource_id = aws_api_gateway_resource.alertutils-resource-1.id
  response_parameters = {
    "method.response.header.Access-Control-Allow-Headers" = "'Content-Type,X-Amz-Date,Authorization,X-Api-Key,X-Amz-Security-Token'"
    "method.response.header.Access-Control-Allow-Methods" = "'GET,POST'"
    "method.response.header.Access-Control-Allow-Origin"  = "'*'"
  }
  rest_api_id = aws_api_gateway_rest_api.alertutils.id
  status_code = "200"
}
# Define the deployment
resource "aws_api_gateway_deployment" "alertutils-deployment-1" {
  rest_api_id = aws_api_gateway_rest_api.alertutils.id
  depends_on = [
    aws_api_gateway_method.alertutils-method-get,
    aws_api_gateway_integration.alertutils-integration-get,
    aws_api_gateway_method_response.alertutils-method-response-get,
    aws_api_gateway_integration_response.alertutils-integration-response-get,
    aws_api_gateway_method.alertutils-method-post,
    aws_api_gateway_integration.alertutils-integration-post,
    aws_api_gateway_method_response.alertutils-method-response-post,
    aws_api_gateway_integration_response.alertutils-integration-response-post,
  ]
}
# Define the stage
resource "aws_api_gateway_stage" "alertutils-v1-stage" {
  cache_cluster_enabled = false
  cache_cluster_size    = "0.5"
  deployment_id         = aws_api_gateway_deployment.alertutils-deployment-1.id
  rest_api_id           = aws_api_gateway_rest_api.alertutils.id
  stage_name            = "v1"
  xray_tracing_enabled  = false
}
This sounds like a #terraform question
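For what it's worth, that BadRequestException usually means API Gateway could not parse an integration's uri ARN; the MOCK integrations above have no uri, so it may be coming from another integration already attached to the same REST API (for example one created outside this module or edited in the console). Purely as an illustration of the expected format, a Lambda integration uri looks roughly like the sketch below; every name in it is a placeholder, not something taken from the thread:
data "aws_region" "current" {}

resource "aws_api_gateway_integration" "example_lambda" {
  rest_api_id             = aws_api_gateway_rest_api.example.id
  resource_id             = aws_api_gateway_resource.example.id
  http_method             = aws_api_gateway_method.example.http_method
  integration_http_method = "POST"
  type                    = "AWS_PROXY"
  # The uri must be the full apigateway "path" ARN ending in /invocations;
  # a truncated or hand-assembled value here is a common cause of
  # "AWS ARN for integration contains invalid action".
  uri = "arn:aws:apigateway:${data.aws_region.current.name}:lambda:path/2015-03-31/functions/${aws_lambda_function.example.arn}/invocations"
}
The aws_lambda_function attribute invoke_arn expands to exactly that string, so uri = aws_lambda_function.example.invoke_arn is the simpler spelling.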
2024-05-24
How do I identify which S3 bucket an OpenSearch cluster backup is stored in when the backup is automated?
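As far as I know, the automated snapshots for Amazon OpenSearch Service live in a service-managed S3 bucket that never appears in your own account; they are only reachable through the pre-registered cs-automated (or cs-automated-enc for encrypted domains) snapshot repository. A quick way to check, with your-domain-endpoint as a placeholder (and signed requests if fine-grained access control requires them):
# List the snapshot repositories registered on the domain
curl -XGET 'https://your-domain-endpoint/_snapshot?pretty'
# List the automated snapshots held in the service-managed repository
curl -XGET 'https://your-domain-endpoint/_snapshot/cs-automated/_all?pretty'
Only manual snapshots go to an S3 bucket that you own and registered yourself.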
2024-05-25
2024-05-28
I'd like to enable this root-user deny SCP, but what if an S3 bucket or another resource policy was misconfigured, locking out all users except the root user? Wouldn't we need a list of exceptions?
https://docs.aws.amazon.com/IAM/latest/UserGuide/root-user-tasks.html
Learn which tasks in AWS require that you sign in using root user credentials.
I'd probably consider some logic around the SCP statement and the condition, where I update some input with the excluded account, and the SCP then allows the action for that account.
make the change as root, then revert the change so the root user is denied again
Why not just deny everything using NotAction? This way we could deny everything except resource policies (s3 DeleteBucketPolicy, kms, secretsmanager, dynamodb, etc.) and account settings.
We could also limit those actions to a specific source IP to restrict it to a VPN which requires an IdP.
Something like this
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "DenyRootUsage",
      "Effect": "Deny",
      "NotAction": [
        "account:*",
        "dynamodb:DeleteResourcePolicy",
        "kms:PutKeyPolicy",
        "organizations:*",
        "s3:DeleteBucketPolicy",
        "secretsmanager:DeleteResourcePolicy"
      ],
      "Resource": "*",
      "Condition": {
        "StringLike": {
          "aws:PrincipalArn": ["arn:aws:iam::*:root"]
        },
        "NotIpAddress": {
          "aws:SourceIp": "1.2.3.4"
        }
      }
    }
  ]
}
If you wanted to always allow root in every account to perform those actions, sure
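If it helps, the "update some input with the excluded account" idea can be sketched in Terraform by feeding an exception list into the Deny condition. This is only a sketch: the resource name, variable, and account ID are illustrative, the NotAction list is trimmed, and attaching the policy to an OU or account is left to aws_organizations_policy_attachment.
variable "root_deny_exception_account_ids" {
  description = "Accounts where the root deny is temporarily lifted (placeholder)"
  type        = list(string)
  default     = ["111111111111"] # hypothetical account ID
}

resource "aws_organizations_policy" "deny_root_usage" {
  name = "deny-root-usage"
  type = "SERVICE_CONTROL_POLICY"

  content = jsonencode({
    Version = "2012-10-17"
    Statement = [{
      Sid       = "DenyRootUsage"
      Effect    = "Deny"
      NotAction = ["account:*", "organizations:*"]
      Resource  = "*"
      Condition = {
        StringLike = { "aws:PrincipalArn" = ["arn:aws:iam::*:root"] }
        # Skip the deny entirely for accounts on the exception list
        StringNotEquals = { "aws:PrincipalAccount" = var.root_deny_exception_account_ids }
      }
    }]
  })
}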
2024-05-30
We have containerd on Kubernetes, and we have an ECR repo. We would like to use ECR as a pull-through cache. The config:
server = "https://docker.io"

[host."https://12345678.dkr.ecr.eu-central-1.amazonaws.com/dockerhub"]
  capabilities = ["resolve", "pull"]
I get a 403 if I add the aws ecr get-login-password result to the containerd config file, which is not constant, so that's not ideal. Without it, I get a 401.
Any idea?
To set up containerd to use AWS ECR as a pull-through cache for Docker Hub, you need to configure authentication properly. The errors you're encountering (403 and 401) indicate issues with permissions and authentication.
1. Create an IAM policy for your ECR repository that allows the ecr:BatchCheckLayerAvailability, ecr:GetDownloadUrlForLayer, and ecr:GetAuthorizationToken actions.
2. Attach the policy to an IAM role or user that your Kubernetes nodes will use.
3. Retrieve ECR credentials: use the AWS CLI to get the ECR login password and configure it in containerd. Although you've mentioned the password is not constant, you can use a Kubernetes secret and refresh it periodically using a cron job or another automation tool.
4. Configure containerd: update the containerd configuration file (/etc/containerd/config.toml) to include the AWS ECR credentials. Here's an example configuration:
toml
version = 2

[plugins."io.containerd.grpc.v1.cri".registry]
  [plugins."io.containerd.grpc.v1.cri".registry.mirrors]
    [plugins."io.containerd.grpc.v1.cri".registry.mirrors."docker.io"]
      endpoint = ["https://12345678.dkr.ecr.eu-central-1.amazonaws.com/dockerhub"]
  [plugins."io.containerd.grpc.v1.cri".registry.configs]
    [plugins."io.containerd.grpc.v1.cri".registry.configs."12345678.dkr.ecr.eu-central-1.amazonaws.com".auth]
      username = "AWS"
      password = "<YOUR_ECR_PASSWORD>"
    [plugins."io.containerd.grpc.v1.cri".registry.configs."12345678.dkr.ecr.eu-central-1.amazonaws.com".tls]
      ca_file = ""
      cert_file = ""
      key_file = ""
5. Automate ECR login: use a Kubernetes CronJob or DaemonSet to periodically refresh the ECR login credentials. Here's an example script to update the containerd config:
bash
#!/bin/bash
# Get ECR login password
PASSWORD=$(aws ecr get-login-password --region eu-central-1)
# Update containerd config
sudo sed -i "s|password = \".*\"|password = \"${PASSWORD}\"|" /etc/containerd/config.toml
# Restart containerd to apply changes
sudo systemctl restart containerd
Create a Kubernetes CronJob to run this script periodically:
yaml
apiVersion: batch/v1
kind: CronJob
metadata:
  name: update-containerd-creds
spec:
  schedule: "0 * * * *" # Every hour
  jobTemplate:
    spec:
      template:
        spec:
          containers:
            - name: update-creds
              image: amazonlinux:2
              command: ["/bin/bash", "-c"]
              args: ["<your_script_path>/update_containerd_creds.sh"]
              env:
                - name: AWS_REGION
                  value: "eu-central-1"
                - name: AWS_ACCESS_KEY_ID
                  valueFrom:
                    secretKeyRef:
                      name: aws-secret
                      key: aws-access-key-id
                - name: AWS_SECRET_ACCESS_KEY
                  valueFrom:
                    secretKeyRef:
                      name: aws-secret
                      key: aws-secret-access-key
          restartPolicy: OnFailure
6. Test the configuration: restart containerd and try pulling an image to verify the configuration:
bash
sudo systemctl restart containerd
ctr image pull docker.io/library/your-image:latest
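One caveat on that test step: ctr talks to containerd directly and, to my knowledge, does not use the CRI plugin's registry.mirrors configuration, so a pull that actually exercises the mirror path is better checked with crictl (assuming crictl is installed and pointed at the containerd socket):
# Pull through the CRI plugin, which is what honours the registry mirror config
sudo crictl pull docker.io/library/your-image:latest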