#kubernetes (2024-03)

kubernetes

Archive: https://archive.sweetops.com/kubernetes/

2024-03-15

rohit avatar

still fairly new to k8s, but wracking my brain on a small issue:

we have a main app service that will have a sidecar container. the sidecar provides a “broker” (of sorts) to facilitate writing / getting secrets from a customer’s external secrets management system, and it allows the main app to make requests to get / write / delete secrets.

we have a k8s job that provisions a database (db, tables, schemas, grants, etc). this job will need to also get secrets from this sidecar container.

i think it’s possible to expose ports for the main app and sidecar container. that way we have this setup:

main-service.svc.cluster.local:8443 - main app
main-service.svc.cluster.local:6666 - sidecar

is it possible for another pod or k8s job to interact with this sidecar container by using the main service’s DNS + port for the sidecar?

currently we have this secrets-broker as its own service/pod so other pods (that support our product) can communicate with it and fetch/write secrets. but we’re getting pushback and being told this needs to be a sidecar.

i am open to any suggestions to improve our security posture here.
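For reference, since the sidecar shares the pod’s network namespace, a single Service in front of that pod can expose both ports, and any other pod or Job in the cluster can then reach the sidecar through the Service DNS name. A minimal sketch, with the names and ports above assumed:

    apiVersion: v1
    kind: Service
    metadata:
      name: main-service
    spec:
      selector:
        app: main-app            # assumed label on the pod running both containers
      ports:
        - name: app
          port: 8443
          targetPort: 8443       # main app container
        - name: secrets-broker
          port: 6666
          targetPort: 6666       # sidecar container (same pod network namespace)

Note that exposing the broker on a cluster-reachable Service means any pod that can resolve it can call it, so a NetworkPolicy restricting who may reach port 6666 is worth considering.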

Moti avatar

why not use the same sidecar on the “k8s job” ?



2024-03-19

rohit avatar

Does anyone know how we can have our pods authenticate to an external vault (hashicorp) to fetch credentials, without having to use or provide a VAULT_TOKEN via kubernetes secrets? is there a more secure way?

rohit avatar

Unfortunately we are using standalone clusters and VMware’s TKG clusters, so I’m assuming pod identities are not possible?

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)
Kubernetes - Auth Methods | Vault | HashiCorp Developer

The Kubernetes auth method allows automated authentication of Kubernetes Service Accounts.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)


The kubernetes auth method can be used to authenticate with Vault using a Kubernetes Service Account Token. This method of authentication makes it easy to introduce a Vault token into a Kubernetes Pod.
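As a rough illustration of how that removes the need for a pre-provisioned VAULT_TOKEN: with the Kubernetes auth method enabled in Vault and the Vault Agent Injector installed in the cluster, the pod authenticates with its own service account token. A minimal sketch of the pod-template annotations, with the Vault role and secret path assumed:

    spec:
      template:
        metadata:
          annotations:
            vault.hashicorp.com/agent-inject: "true"
            vault.hashicorp.com/role: "main-app"                                          # Vault Kubernetes auth role (assumed)
            vault.hashicorp.com/agent-inject-secret-db-creds: "secret/data/main-app/db"   # assumed KV path
        spec:
          serviceAccountName: main-app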

rohit avatar

Ah, thank you Erik! Yes this is what we’re using now. But in our case, we deploy our helm charts to a customer’s k8s cluster (assuming they have this VAULT_TOKEN already in k8s secrets), and then we use that to fetch secrets from a customer’s external vault system.

The concern a product owner had was that this TOKEN is just base64 encoded as a k8s secret, and that it is “unsafe”.

The concern we had as the dev team was how do we ensure a VAULT_TOKEN is still valid for our services running in a customer cluster, and how is this token being refreshed / updated?

rohit avatar

Our helm install works seamlessly if the token is already there, but for our testing, how can we automate getting the token created in our k8s cluster for our helm install to use?

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)


Kubernetes as an OIDC provider

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

It sounds like you’re not using that

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Then they should be short-lived and automatically rotated

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

(I don’t have first-hand experience, but my mental model is: just as you’d use GitHub OIDC + AWS, here you want Kubernetes OIDC with Vault)

2024-03-25

Zing avatar

hey there, curious how people are handling pre-deploy checks in CICD these days. I think we’ll end up using conftest for terraform-related tasks, but still looking for the best option for our argocd+kubernetes deployments. I think conftest works here too (evaluate the k8s manifests on PR opening) but looking for some advice. Thanks!
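One common pattern (sketched here with an assumed GitHub Actions-style pipeline and chart name) is to render the chart on PR open and evaluate the resulting manifests with conftest before Argo CD ever syncs them:

    jobs:
      policy-check:
        runs-on: ubuntu-latest
        steps:
          - uses: actions/checkout@v4
          - name: Render manifests
            run: helm template my-app ./chart --values ./chart/values.yaml > rendered.yaml
          - name: Evaluate Rego policies with conftest
            run: conftest test rendered.yaml --policy policy/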

Gabriela Campana (Cloud Posse) avatar
Gabriela Campana (Cloud Posse)

@Dan Miller (Cloud Posse)

Dan Miller (Cloud Posse) avatar
Dan Miller (Cloud Posse)

The way that we do it is we first build the image as part of CI, launch into that image via docker compose, and then run tests from there. If that passes, then move on to CD, where we deploy to the cluster with argocd. First dev, then staging, then prod, validating each environment as necessary.
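A rough sketch of the “launch into that image and run tests” step, assuming a hypothetical compose file and test entrypoint:

    # docker compose up -d app && docker compose run --rm tests
    services:
      app:
        image: ${IMAGE_TAG:-my-app:ci}    # image built earlier in CI (assumed tag)
        ports:
          - "8080:8080"
      tests:
        image: ${IMAGE_TAG:-my-app:ci}
        command: ["./run-tests.sh"]       # hypothetical test entrypoint
        depends_on:
          - app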

rohit avatar

What are people using to map IAM roles to pods in non-cloud k8s clusters? kube2iam?

Vincent Van der Kussen avatar
Vincent Van der Kussen

Vault could be an option

rohit avatar

@Vincent Van der Kussen can you explain a bit more? we want our pods to assume IAM roles for least priv. how does vault help here?

Nate McCurdy avatar
Nate McCurdy

Is your goal to be able to auth to and interact with the AWS APIs from a non-cloud k8s cluster? If so, and assuming you would be connecting from a tool built on the modern AWS SDKs, a good way to do that is through OIDC federation. Basically it’s IRSA (IAM Roles for Service Accounts), but for a non-EKS cluster.

The process involves having your k8s cluster’s OIDC provider information (a discovery file and signing keys) available on the public internet (this can be in an S3 bucket). Then creating an Identity Provider in IAM that points to those public files.

After that, you’d deploy the AWS EKS Pod Identity Webhook to your non-EKS cluster (https://github.com/aws/amazon-eks-pod-identity-webhook), then use service account annotations to specify which IAM role to use for a pod.

The webhook converts annotations to environment variables and token files that the AWS SDK libraries pick up and use for authentication.

More info and a mostly-complete guide are at:

https://github.com/aws/amazon-eks-pod-identity-webhook/blob/master/SELF_HOSTED_SETUP.md

https://github.com/mjnagel/k3d-irsa
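Once the webhook and the IAM OIDC Identity Provider are in place, the per-workload wiring is just a service account annotation, as with IRSA. A minimal sketch (the role ARN and names are placeholders):

    apiVersion: v1
    kind: ServiceAccount
    metadata:
      name: my-app
      namespace: default
      annotations:
        eks.amazonaws.com/role-arn: arn:aws:iam::111122223333:role/my-app-role   # placeholder
        eks.amazonaws.com/audience: sts.amazonaws.com                            # optional; default audience

The webhook mutates pods using this service account to add the AWS_ROLE_ARN and AWS_WEB_IDENTITY_TOKEN_FILE environment variables plus the projected token volume that the AWS SDKs pick up.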

Nate McCurdy avatar
Nate McCurdy

It’s not trivial, but is possible.

rohit avatar

Our specific goal is being able to deploy our helm project (product) on to any k8s cluster, eks or standalone for now. We asked our customer what they are currently doing to support IAM to serviceaccount “mapping” to interact with AWS resources. My thought was to use kube2iam (not working) or kiam to allow our pod to use AWS resources.

All this without any EKS conventions or capabilities (unfortunately)

rohit avatar

We are telling the customer (beforehand) to provision an IAM role + policy for our pod -> we share the helm project -> customer installs helm project -> pods can use the IAM role to perform X, Y and Z.

rohit avatar

i figured this would be “simple” on standalone clusters, but I am unable to make any headway since most solutions use IRSA / EKS Pod Identity / EKS pod annotations for the IAM role

Nate McCurdy avatar
Nate McCurdy

IRSA/EKS Pod Identity are definitely the happy path, and I would recommend that approach if possible.

What I described above is exactly the same as IRSA, but for non-EKS clusters. However, it does require some setup on the cluster (the webhook and publicly available OIDC provider info) and in IAM (the creation of an Identity Provider and an IAM role with a trust policy to that provider).

Vincent Van der Kussen avatar
Vincent Van der Kussen

Vault can fetch credentials (sts/iam user access keys) for pods with the right annotations. If you are only on AWS then IRSA is a lot easier but I have no idea if it works when running your own k8s
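Sketching what that looks like in practice (role names and paths are assumptions): the Vault Agent Injector authenticates the pod via the Kubernetes auth method and then renders short-lived AWS credentials from Vault’s AWS secrets engine into the pod, rather than mapping an IAM role onto the pod directly:

    spec:
      template:
        metadata:
          annotations:
            vault.hashicorp.com/agent-inject: "true"
            vault.hashicorp.com/role: "my-app"                                      # Vault Kubernetes auth role (assumed)
            vault.hashicorp.com/agent-inject-secret-aws-creds: "aws/creds/my-app"   # AWS secrets engine role (assumed)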

rohit avatar

Currently looking at kiam and kube2iam, both projects are no longer really supported or active.


2024-03-27

rohit avatar

Does anyone know if kube2iam or kiam works for a kubernetes job? I’ve been searching to no avail…

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Yes, because both work for pods, and jobs are just pods under the hood. Note that kiam is deprecated, and support may cease soon. See the project’s GitHub.
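Concretely, with kube2iam the IAM role annotation goes on the Job’s pod template, since that is the pod kube2iam actually sees. A minimal sketch with a placeholder role and image:

    apiVersion: batch/v1
    kind: Job
    metadata:
      name: db-provisioner
    spec:
      template:
        metadata:
          annotations:
            iam.amazonaws.com/role: db-provisioner-role   # kube2iam pod annotation (placeholder role)
        spec:
          restartPolicy: Never
          containers:
            - name: provision
              image: example.com/db-provisioner:latest    # hypothetical image
              command: ["./provision.sh"]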

rohit avatar

Yeah to my dismay everyone is moving towards IRSA and leveraging managed services

rohit avatar

thank you Erik!

2024-03-28

Ashutosh Apurva avatar
Ashutosh Apurva

How do I configure multiple backend APIs in an Ingress resource? Please suggest an approach, and share a sample values.yaml template for this.
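For illustration, a plain Ingress that fans out to two backend APIs by path looks roughly like this (hosts, service names, and ports are placeholders; how it ends up in values.yaml depends entirely on the chart being used):

    apiVersion: networking.k8s.io/v1
    kind: Ingress
    metadata:
      name: api-gateway
    spec:
      ingressClassName: nginx
      rules:
        - host: api.example.com
          http:
            paths:
              - path: /orders
                pathType: Prefix
                backend:
                  service:
                    name: orders-api
                    port:
                      number: 8080
              - path: /users
                pathType: Prefix
                backend:
                  service:
                    name: users-api
                    port:
                      number: 8080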

John avatar

Hey there, I’m having an issue with external-dns in my k8s cluster. It does not create records in Route53 and I’ve exhausted all my troubleshooting options. Here is what I have verified:

  1. role_arn has the right permissions, and the logs from external-dns show that it can authenticate and update Route53:

     time="2024-03-27T00:10:27Z" level=info msg="Applying provider record filter for domains: [sbx.myexample.io. .sbx.myexample.io.]"
     time="2024-03-27T00:10:27Z" level=info msg="All records are already up to date"

  2. There is connectivity from the pod to AWS services
  3. No errors in the pod
  4. The sources to watch for are service,ingress
  5. There is an annotation on my ingress for external-dns to pick it up:

     annotations:
       external-dns.alpha.kubernetes.io/include: "true"

Any help is welcome; I have run out of ideas as to why the records are not added in Route53. Thank you

Adi avatar

could you please set the log level to “debug” and see?

John avatar

time="2024-03-28T20:05:19Z" level=debug msg="Refreshing zones list cache"
time="2024-03-28T20:05:19Z" level=debug msg="Considering zone: /hostedzone/<ZONEID> (domain: sbx.mydomain.io.
time="2024-03-28T20:05:19Z" level=debug msg="No endpoints could be generated from service metrics-server/metrics-server"
time="2024-03-28T20:05:19Z" level=debug msg="No endpoints could be generated from service default/flink-jobmanager"
time="2024-03-28T20:05:19Z" level=debug msg="No endpoints could be generated from service default/kubernetes"
time="2024-03-28T20:05:19Z" level=debug msg="No endpoints could be generated from service default/nginx"
time="2024-03-28T20:05:19Z" level=debug msg="No endpoints could be generated from service echo/echo-server"
time="2024-03-28T20:05:19Z" level=debug msg="No endpoints could be generated from service external-dns/external-dns"
time="2024-03-28T20:05:19Z" level=debug msg="No endpoints could be generated from service karpenter/karpenter"
time="2024-03-28T20:05:19Z" level=debug msg="No endpoints could be generated from service kube-system/kube-dns"
time="2024-03-28T20:05:19Z" level=debug msg="No endpoints could be generated from ingress external-dns/nginx"
time="2024-03-28T20:05:19Z" level=debug msg="No endpoints could be generated from ingress default/flink-jobmanager-ingress
time="2024-03-28T20:05:19Z" level=debug msg="No endpoints could be generated from ingress echo/echo-server"
time="2024-03-28T20:05:19Z" level=debug msg="Refreshing zones list cache"
time="2024-03-28T20:05:19Z" level=debug msg="Considering zone: /hostedzone/<ZONEID> (domain: sbx.mydomain.io.
time="2024-03-28T20:05:19Z" level=info msg="Applying provider record filter for domains: [sbx.mydomain.io. .sbx.mydomain.io

John avatar

looks like, for some reason, no endpoints could be generated from the services

Adi avatar

yeah, not sure exactly what’s wrong. Are you able to access the service via the ALB DNS?

Hao Wang avatar
Hao Wang

do you see any service and ingress?

John avatar

yeah, fixed it. Looks like the problem was my alb-controller not creating the ALB, which caused external-dns not to create the record in Route53.
