#kubernetes (2021-07)

kubernetes

Archive: https://archive.sweetops.com/kubernetes/

2021-07-02

Joaquin Menchaca avatar
Joaquin Menchaca

Anyone know how to do mixed https + grpc traffic on the same ingress? I tried out ingress-nginx:

annotations:   
  cert-manager.io/cluster-issuer: letsencrypt-staging
  kubernetes.io/ingress.class: nginx
  nginx.ingress.kubernetes.io/ssl-redirect: "true"
  nginx.ingress.kubernetes.io/backend-protocol: GRPC

And I was setting up these rules:

              rules:
                - host: alpha.devopsstudio.org
                  http:
                    paths:
                      - path: /
                        backend:
                          serviceName: demo-dgraph-alpha
                          servicePort: 8080
                - host: dgraph.devopsstudio.org
                  http:
                    paths:
                      - path: /
                        backend:
                          serviceName: demo-dgraph-alpha
                          servicePort: 9080

But now only gRPC works correctly; the HTTPS service does not. Is there a way to get both of these to work?

Joaquin Menchaca avatar
Joaquin Menchaca

The workaround was to create two ingresses, but I would like to have just one for both kinds of traffic.
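For reference, here is a minimal sketch of that two-ingress workaround, reusing the hosts and service from the rules above (the ingress names are made up, and the TLS/cert-manager settings are omitted for brevity): one ingress carries plain HTTPS, the other carries gRPC via the backend-protocol annotation.

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: demo-dgraph-alpha-http   # hypothetical name
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/ssl-redirect: "true"
spec:
  rules:
    - host: alpha.devopsstudio.org
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: demo-dgraph-alpha
                port:
                  number: 8080
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: demo-dgraph-alpha-grpc   # hypothetical name
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/backend-protocol: GRPC
spec:
  rules:
    - host: dgraph.devopsstudio.org
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: demo-dgraph-alpha
                port:
                  number: 9080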

mfridh avatar

A good description of why this doesn’t work with nginx is here: https://argoproj.github.io/argo-cd/operator-manual/ingress/#kubernetesingress-nginx

Contour, as explained there as well, is slightly different as it reads the backend-protocol from the target service resources and can thus act differently depending on the routing.
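For example, with Contour the protocol hint lives on the Service rather than the Ingress; a minimal sketch, assuming Contour's upstream-protocol Service annotation and reusing the service and ports from the example above (h2c meaning cleartext HTTP/2, i.e. gRPC, to the backend):

apiVersion: v1
kind: Service
metadata:
  name: demo-dgraph-alpha
  annotations:
    # Contour proxies port 9080 as h2c (gRPC) while 8080 stays plain HTTP
    projectcontour.io/upstream-protocol.h2c: "9080"
spec:
  selector:
    app: dgraph          # assumed selector
  ports:
    - name: http
      port: 8080
      targetPort: 8080
    - name: grpc
      port: 9080
      targetPort: 9080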

Joaquin Menchaca avatar
Joaquin Menchaca

Do you know the article from Contour? Currently sleuthing through their site on this…

Alyson avatar

Hi, is anyone else here hitting this beautiful issue on AWS EKS?

The problem is intermittent. Sometimes it happens and sometimes it doesn’t.

kubectl logs pod/external-dns-7dd5c6786d-znfr5

"https://100.0.1.239:10250/containerLogs/default/external-dns-7dd5c6786d-znfr5/external-dns?follow=true": x509: cannot validate certificate for 100.0.1.239 because it doesn't contain any IP SANs
$ kubectl version --short                                                            
Client Version: v1.18.9-eks-d1db3c
Server Version: v1.19.8-eks-96780e
Joaquin Menchaca avatar
Joaquin Menchaca

On ingresses, do they get automatically converted to the beta API? I keep getting this message:

Warning: extensions/v1beta1 Ingress is deprecated in v1.14+, unavailable in v1.22+; use networking.k8s.io/v1 Ingress

After deploying:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: demo-dgraph-ingress-grpc
  labels:
    app: dgraph
    component: alpha
  annotations:
    cert-manager.io/cluster-issuer: {{ requiredEnv "ACME_ISSUER" }}
    nginx.ingress.kubernetes.io/ssl-redirect: "true"
    nginx.ingress.kubernetes.io/backend-protocol: GRPC
    kubernetes.io/ingress.class: nginx
spec:
  tls:
    - hosts:
        - "dgraph.{{ requiredEnv "AZ_DNS_DOMAIN" }}"
      secretName: tls-secret
  rules:
    - host: dgraph.{{ requiredEnv "AZ_DNS_DOMAIN" }}
      http:
        paths:
        - backend:
            service:
              name: demo-dgraph-alpha
              port:
                number: 9080
          path: /
          pathType: ImplementationSpecific
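The API server serves the same Ingress object under every version it still supports, so an object applied as networking.k8s.io/v1 can also be read and written through extensions/v1beta1 until that version is removed in 1.22; the warning usually means some client in the chain (an older kubectl, Helm/helmfile, or a controller) is still hitting the old endpoint. A quick sanity check, as a sketch, reusing the object name from the manifest above:

kubectl api-versions | grep -E 'extensions|networking.k8s.io'

# Fetch the object explicitly through the v1 API to confirm it round-trips there
kubectl get ingresses.v1.networking.k8s.io demo-dgraph-ingress-grpc -o yaml | head -n 5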

2021-07-04

Joaquin Menchaca avatar
Joaquin Menchaca

On the topic of ingress-nginx w/ gRPC, it actually works. And it was easier than I thought, I just needed to code my own gRPC client to understand it.

Anyhow, I wrote a blog in case anyone else is tackling this problem as well. It’s on AKS, but the underlying principles are the same: https://joachim8675309.medium.com/aks-with-grpc-and-ingress-nginx-32481a792a1

AKS with GRPC and ingress-nginx

Using GRPC with ingress-nginx add-on with AKS

2021-07-12

Andy avatar

Hi, we’re using Istio behind an nginx proxy. Occasionally we’re seeing what looks like other users’ responses being returned to a user.

i.e. client A calls a CORS endpoint and sees a response that should have been returned to client B. This happens around 1 in 10 times when repeatedly calling an endpoint (while 100s of users are simultaneously hitting the same endpoint)

The network flow is as follows: ALB -> nginx -> (proxy) -> NLB -> Istio -> NodeJS app

It looks like nginx may be returning the wrong responses to different users. Does that even sound plausible?

Andy avatar

We’re trying to enable the tracing headers to confirm that we’re definitely seeing the wrong responses go to different users.

William Morgan avatar
William Morgan

Do you have some kind of traffic shifting or header-based routing enabled with Istio?

Tim Birkett avatar
Tim Birkett

What does the nodejs app do in the background? Does it use queues? Does it store user session data in a database? Is it just a single endpoint that suffers this problem?

Andy avatar

So we finally got the tracing headers working, and it appears that there is NOT a request/response mismatch. So it looks like the problem is happening within the NodeJS app. I don’t know the full details, so the developers are going to do some debugging. Thanks for chiming in, guys.

Tim Birkett avatar
Tim Birkett

TCP would be very broken if it were. I’ve seen similar things where queues are used to store requests and responses in scaled-out applications, key collisions in datastores, or non-atomic ID creation when the app is scaled out, causing multiple users to have the same ID on the thing that they’re working on.

2021-07-17

mfridh avatar

What a :confused: thing I found…

My deployment/coredns container wasn’t getting tcp:53 in its ports even though it was there in the YAML, and it was even in the last-applied-configuration annotation… it was silently just NOT THERE.

If I subsequently removed the TCP containerPort from the yaml, the kubectl diff said I was removing the UDP containerPort.

I can add a TCP containerPort: 5353, but not 53. I had to delete the deployment and recreate it! Reproducible 10/10, also on a renamed coredns2 deployment I set up on the side… Amazon EKS v1.19.

mfridh avatar
Services with same port, different protocol display wrongly in kubectl and have wrong merge key · Issue #39188 · kubernetes/kubernetes

User reported: I am running a service with both TCP and UDP: spec: type: NodePort ports: - protocol: UDP port: 30420 nodePort: 30420 - protocol: TCP port: 30420 nodePort: 30420 but kubectl describe…
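The footgun described in that issue is that the ports list is strategic-merge-patched with the port number as the merge key, so the protocol is not part of the key and one of two same-numbered entries can silently be dropped. A minimal sketch of the kind of spec that trips it, with placeholder names mirroring the renamed test deployment:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: coredns2                 # placeholder
spec:
  replicas: 1
  selector:
    matchLabels:
      app: coredns2
  template:
    metadata:
      labels:
        app: coredns2
    spec:
      containers:
        - name: coredns
          image: coredns/coredns:1.8.4
          ports:
            # Same containerPort, different protocol: the merge key ignores protocol,
            # so an apply/diff can treat these two entries as one
            - name: dns
              containerPort: 53
              protocol: UDP
            - name: dns-tcp
              containerPort: 53
              protocol: TCP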

2021-07-19

Zach avatar

I’m having some difficulties with the EKS Cluster (v0.42.1 - happy to have a workaround for the aws-auth configmap issue now!) and Node Group (v0.22.0) modules with the AWS VPC CNI Pod Security Groups. What I’m finding is that the module creates a security group and it seems to get attached to maybe … the nodes? But once I started using pod SGs I was finding that only the EKS-managed-SG (which the cluster creates itself) seems to matter for ingress/egress rules to allow the pod to talk back to the cluster. For example I wasn’t able to get my pods to resolve DNS at all until I added port 53 ingress on the EKS SG, and I couldn’t get my pod to receive traffic from my ingress until I allowed traffic from the EKS SG. None of these rules made any difference when I tried creating them on the SG the cloudposse module created. Is that expected?

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

@Jeremy G (Cloud Posse)

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

(sorry, jeremy is AFK for a bit longer)

Zach avatar

No worries! Just trying to figure out whether I’m doing something wrong

Jeremy G (Cloud Posse) avatar
Jeremy G (Cloud Posse)

@Zach @Erik Osterman (Cloud Posse) It will still be a few days before I can get to this, but my recollection is:

• A long time ago, we had to (or wanted to) create a security group for the cluster, or, before EKS had managed nodes, we wanted to create a separate security group for the worker nodes, so that, for example, the worker nodes could be open to the internet while the EKS cluster (meaning the master nodes) were not.

• At some point, EKS created its own security group and created managed nodes, and we found the distinction unhelpful, so we just started putting the managed nodes in the security group the EKS cluster created.

• I certainly could be wrong, but IIRC, the security group the module creates (as opposed to the one the EKS cluster creates) is unused by default. This will all get revisited in the upcoming overhaul of how we manage security groups. @Zach If you would like to open an issue or feature request explaining your use case and how you would like to see it supported, I will certainly consider it. Might take a while to get to.

Zach avatar


IIRC, the security group the module creates (as opposed to the one the EKS cluster creates) is unused by default.
I think this is tracking with what I’m finding. That’s all fine, I just wanted to check in to see that this was somewhat expected. It threw me for a loop when I was trying to get pod security groups working, but I eventually figured out the issue.

rei avatar

Currently having difficulties with this setup: has anyone tried to mount an EFS filesystem connected to vpc1 while having the EKS cluster in vpc2? The peering connection between vpc1 and vpc2 is working. I’m following the official AWS docs to mount an EFS filesystem with the newest CSI driver. If the filesystem is in the same VPC as the cluster, it works as expected (dynamic mount example). But trying to mount a second EFS filesystem returns an error on Kubernetes. Additional DNS host entries were also added (as recommended and required by the CSI driver). Here’s the catch: mounting the EFS filesystem (using the IPv4 address) on an EC2 instance works… Any ideas?

rei avatar

As always, it was DNS. I had a typo in the host alias settings.
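For anyone hitting the same thing, a rough sketch of the kind of host-alias entry involved; the IP, filesystem ID, and region are placeholders, and depending on the setup the alias may need to live where the CSI driver actually performs the mount rather than in the application pod:

apiVersion: v1
kind: Pod
metadata:
  name: efs-client               # hypothetical
spec:
  hostAliases:
    # Map the EFS DNS name to a mount-target IP reachable over the peering connection
    - ip: "10.1.2.3"
      hostnames:
        - "fs-0123456789ab.efs.us-east-1.amazonaws.com"
  containers:
    - name: app
      image: busybox
      command: ["sleep", "3600"]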

2021-07-20

Shreyank Sharma avatar
Shreyank Sharma

Hi, we have a Kubernetes cluster deployed using kOps, which had been working fine for a year.

Suddenly, in the kube-system namespace, a pod named etcd-server-events-ip-<master-internal-ip-here> started going into CrashLoopBackOff with the following logs:

2021-07-20 15:58:12.071345 I | etcdmain: stopping listening for peers on http://0.0.0.0:2381
2021-07-20 15:58:12.071351 C | etcdmain: cannot write to data directory: open /var/etcd/data-events/.touch: read-only file system

I wanted to know what the responsibility of the etcd-server-events pod is.

thanks

2021-07-26

Vlad Ionescu (he/him) avatar
Vlad Ionescu (he/him)

FYI: one of the most discussed EKS annoyances is now solved: the 1.9.0 release of the amazon-vpc-cni adds support for a lot more pods per node!

I imagine a bigger official announcement is coming soon.

Zach avatar

is the ‘addon’ updated already?

Zach avatar

The EKS addon was released this morning - you still have to enable a setting to turn on the prefixes, and then you have to raise the max-pods setting on the nodes
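For reference, a sketch of the kind of settings involved; the cluster name and max-pods value are examples, and on managed node groups the kubelet flag may be handled differently:

# Enable prefix assignment in the VPC CNI (v1.9.0+)
kubectl set env daemonset aws-node -n kube-system ENABLE_PREFIX_DELEGATION=true

# Raise the kubelet pod limit on the nodes, e.g. via the EKS AMI bootstrap script
/etc/eks/bootstrap.sh my-cluster \
  --use-max-pods false \
  --kubelet-extra-args '--max-pods=110'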

Zach avatar

I am fairly confident this does NOT work with pod security groups

2021-07-27

R Dha avatar

what are good resources to learn kubernetes for beginners?

Andy Miguel (Cloud Posse) avatar
Andy Miguel (Cloud Posse)

https://www.youtube.com/watch?v=_YTzuIyBsUg - this topic came up last year on office hours, hope this helps

Andy Miguel (Cloud Posse) avatar
Andy Miguel (Cloud Posse)

lots of links in the description

2021-07-29

Shreyank Sharma avatar
Shreyank Sharma

Hi,

We are using Kubernetes deployed in AWS using kOps, in its own public VPC. We have 2 requirements:

  1. A pod inside Kubernetes has to invoke AWS Lambda
  2. Lambda has to access resources inside Kubernetes

For the 1st requirement, we created an inline policy for all nodes to invoke Lambda and passed an AWS access key and secret key inside the pod. We created a Lambda inside the same VPC; the pod invokes the Lambda, and it worked fine.

Now, for the 2nd requirement: is there any way Lambda can access some data inside a pod without any public endpoint? Is there any Kubernetes feature to allow this so communication happens securely?

Thanks in Advance.

azec avatar

Hi Shreyank!
For the 1st requirement, we created an inline policy for all nodes to invoke Lambda and passed an AWS access key and secret key.
I think this is not a good practice. You might consider using K8S Service Accounts with each K8S deployment that needs to call Lambda APIs or invoke Lambda. See:

Configure Service Accounts for Pods

Managing Service Accounts

The IAM role that gets created and bound to the K8S service account can then have enough permissions to invoke Lambda. Then it is just a matter of the applications running on the pods using the AWS SDK for Lambda, or maybe HTTP client libraries (depending on your use case for Lambda).
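A minimal sketch of that pattern, assuming an IRSA-style setup (IAM Roles for Service Accounts via the pod identity webhook, which kOps also supports); the role ARN, names, and function name are all placeholders:

apiVersion: v1
kind: ServiceAccount
metadata:
  name: lambda-invoker
  namespace: default
  annotations:
    # IAM role granting lambda:InvokeFunction on the functions this workload needs
    eks.amazonaws.com/role-arn: arn:aws:iam::123456789012:role/lambda-invoker
---
apiVersion: v1
kind: Pod
metadata:
  name: lambda-client
spec:
  serviceAccountName: lambda-invoker
  containers:
    - name: app
      image: amazon/aws-cli
      # The SDK/CLI picks up the projected web-identity token automatically,
      # so no static access key or secret key is needed in the pod
      command: ["aws", "lambda", "invoke", "--function-name", "my-fn", "/tmp/out.json"]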

azec avatar


Lambda has to access resources inside Kubernetes
What is the need for this? There are so many other options if your workloads are already all in AWS. Have you considered AWS SQS or any of the DBs (DynamoDB, RDS, Aurora), etc.?
