#kubernetes (2020-04)

kubernetes

Archive: https://archive.sweetops.com/kubernetes/

2020-04-01

jedineeper avatar
jedineeper

Anyone found a more automated way to roll k8s nodes through replacement with Terraform, other than spinning up another ASG and cordoning/draining the old nodes before running tf again to remove them?

Zachary Loeber avatar
Zachary Loeber

That video shows what a deployment pipeline looks like that pushes out a whole Kubernetes cluster, then installs Airflow on it, and then pushes pipelines that run data science jobs to the same cluster.

wannafly37 avatar
wannafly37

This..is..amazing.

2020-04-02

Zachary Loeber avatar
Zachary Loeber

Short script to get the latest version of minikube running on ubuntu 19.10: https://gist.github.com/zloeber/528bcce2e4b45465c940a08f10551ccb

2020-04-03

Zachary Loeber avatar
Zachary Loeber

FleetOps -> https://thenextweb.com/growth-quarters/2020/04/03/devops-isnt-enough-your-team-needs-to-embrace-fleetops/ (pretty much another way of saying you should treat everything as if it were part of a PaaS I think).

DevOps isn’t enough — your team needs to embrace FleetOps

FleetOps is needed to run a fleet of hundreds (or thousands!) of websites and applications securely across your organization.

Zachary Loeber avatar
Zachary Loeber

and to follow that up, this nifty looking project from Rancher developed by a dude I follow on twitter: https://rancher.com/blog/2020/fleet-management-kubernetes/

Fleet Management for Kubernetes is Here

Fleet is a new open source project from the team at Rancher focused on managing fleets of Kubernetes clusters. Ever since Rancher 1.0, Rancher has provided a central control plane for managing multiple clusters. As pioneers of Kubernetes multi-cluster management, we have seen firsthand how users have consistently increased the number of clusters under management. We are already seeing interest from users who want to manage tens of thousands or even millions of clusters in the near future.

btai avatar

darn. I had at one point tried to build an internal cluster management tool and wanted to call it Fleet, because it was essentially managing a fleet of kube clusters (and keeping with the ocean/ship theme). Never got around to building it out completely

btai avatar

my proof

Zachary Loeber avatar
Zachary Loeber

figures, right? well, Fleet looks open source, so maybe you can use it anyway

Zachary Loeber avatar
Zachary Loeber

what module do you use for Go log output anyway?

btai avatar

@Zachary Loeber I used https://github.com/sirupsen/logrus but I haven’t been doing a ton of Go development in the last few years, so I’m probably not the best person to ask

sirupsen/logrus

Structured, pluggable logging for Go. Contribute to sirupsen/logrus development by creating an account on GitHub.

rms1000watt avatar
rms1000watt

Hi all, weird question for ya

rms1000watt avatar
rms1000watt

EKS 1.14. 1 cluster. 2 namespaces. Opened up SG (for debugging). amazon-k8s-cni:v1.5.7

Deployed svc + deployment in both namespaces. I have a pod from both namespaces on the same ec2 instance. I have a VPN giving me access to the cluster.

I can curl 1 pod in 1 namespace. I can not curl the other pod in the other namespace. All the k8s specs for svc + deployment are the same. They’re both using secondary IPs.

I realize this is hyper specific, but just curious if this sounds familiar to anyone

(I’ve tried to isolate it down to just 2 identical pods in different namespaces)

Guessing it’s related to some hardcore networking issue in the CNI.. I’m able to hit the pods from within the same VPC with the same CIDR block without issue.. but when I leave the CIDR block, it causes trouble

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

We’ve encountered something that sounds similar when the subnets aren’t correctly configured with route tables or the wrong subnets are passed to EKS. In this case, pod(1) is on node A, pod (2) is on node B; node A and node B are on different subnets.

btai avatar

do you have network policies? that could be different for both namespaces?
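
(For context: a NetworkPolicy present in only one of the two namespaces, e.g. a default deny, would produce exactly this symptom. A minimal sketch of the kind of object to look for with kubectl get netpol --all-namespaces; the namespace name is hypothetical.)

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-ingress
  namespace: team-b            # hypothetical namespace
spec:
  podSelector: {}              # selects every pod in the namespace
  policyTypes:
    - Ingress                  # no ingress rules listed, so all inbound traffic is blocked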

rms1000watt avatar
rms1000watt

Erik: same node, same subnet

Btai: no NACLs in AWS. But I’m guessing you’re referring to k8s network policies.. uhhh no clue, but I’ll look for it

rms1000watt avatar
rms1000watt

I want to try and look at CNI/SNAT failures or something.. but I’m trying to field suggestions first before going down that rabbit hole

rms1000watt avatar
rms1000watt

no network policies in k8s

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Unless you are doing unconventional things, I would look for more obvious, user error type problems. Just based on my own experiences, I am usually at fault 99% of the time.

rms1000watt avatar
rms1000watt

i agree. In this case.. I think i got wrapped around the axle playing with Service Endpoints.. as opposed to the service itself

I was hitting the endpoint defined here..

kubectl -n dev-1 describe svc my-svc-name | grep -i endpoints:

Instead of just making the service fully available where I need it

rms1000watt avatar
rms1000watt

the technical issue still stands, but the need to solve it diminished.. since I’m just going to hit the service (as I should have all along)

2020-04-04

2020-04-05

2020-04-06

bradym avatar
jamiehannaford/what-happens-when-k8s

What happens when I type kubectl run? Contribute to jamiehannaford/what-happens-when-k8s development by creating an account on GitHub.

Zachary Loeber avatar
Zachary Loeber

anyone happen to tinker with kpt yet? https://googlecontainertools.github.io/kpt/

Kpt

Kubernetes configuration package management

2020-04-07

rms1000watt avatar
rms1000watt

AWS EKS -> ALB Target Group with CNI question…

So on EKS, we have CNI enabled so each pod has an IP address in the VPC Subnet. We have an ALB going directly to the Pods’ IP addresses. So if we have 50 pods, there are 50 entries in the target group.

Question: Has anyone spent time fine tuning Deregistration Delay in coordination with aws-alb-ingress-controller (for large deployments; many pods)?

EDIT1:

https://github.com/kubernetes-sigs/aws-alb-ingress-controller/blob/master/docs/guide/ingress/annotation.md#custom-attributes

Example from the docs:
- set the slow start duration to 5 seconds:
  alb.ingress.kubernetes.io/target-group-attributes: slow_start.duration_seconds=5
- set the deregistration delay to 30 seconds:
  alb.ingress.kubernetes.io/target-group-attributes: deregistration_delay.timeout_seconds=30

Hmm, this is suggesting 30s, but dunno if it’s battle tested

Vlad Ionescu (he/him) avatar
Vlad Ionescu (he/him)

Depends wildly on your app. What’s the terminationGracePeriodSeconds set for that app? Think of it like this: pod is alive and ready and serving requests. Pod gets notified to stop work. How long does it serve requests? What happens to in-flight requests? How does that affect the app?

For starting time, it’s the same problem in reverse. How deep are your liveness and, more importantly, your readiness checks? How do you know a pod is ready to serve requests? Does it serve requests in the first 10 minutes with super-high latency because it’s still populating some caches?

Vlad Ionescu (he/him) avatar
Vlad Ionescu (he/him)

More importantly, do your pods get replaced often? If not, you may not even need to stress about this

rms1000watt avatar
rms1000watt

They get replaced a few times a day (multiple deployments a day). There are 100+ pods in the deployment.

terminationGracePeriodSeconds: 30

rms1000watt avatar
rms1000watt

i guess i need to do my homework more on when the deregistration delay timer begins

Vlad Ionescu (he/him) avatar
Vlad Ionescu (he/him)

Yeah, sounds like you do need to worry about it

Vlad Ionescu (he/him) avatar
Vlad Ionescu (he/him)

When a pod has to be replaced, the following flow happens:
• SIGTERM is sent to the pod. Apps should treat that as “dude, I got a notice to stop work gracefully, so I will start doing that”: finish in-progress work, try to clean up nicely, and so on. At this time the ALBs should be set so no new connections are sent to these pods
• we wait for terminationGracePeriodSeconds

• SIGKILL is sent to the pod which kills all the containers inside by force
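
(A minimal Deployment sketch of the knobs in that flow; names and image are hypothetical, and the preStop sleep is a common bridging pattern rather than something prescribed in this thread.)

apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  replicas: 3
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      terminationGracePeriodSeconds: 30   # total budget (preStop + SIGTERM handling) before SIGKILL
      containers:
        - name: app
          image: example/app:latest       # hypothetical image; must contain a sleep binary for the hook below
          lifecycle:
            preStop:
              exec:
                # hold the pod in Terminating for a bit so the ALB ingress
                # controller's sync loop can deregister it before the app is told to shut down
                command: ["sleep", "15"]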

Vlad Ionescu (he/him) avatar
Vlad Ionescu (he/him)

Meanwhile, ALB Ingress Controller runs a loop every say 10s and checks for any new pods or any new LB changes and updates the ALBs accordingly

Vlad Ionescu (he/him) avatar
Vlad Ionescu (he/him)

All of these have to make out and kiss in sync

Vlad Ionescu (he/him) avatar
Vlad Ionescu (he/him)

It helps a lot to draw this out and mock scenarios

rms1000watt avatar
rms1000watt

I’m on the same page with you with your analysis so far. The dark spot in my mind is how Deregistration Delay works after a pod is marked as terminating

rms1000watt avatar
rms1000watt

(the proc either exits on its own from SIGTERM or is force-killed by SIGKILL)

rms1000watt avatar
rms1000watt

actually, I think the TG stops routing to a terminating/deregistering instance

rms1000watt avatar
rms1000watt

but the deregistration delay keeps the current connections alive for up to XX seconds

rms1000watt avatar
rms1000watt

If that’s the case, no big deal

rms1000watt avatar
rms1000watt

I think the most important thing is for aws-alb-ingress-controller to update ASAP once a pod is marked as terminating

rms1000watt avatar
rms1000watt

so it can be marked as deregistering in the TG

Vlad Ionescu (he/him) avatar
Vlad Ionescu (he/him)

When a pod is marked as Terminating, we’re in between that SIGTERM and SIGKILL limbo. We still have connections from client-ALB-pod.

Now, after Deregistration Delay the ALB forcefully kills all connections from the client to the pod.

That’s helpful if say your app cannot die gracefully if there are still active connections

Vlad Ionescu (he/him) avatar
Vlad Ionescu (he/him)

Or at least that’s how I understand it based on the above link

rms1000watt avatar
rms1000watt

yeah

rms1000watt avatar
rms1000watt

makes sense

rms1000watt avatar
rms1000watt

terminating grace period length > deregistration delay length

Since there’s a lag waiting for AWS alb ingress controller to tell the TG that a target is deregistering
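
(A sketch of that rule with illustrative numbers: the target group’s deregistration delay, plus the controller’s sync lag, kept inside the 30s terminationGracePeriodSeconds used above.)

# Ingress metadata fragment (numbers illustrative)
metadata:
  annotations:
    alb.ingress.kubernetes.io/target-group-attributes: deregistration_delay.timeout_seconds=20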

Vlad Ionescu (he/him) avatar
Vlad Ionescu (he/him)

I think that is correct, yup.

Vlad Ionescu (he/him) avatar
Vlad Ionescu (he/him)

Also, don’t forget about the ALB Ingress Controller loop.

Vlad Ionescu (he/him) avatar
Vlad Ionescu (he/him)

That happens every 10s I think. And if it ran just before your pod switched to Terminating…

Vlad Ionescu (he/him) avatar
Vlad Ionescu (he/him)

Again, drawing and testing all the situations (or the most important ones) helps a lot

rms1000watt avatar
rms1000watt

yeah, I agree

rms1000watt avatar
rms1000watt
➜ ~ kubectl get nodes | grep fargate
fargate-ip-xxx-xxx-xxx-xxx.ec2.internal

Interesting seeing Fargate EKS assigning ec2 instances?
bradym avatar

I just assume that everything runs on ec2 instances

2020-04-08

pecigonzalo avatar
pecigonzalo
kubectl diff does apply · Issue #89762 · kubernetes/kubernetes

What happened: kubectl diff modify my deployements. What you expected to happen: I expect the diff command to not change my deployments ! How to reproduce it (as minimally and precisely as possible…

2020-04-09

David Hubbell avatar
David Hubbell

Any opinions on kube-aws vs kops?

David Hubbell avatar
David Hubbell

(for provisioning in AWS)

David Hubbell avatar
David Hubbell

I created a cluster with kube-aws yesterday and it wasn’t too bad. Now getting recommendations to use kops from someone that used it 2 years ago

Zachary Loeber avatar
Zachary Loeber

I used kops like 2 years ago as well; it seemed OK, but if you are going to deploy managed clusters and still use CLI scripts to do so, eksctl seems the way to go.
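
(For reference, a minimal eksctl cluster spec of the kind that CLI consumes; name, region and sizes are assumptions.)

apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig
metadata:
  name: demo-cluster        # hypothetical
  region: us-west-2         # hypothetical
managedNodeGroups:
  - name: workers
    instanceType: m5.large
    desiredCapacity: 2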

Zachary Loeber avatar
Zachary Loeber

I question the longevity of a solution based on such scripts though.

Zachary Loeber avatar
Zachary Loeber

though kops can generate terraform configurations, cool beans - https://github.com/kubernetes/kops/blob/master/docs/terraform.md

kubernetes/kops

Kubernetes Operations (kops) - Production Grade K8s Installation, Upgrades, and Management - kubernetes/kops

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

I believe the time for kops on AWS has come and gone. It’s moving slower and alternatives have caught up. Now with AWS supporting fully managed node pools, EKS is the way to go.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

We’ve switched over to deploying EKS for all new engagements.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Up until the managed node groups, I was on the fence as to the right way to go.

Zachary Loeber avatar
Zachary Loeber

I’m curious if there are any workloads which you might recommend self-managed clusters for at this point?

David Hubbell avatar
David Hubbell

EKS is not FedRAMP compliant (yet), and so the recommendation (from AWS) is to run K8s manually on EC2 until compliance is reached. As a result, eksctl is out as an option

Pierre Humberdroz avatar
Pierre Humberdroz

Also, my issue with EKS is that it lags super far behind the k8s release cycle

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

(so has kops historically)

Pierre Humberdroz avatar
Pierre Humberdroz

Oh okay did not know. I am new to aws after all and never had to deal with unmanaged clusters

Juan Soto avatar
Juan Soto

which version of k8s is EKS running?

Pierre Humberdroz avatar
Pierre Humberdroz

1.15

Juan Soto avatar
Juan Soto

oh, that’s very old.

Pierre Humberdroz avatar
Pierre Humberdroz

Yep. And it was only just added in March

Pierre Humberdroz avatar
Pierre Humberdroz
[EKS]: Support for Kubernetes 1.16 · Issue #487 · aws/containers-roadmap

Tell us about your request Support for Kubernetes 1.16 Changelog Release Announcement Which service(s) is this request for? EKS Tell us about the problem you're trying to solve. What are you tr…

2020-04-10

2020-04-14

rms1000watt avatar
rms1000watt

Anyone have issues using

service.beta.kubernetes.io/aws-load-balancer-type: nlb

attached to their service.. for a bunch of services.. then all your security group rules get consumed on the EKS nodes SG?

joey avatar

no, but you’ve piqued my interest.

joey avatar

the SG rules for the NLB specifically are getting added to node-port-level? how were you specifying your SG rules for this NLB?

rms1000watt avatar
rms1000watt
service.beta.kubernetes.io/aws-load-balancer-type: nlb
service.beta.kubernetes.io/aws-load-balancer-backend-protocol: tcp
service.beta.kubernetes.io/aws-load-balancer-internal: true
service.beta.kubernetes.io/aws-load-balancer-security-groups: sg-00000000
rms1000watt avatar
rms1000watt

these are the only annotations I use

rms1000watt avatar
rms1000watt

list of annotations

rms1000watt avatar
rms1000watt

i want to disable the sg rule addition somehow

rms1000watt avatar
rms1000watt

this is what it’s doing to the SG for the EKS nodes

joey avatar

interesting. i had not stumbled upon this yet but i haven’t been using security groups on my (public) NLB. great to know though.

rms1000watt avatar
rms1000watt

these are private NLBs

rms1000watt avatar
rms1000watt

hmm, i wonder if that makes a difference

rms1000watt avatar
rms1000watt

All I can do is try

joey avatar

i suspect not. logically it makes sense to me though that if you’re applying a SG, it’s going to lock down the port on the node that’s frontending the service.

rms1000watt avatar
rms1000watt

here’s the problem tho, if you don’t specify an SG, it’ll grab one anyways

rms1000watt avatar
rms1000watt
~grab~ create
rms1000watt avatar
rms1000watt

Oh, the terminology is NodePort

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

We use the nlb mode by default with that annotation. Haven’t been bothered by the rule additions. Why fight it?

rms1000watt avatar
rms1000watt

we maxed out on inbound security group rules

rms1000watt avatar
rms1000watt

lol

rms1000watt avatar
rms1000watt

only 60 rules per SG

rms1000watt avatar
rms1000watt

each NLB’s nodeport is making 2 entries in there

rms1000watt avatar
rms1000watt
app-1  LoadBalancer   172.20.179.5  00000000000000-00000000000000.elb.us-west-2.amazonaws.com      3000:32043/TCP        10d

SG rules get added for like.. port 32043

(even though I already have rules that don’t require this..)

rms1000watt avatar
rms1000watt

I guess the question is.. how can i stop these inbound rule additions on the SG used for the EKS nodes?

EDIT:

Solution.. just use classic LB

2020-04-16

Marcin Brański avatar
Marcin Brański

I’ve seen two ingresses using the same DNS domain but different paths and different nginx-ingress annotations. Is that supported? Will one ingress be used, or will nginx-ingress somehow merge them? I’m not sure how nginx will resolve paths when they overlap, e.g. one ingress is using /v4 and the second /v4/api_xxx.
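
(For reference, a sketch of the setup being described: two Ingress objects sharing one host, with different paths and different nginx-ingress annotations. The host, service names and the annotations themselves are assumptions.)

apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: app-v4
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/proxy-body-size: 10m
spec:
  rules:
    - host: api.example.com
      http:
        paths:
          - path: /v4
            backend:
              serviceName: app-v4
              servicePort: 80
---
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: app-v4-api
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/proxy-read-timeout: "120"
spec:
  rules:
    - host: api.example.com
      http:
        paths:
          - path: /v4/api_xxx
            backend:
              serviceName: app-v4-api
              servicePort: 80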

Zachary Loeber avatar
Zachary Loeber

I don’t believe that will work. one of the two load-balancers that back the ingress would need to be hit first based on how DNS works (unless you have some upstream traffic routing mechanism)

Vikram Yerneni avatar
Vikram Yerneni

Anyone facing an issue where the Service name is not being picked up when deploying a helm chart on Kubernetes, and the service is getting created with a random naming scheme??

Pierre Humberdroz avatar
Pierre Humberdroz

what is random for you?

Vikram Yerneni avatar
Vikram Yerneni
Service Name is not getting created · Issue #21973 · helm/charts

charts/stable/grafana/values.yaml Line 115 in efd0f2c service: When using this Grafana Helm Chart for deploying into an EKS Cluster, I added a service name and somehow the name is not being picked…

Vikram Yerneni avatar
Vikram Yerneni

This is what I am talking about

Pierre Humberdroz avatar
Pierre Humberdroz

but is it indented for you?

Vikram Yerneni avatar
Vikram Yerneni

Yeah yea of course… Within my myvalues.yaml file it’s indented right

Pierre Humberdroz avatar
Pierre Humberdroz

This is how the service name gets defined: you need to set fullnameOverride
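
(A minimal values sketch of that override for the stable/grafana chart; the name is the one used later in this thread.)

# values.yaml
fullnameOverride: svc-grafana   # resources named from grafana.fullname, including the Service, become "svc-grafana"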

Vikram Yerneni avatar
Vikram Yerneni

Ok… So the service name is defined with an override. In this case defining the “grafana.fullname” within the service section should fix the issue…

Vikram Yerneni avatar
Vikram Yerneni

Am i saying it right?

Pierre Humberdroz avatar
Pierre Humberdroz

no you can not set the service name on its own.

Pierre Humberdroz avatar
Pierre Humberdroz

You can only override the name for all manifests

Pierre Humberdroz avatar
Pierre Humberdroz

I am wondering why you would not just take the default?

Vikram Yerneni avatar
Vikram Yerneni

Well, the reason why I don’t want to take the default is, I am having an issue when I am setting the Ingress for the same (Grafana Service). I do see a name mismatch here because I need to define the service name within the Ingress configuration before I deploy the service

Pierre Humberdroz avatar
Pierre Humberdroz

Why would you not enable the ingress of the helm chart?

Pierre Humberdroz avatar
Pierre Humberdroz
helm/charts

Curated applications for Kubernetes. Contribute to helm/charts development by creating an account on GitHub.

Vikram Yerneni avatar
Vikram Yerneni

service: name: svc-grafana namespace: kube-system type: ClusterIP port: 80 targetPort: 3000 annotations: {} labels: {} portName: service

ingress: enabled: true annotations: kubernetes.io/ingress.class: alb alb.ingress.kubernetes.io/scheme: internet-facing alb.ingress.kubernetes.io/load-balancer-attributes: ‘routing.http2.enabled=true,idle_timeout.timeout_seconds=600,deletion_protection.enabled=true’ alb.ingress.kubernetes.io/certificate-arn: certname alb.ingress.kubernetes.io/listen-ports: ‘[{“HTTP”: 80}, {“HTTPS”:443}]’ alb.ingress.kubernetes.io/actions.ssl-redirect: ‘{“Type”: “redirect”, “RedirectConfig”: { “Protocol”: “HTTPS”, “Port”: “443”, “StatusCode”: “HTTP_301”}}’ name: grafana-ingress namespace: kube-system service: annotations: alb.ingress.kubernetes.io/target-type: ip labels: {} path: /* hosts: - grafana.company.com ## Extra paths to prepend to every host configuration. This is useful when working with annotation based services. extraPaths: - path: backend: serviceName: ssl-redirect servicePort: use-annotation - path: /* backend: serviceName: svc-grafana servicePort: 80

Pierre Humberdroz avatar
Pierre Humberdroz

can you wrap that in ``` ``` (a code block)?

Vikram Yerneni avatar
Vikram Yerneni
service:
  name: svc-grafana
  namespace: kube-system
  type: ClusterIP
  port: 80
  targetPort: 3000
  annotations: {}
  labels: {}
  portName: service

ingress:
  enabled: true
  annotations:
    kubernetes.io/ingress.class: alb
    alb.ingress.kubernetes.io/scheme: internet-facing
    alb.ingress.kubernetes.io/load-balancer-attributes: 'routing.http2.enabled=true,idle_timeout.timeout_seconds=600,deletion_protection.enabled=true'
    alb.ingress.kubernetes.io/certificate-arn: certname
    alb.ingress.kubernetes.io/listen-ports: '[{"HTTP": 80}, {"HTTPS":443}]'
    alb.ingress.kubernetes.io/actions.ssl-redirect: '{"Type": "redirect", "RedirectConfig": { "Protocol": "HTTPS", "Port": "443", "StatusCode": "HTTP_301"}}'
  name: grafana-ingress
  namespace: kube-system
  service:
    annotations:
      alb.ingress.kubernetes.io/target-type: ip
  labels: {}
  path: /*
  hosts:
    - grafana.company.com
  ## Extra paths to prepend to every host configuration. This is useful when working with annotation based services.
  extraPaths:
    - path:
      backend:
        serviceName: ssl-redirect
        servicePort: use-annotation
    - path: /*
      backend:
        serviceName: svc-grafana
        servicePort: 80
Vikram Yerneni avatar
Vikram Yerneni

Sorry, here u go

Pierre Humberdroz avatar
Pierre Humberdroz

and where are you trying to reference the service name?

Vikram Yerneni avatar
Vikram Yerneni

If you look at the “name” under the Service configuration and the “serviceName” under the Ingress configuration, the parameters are different, and that’s why I want to control the service name so that it can be set under the Ingress.

Vikram Yerneni avatar
Vikram Yerneni

I am trying to reference the serviceName under:

service:
  name: svc-grafana
Pierre Humberdroz avatar
Pierre Humberdroz

you do not need to define the grafana path afaik

Vikram Yerneni avatar
Vikram Yerneni

After I deploy the configuration, here is the error I am getting in the alb-ingress-controller pod logs:

E0416 19:39:58.180001       1 controller.go:217] kubebuilder/controller "msg"="Reconciler error" "error"="failed to reconcile targetGroups due to failed to load serviceAnnotation due to no object matching key \"kube-system/svc-grafana\" in local store"  "controller"="alb-ingress-controller" "request"={"Namespace":"kube-system","Name":"grafana-1587065956"}
Pierre Humberdroz avatar
Pierre Humberdroz

You should be able to remove the grafana service declaration

Vikram Yerneni avatar
Vikram Yerneni

ho….

Vikram Yerneni avatar
Vikram Yerneni

So, I am guessing I should get rid of the whole section:

  extraPaths:
    - path:
      backend:
        serviceName: ssl-redirect
        servicePort: use-annotation
    - path: /*
      backend:
        serviceName: svc-grafana
        servicePort: 80
Vikram Yerneni avatar
Vikram Yerneni

no need for it then….

Pierre Humberdroz avatar
Pierre Humberdroz

I am not sure why you have added it.

Pierre Humberdroz avatar
Pierre Humberdroz

So it might be yes

Vikram Yerneni avatar
Vikram Yerneni

I was configuring it based on templates I got from github and AWS ALB Ingress sections, Pierre…

Vikram Yerneni avatar
Vikram Yerneni

Let me remove it and will see if it deploys

Pierre Humberdroz avatar
Pierre Humberdroz

sure let me know. Happy to help.

Vikram Yerneni avatar
Vikram Yerneni

This time, it threw a new error:

I0416 20:00:22.317225       1 tags.go:43] kube-system/grafana-1587067180: modifying tags {  ingress.k8s.aws/stack: "kube-system/grafana-1587067180",  kubernetes.io/service-name: "grafana-1587067180",  kubernetes.io/service-port: "80",  ingress.k8s.aws/resource: "kube-system/grafana-1587067180-grafana-1587067180:80",  kubernetes.io/cluster/cluster_name: "owned",  kubernetes.io/namespace: "kube-system",  kubernetes.io/ingress-name: "grafana-1587067180",  ingress.k8s.aws/cluster: "cluster_name"} on arn:aws:elasticloadbalancing:AWSSetup
Vikram Yerneni avatar
Vikram Yerneni

ooh sorry, never mind. That’s not an error. However, the original error still persists

Pierre Humberdroz avatar
Pierre Humberdroz

hard to say / judge what might be going on ..

Vikram Yerneni avatar
Vikram Yerneni
E0416 20:00:22.364011       1 controller.go:217] kubebuilder/controller "msg"="Reconciler error" "error"="failed to reconcile targetGroups due to failed to reconcile targetGroup targets due to grafana-1587067180 service is not of type NodePort or LoadBalancer and target-type is instance"  "controller"="alb-ingress-controller" "request"={"Namespace":"kube-system","Name":"grafana-1587067180"}
Vikram Yerneni avatar
Vikram Yerneni

sorry Pierre. the above one is the error I am getting

Vikram Yerneni avatar
Vikram Yerneni

I will dig in more…

Pierre Humberdroz avatar
Pierre Humberdroz

The error says that you can not use service.type: ClusterIP, so if you would like to use a load balancer you have to change the type

Pierre Humberdroz avatar
Pierre Humberdroz

So it might work if you set it to: LoadBalancer
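
(A minimal values sketch of the change being suggested; ports carried over from the config above.)

service:
  type: LoadBalancer   # was ClusterIP; the controller error asks for NodePort or LoadBalancer when target-type is "instance"
  port: 80
  targetPort: 3000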

Vikram Yerneni avatar
Vikram Yerneni

But without defining the ClusterIP, how will the service be created with the proper setup for external or even internal access??

Vikram Yerneni avatar
Vikram Yerneni

either way, let me try to change the service.type and will see what it does

Vikram Yerneni avatar
Vikram Yerneni

Actually, I am guessing I need to add this

service.loadBalancerIP: IP address to assign to load balancer (if supported); default: nil
Pierre Humberdroz avatar
Pierre Humberdroz

You can not create an ALB for a ClusterIP Service; if you would like to use a Load Balancer you will need to switch the service type.

Vikram Yerneni avatar
Vikram Yerneni

ooh wow..

Vikram Yerneni avatar
Vikram Yerneni

ok ok

Vikram Yerneni avatar
Vikram Yerneni

let me change the service type to LoadBalancer and see what it does then

Vikram Yerneni avatar
Vikram Yerneni

Son of a gun… It worked…

Vikram Yerneni avatar
Vikram Yerneni

Boy, u r amazing…!!!

Vikram Yerneni avatar
Vikram Yerneni

I truly appreciate your help here Pierre!!!!

Pierre Humberdroz avatar
Pierre Humberdroz

no worries

2020-04-17

Zachary Loeber avatar
Zachary Loeber

curious if anyone has taken a look at Keptn yet, https://keptn.sh/

Keptn - A message-driven control plane for application delivery and automated operations

Building the fabric for cloud-native lifecycle automation at enterprise scale

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)
[EKS] [request]: Ability to configure pod-eviction-timeout · Issue #159 · aws/containers-roadmap

Tell us about your request I would like to be able to make changes to configuration values for things like kube-controller. This enables a greater customisation of the cluster to specific, bespoke …

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

upvote!

David Scott avatar
David Scott

I found that all of my EKS clusters that were originally created on 1.11 are missing k get cm -n kube-system kube-proxy-config. The configmap is present on clusters created on later versions. The EKS update instructions only patch the image version in kube-proxy. Has anyone else dealt with this? I’m digging into it because I want to edit the metricsBindAddress to allow Prometheus to scrape kube-proxy.
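
(Roughly what that ConfigMap holds on clusters that do have it; the data key name can vary between EKS versions, and the address change below is the Prometheus-scraping tweak being described.)

apiVersion: v1
kind: ConfigMap
metadata:
  name: kube-proxy-config
  namespace: kube-system
data:
  config: |
    apiVersion: kubeproxy.config.k8s.io/v1alpha1
    kind: KubeProxyConfiguration
    metricsBindAddress: 0.0.0.0:10249   # the default 127.0.0.1:10249 is unreachable from Prometheus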

2020-04-19

Sean Turner avatar
Sean Turner

I’m running into a bit of confusion. Does anything look glaringly out of place here?

For some reason, creating the internal NLB in AWS with the below yaml is using nodePorts. Is this normal? Trying to make spinnaker accessible over transit gateway but having difficulty
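
(The original YAML attachment isn’t in the archive; a minimal sketch of an internal NLB Service of the kind being described, names and ports assumed. With the in-tree AWS provider, a Service like this registers the cluster nodes on an auto-allocated NodePort, so seeing NodePorts is expected.)

apiVersion: v1
kind: Service
metadata:
  name: spin-gate            # hypothetical
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-type: nlb
    service.beta.kubernetes.io/aws-load-balancer-internal: "true"
spec:
  type: LoadBalancer
  selector:
    app: spin-gate
  ports:
    - port: 80
      targetPort: 8084       # hypothetical backend port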

Zachary Loeber avatar
Zachary Loeber

you should be using an IP address within the kubernetes network range right?

2020-04-20

Zachary Loeber avatar
Zachary Loeber

now this is a landing page. https://oneinfra.net/

Zachary Loeber avatar
Zachary Loeber

Just pretty waves and a single link to a github project

curious deviant avatar
curious deviant

Hello,

I am facing a dilemma that I am sure other folks must have come across.

So we have an application team deploying their service to our shared EKS cluster. The application is exposed externally via a CLB (this will be revisited in a month or so to replace it with an API gateway etc.). The challenge I am facing is that the DNS and the cert that this service manifest refers to must be created via TF. Looks like there’s no way to tell a K8s service to use a particular LB as its load balancer. We have to go the other way round: let the K8s service create the LB and refer to that in TF to find the DNS details. This fails too, so far. I am using aws_lb as a data source and trying to read the zone id of the LB created by the K8s service. How have others solved this, please?

Zachary Loeber avatar
Zachary Loeber

Got totally sidetracked today and ended up creating this little project. Setting up a local lab environment in Linux for CKA studies using terraform and libvirt: https://github.com/zloeber/k8s-lab-terraform-libvirt. It is just a nifty way to spin up 3 local ubuntu servers using terraform but fun nonetheless (well fun for me at least…)

zloeber/k8s-lab-terraform-libvirt

A Kubernetes lab environment using terraform and libvirt - zloeber/k8s-lab-terraform-libvirt

Pierre Humberdroz avatar
Pierre Humberdroz

this is cool! Maybe also something for #community-projects

Zachary Loeber avatar
Zachary Loeber

thanks Pierre, I was surprised at how well it works

2020-04-21

Vikram Yerneni avatar
Vikram Yerneni

Helm/Stable/Prometheus Server Dashboard is exposed using the alb-ingress controller. Somehow the prometheus webpage is not loading fully (a few parts of the webpage are not getting loaded and are throwing 404 errors). Here is the Ingress configuration

Vikram Yerneni avatar
Vikram Yerneni

ingress:   ## If true, Prometheus server Ingress will be created   ##   enabled: true

  ## Prometheus server Ingress annotations   ##   annotations:     kubernetes.io/ingress.class: ‘alb’     #kubernetes.io/tls-acme: ‘true’     alb.ingress.kubernetes.io/scheme: internet-facing     alb.ingress.kubernetes.io/load-balancer-attributes: ‘routing.http2.enabled=true,idle_timeout.timeout_seconds=60’     alb.ingress.kubernetes.io/certificate-arn: certname     alb.ingress.kubernetes.io/listen-ports: ‘[{“HTTP”: 80}, {“HTTPS”:443}]’     alb.ingress.kubernetes.io/actions.ssl-redirect: ‘{“Type”: “redirect”, “RedirectConfig”: { “Protocol”: “HTTPS”, “Port”: “443”, “StatusCode”: “HTTP_301”}}’   service:    annotations:     alb.ingress.kubernetes.io/target-type: ip   labels: {}   path: /*   hosts:     - prometheus.company.com

  ## Extra paths to prepend to every host configuration. This is useful when working with annotation based services.   extraPaths:    - path: /*     backend:      serviceName: ssl-redirect      servicePort: use-annotation

Vikram Yerneni avatar
Vikram Yerneni
  ingress:
    ## If true, Prometheus server Ingress will be created
    ##
    enabled: true

    ## Prometheus server Ingress annotations
    ##
    annotations:
       kubernetes.io/ingress.class: 'alb'
       #kubernetes.io/tls-acme: 'true'
       alb.ingress.kubernetes.io/scheme: internet-facing
       alb.ingress.kubernetes.io/load-balancer-attributes: 'routing.http2.enabled=true,idle_timeout.timeout_seconds=60'
       alb.ingress.kubernetes.io/certificate-arn: certname
       alb.ingress.kubernetes.io/listen-ports: '[{"HTTP": 80}, {"HTTPS":443}]'
       alb.ingress.kubernetes.io/actions.ssl-redirect: '{"Type": "redirect", "RedirectConfig": { "Protocol": "HTTPS", "Port": "443", "StatusCode": "HTTP_301"}}'
    service:
      annotations:
        alb.ingress.kubernetes.io/target-type: ip
    labels: {}
    path: /*
    hosts:
       - prometheus.company.com

    ## Extra paths to prepend to every host configuration. This is useful when working with annotation based services.
    extraPaths:
     - path: /*
       backend:
         serviceName: ssl-redirect
         servicePort: use-annotation
Vikram Yerneni avatar
Vikram Yerneni

Sorry for the mishap

Vikram Yerneni avatar
Vikram Yerneni

Anyone gone through this issue before fellas?

joey avatar

what’s the address for prometheus-server or grafana configured as? does it match the url you’re using to hit the alb? if you look at inspect and see what the request host and uri are for the assets not being loaded, are you requesting the right resource?

Vikram Yerneni avatar
Vikram Yerneni

Do you mean the “hosts” section, Joey?

Vikram Yerneni avatar
Vikram Yerneni
hosts:
       - prometheus.company.com
Vikram Yerneni avatar
Vikram Yerneni

I just checked, and the domain url under the “hosts” section is the one I used, and it’s the one being loaded

Vikram Yerneni avatar
Vikram Yerneni

However, there are multiple redirects happening

joey avatar

no, i mean the prometheus server dashboard or whatever service it is you’re hitting when you hit that ingress

joey avatar

i’m just wondering if the things that aren’t loading aren’t loading because you’re getting an incorrect url

Vikram Yerneni avatar
Vikram Yerneni

Yes, the prometheus server dashboard will be accessible at the url defined in the hosts section, and that’s how you access it

Vikram Yerneni avatar
Vikram Yerneni

And thats where the issue is

Vikram Yerneni avatar
Vikram Yerneni

prometheus server dashboard is not getting loaded fully

joey avatar

if you open inspect mode in chrome or ff or whatever browser you’re using, for the objects that are not being loaded, is the host being requested the same as all the other assets?

Vikram Yerneni avatar
Vikram Yerneni

Yes, I used the developer tools and verified the domain names and its all using the proper domain name

Vikram Yerneni avatar
Vikram Yerneni

I fixed the issue… thanks joey

joey avatar

what was it?

Szymon avatar

hi, any idea how I can change the language of the Minikube CLI? It probably gets the setting from my locale (PL), but I’d like to force English.

2020-04-22

Ben Read avatar
Ben Read

What’s your opinion of https://fission.io?

Serverless Functions for Kubernetes - Fission

Fission is a framework for serverless functions on Kubernetes. Write short-lived functions in any language, and map them to HTTP requests (or other event triggers). Deploy functions instantly with one command. There are no containers to build, and no Docker registries to manage.

maarten avatar
maarten

Follow

Ben Read avatar
Ben Read

I’ve received some interesting comments in serverless-forum.slack.com about this

2020-04-23

Milosb avatar

Guys, I took over some k8s that I need to adjust. I see a bunch of env variables, like 50+ per deployment manifest. I don’t work much with Kubernetes, but it looks like overkill to me. What is best practice: should it be abstracted with config maps, is there any other recommendation, or is that approach fine?

mfridh avatar

When is it too much… 5? 10? 100?

It’s hard to say without knowing what those variables all mutate, which I assume is what they do.

If variables are mostly the same across many different deployments, having something that “generates” them on the fly based on some external source could be an abstraction which may or may not suit your taste …

If you prefer explicitness to carry all the way into the deployment manifests however, you probably want the dynamic generation to happen outside of Kubernetes - in some form of manifest generation..

I may be biased, but I really like chamber, regardless of Kubernetes or not.

https://github.com/segmentio/chamber

chamber exec path/to/global/variables path/to/deployment/specific/variables -- my-binary
segmentio/chamber

CLI for managing secrets. Contribute to segmentio/chamber development by creating an account on GitHub.

Zachary Loeber avatar
Zachary Loeber

If you convert the individual env vars into a config map your eyes will thank you when you have to look over the deployment manifests
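
(A minimal sketch of that shape, with two ConfigMaps pulled in via envFrom: a shared “global” one plus a deployment-specific one. Names and image are hypothetical.)

apiVersion: v1
kind: ConfigMap
metadata:
  name: app-env
data:
  LOG_LEVEL: info
  FEATURE_X: "true"
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: app
spec:
  replicas: 1
  selector:
    matchLabels:
      app: app
  template:
    metadata:
      labels:
        app: app
    spec:
      containers:
        - name: app
          image: example/app:latest    # hypothetical
          envFrom:                     # every key in each map becomes an env var
            - configMapRef:
                name: global-env       # shared across deployments (assumed to exist)
            - configMapRef:
                name: app-env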

Zachary Loeber avatar
Zachary Loeber

Plus, you can then look at controllers to auto restart your deployments when/if the configmaps change

mfridh avatar

That I can agree with for sure .

Zachary Loeber avatar
Zachary Loeber

chamber looks sweet, too bad it is provider specific

mfridh avatar

also.. you can combine it from several configmaps, right?

mfridh avatar

And then you could (if several deployment shares “environment globals” so to speak) - combine environment variables in a nice way. I think it’s described in the doc somewhere, let me see.

Zachary Loeber avatar
Zachary Loeber

Totally

Zachary Loeber avatar
Zachary Loeber

weirdly enough there are cases where you can benefit from multiple config maps

mfridh avatar

Thinking about something specific other than this “hierarchical” combination of globals/env/service?

Zachary Loeber avatar
Zachary Loeber

Nah, it technically boils down to your succinct statement

Zachary Loeber avatar
Zachary Loeber

I had one where the base deployment was an app that needed some variables based on the cluster that was getting deployed within the pipeline, but they later wanted to push out specific updates to config elements that were client-specific

Zachary Loeber avatar
Zachary Loeber

so, yeah, basically hierarchical combo

Milosb avatar

Thanks guys, it was really helpful

Milosb avatar

@Zachary Loeber Did you use any controller which tracks configmap/secret changes and restarts pods if there is a change?

Zachary Loeber avatar
Zachary Loeber

I did not unfortunately, seems pretty easy to do though.

Zachary Loeber avatar
Zachary Loeber

in lower environments I just didn’t see the need (my pipelines would always push more recent deployments based on either build id or git commit hash tagged containers)

Milosb avatar

I was able to utilize this one: https://github.com/pusher/wave

pusher/wave

Kubernetes configuration tracking controller. Contribute to pusher/wave development by creating an account on GitHub.

Milosb avatar

maybe there are alternatives

Zachary Loeber avatar
Zachary Loeber

This could be promising -> https://www.kubestack.com/

Home

Open source Gitops framework built on Terraform and Kustomize.

2020-04-24

Zachary Loeber avatar
Zachary Loeber

k8s-deployment-book, uses kustomize and kubecutr (a custom kube scaffolding tool by the same author) which may not be everyone’s thing but still worth a once over anyway as it is well thought out -> https://github.com/mr-karan/k8s-deployment-book

mr-karan/k8s-deployment-book

Kubernetes - Production Deployments for Developers (Book) - mr-karan/k8s-deployment-book

Christian Roy avatar
Christian Roy

Hi ppl! What do you use to keep your secrets, well… secret, when it comes to your yaml files stored in a repo? Do you store them elsewhere? Do you use tools like sealed-secrets, helm-secrets or Kamus?

joey avatar

store your secrets in vault and grab them on startup

Christian Roy avatar
Christian Roy

Im not familiar with vault. Is that something that runs in k8s?

joey avatar
Vault by HashiCorp

Vault secures, stores, and tightly controls access to tokens, passwords, certificates, API keys, and other secrets in modern computing. Vault handles leasing, key revocation, key rolling, auditing, and provides secrets as a service through a unified API.

Christian Roy avatar
Christian Roy

If that runs on k8s, does that mean vault becomes the source of truth and you dont keep a copy anywhere?

Christian Roy avatar
Christian Roy

I dont want to commit the secrets in git (unless they are properly encrypted)

joey avatar

the way i’ve done it is running vault in a separate cluster/as its own piece of infrastructure, and then in my pods i have a vault agent init container that authenticates to vault, grabs secrets, and passes them in files or as environment variables to relevant containers
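
(joey’s setup is a hand-rolled init container; HashiCorp’s Vault Agent Injector packages the same fetch-and-write flow behind pod annotations via a mutating webhook. A sketch assuming the injector is installed and a Kubernetes-auth role named "app" exists; names and paths are assumptions.)

apiVersion: apps/v1
kind: Deployment
metadata:
  name: app
spec:
  replicas: 1
  selector:
    matchLabels:
      app: app
  template:
    metadata:
      labels:
        app: app
      annotations:
        vault.hashicorp.com/agent-inject: "true"
        vault.hashicorp.com/role: "app"                                          # Vault Kubernetes-auth role (assumed)
        vault.hashicorp.com/agent-inject-secret-db-creds: "secret/data/app/db"   # Vault path (assumed)
    spec:
      serviceAccountName: app          # what Vault's Kubernetes auth method verifies
      containers:
        - name: app
          image: example/app:latest    # hypothetical; reads the rendered file at /vault/secrets/db-creds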

joey avatar

it’s not a trivial exercise, but it’s clean and can be provider agnostic, as opposed to using secret manager from $cloud_provider

Christian Roy avatar
Christian Roy

Im not as concerned about them being secured in k8s as having a secure copy elsewhere…

Christian Roy avatar
Christian Roy

I’m using helm charts for my apps…

Christian Roy avatar
Christian Roy

so I have a configmaps.yaml in that chart

joey avatar

have you checked out git-crypt for storing encrypted secrets in git?

Christian Roy avatar
Christian Roy

Not yet. I looked at sealed-secrets, helm-secrets or Kamus so far. I’ll check that out

Christian Roy avatar
Christian Roy

thanks

Adam Blackwell avatar
Adam Blackwell

Also using Hashicorp Vault and very happy with it, though pulling secrets into static files or environment variables with Kustomize has gotten a bit complicated.

bradym avatar

I’m using AWS SSM Parameter Store ( https://docs.aws.amazon.com/systems-manager/latest/userguide/systems-manager-parameter-store.html) which I like as it gives me a source of truth outside of the cluster.

We’re currently switching over to using helmfile (https://github.com/roboll/helmfile) which has built in support for retrieving values from SSM parameter store (and other systems, including vault) by using vals (https://github.com/variantdev/vals)

AWS Systems Manager Parameter Store - AWS Systems Manager

AWS Systems Manager Parameter Store provides secure, hierarchical storage for configuration data management and secrets management.

roboll/helmfile

Deploy Kubernetes Helm Charts. Contribute to roboll/helmfile development by creating an account on GitHub.

variantdev/vals

Helm-like configuration values loader with support for various sources - variantdev/vals
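
(A sketch of that wiring; release, chart and parameter path are assumptions, and exactly where the ref string may be placed can vary with the helmfile version.)

# helmfile.yaml
releases:
  - name: myapp
    namespace: default
    chart: stable/myapp
    values:
      - db:
          # vals resolves this from SSM Parameter Store at render time
          password: ref+awsssm://myapp/prod/db_password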

joey avatar

nice, hadn’t seen vals.

Christian Roy avatar
Christian Roy

Thanks

bradym avatar

Sure, let me know if you decide to look into any of that stuff and have questions. I’m still pretty new to helmfile myself but happy to help if I can.

Ayman avatar

As for vault, someone just released some Terraform to deploy it on AWS: https://github.com/jcolemorrison/vault-on-aws

jcolemorrison/vault-on-aws

A secure Vault for secrets, tokens, keys, passwords, and more. Automated deployment with Terraform on AWS. Configurable options for security and scalability. Usable with any applications and se…

Ayman avatar

It is quite complicated though, so the folks at Segment created chamber as a result: https://github.com/segmentio/chamber

segmentio/chamber

CLI for managing secrets. Contribute to segmentio/chamber development by creating an account on GitHub.

Zachary Loeber avatar
Zachary Loeber

When on Azure and without a centralized Hashicorp Vault deployment I leaned on using keyvault with this project to auto-inject secrets using mutating admission webhooks: https://github.com/SparebankenVest/azure-key-vault-to-kubernetes

SparebankenVest/azure-key-vault-to-kubernetes

Azure Key Vault to Kubernetes (akv2k8s for short) makes it simple and secure to use Azure Key Vault secrets, keys and certificates in Kubernetes. - SparebankenVest/azure-key-vault-to-kubernetes

Zachary Loeber avatar
Zachary Loeber

the concept is similar for other operators as well though (hashicorp’s vault operator does the same I believe). That way you are not putting your secrets anywhere at all except in your secret store and pulling them into deployments if the cluster is authorized to do so.

Zachary Loeber avatar
Zachary Loeber

or you can pre-seed cluster secrets as well I suppose. I’ve done this as well but you are then pushing the responsibility for secrets deployment to an upstream pipeline (the one that creates your cluster generally).

github140 avatar
github140

How do you authenticate to vault, and how do you store those credentials?

joey avatar

service accounts

github140 avatar
github140

How do you protect/secure it?

Milosb avatar

I have been working the last couple of days with GoDaddy’s external-secrets implementation. You can integrate AWS Secrets Manager, Parameter Store or even Vault with it.

Milosb avatar

I am pretty happy so far

Adam Blackwell avatar
Adam Blackwell

Does anyone have any stack graph-esque minikube development flows that they would recommend?

We’re using ArgoCD + smashing the sync button and I’ve looked at how https://garden.io/#iterative figured out smart local redeployments but I’d like to know how others are doing it and if our (certmanager->vault->mysql + elasticsearch) -> the actual app local dev deployment is abnormally complex or slow. (currently takes three syncs and ~8 minutes to go from minikube up to running.)

Garden
Faster Kubernetes development and testing

2020-04-25

2020-04-26

2020-04-27

rms1000watt avatar
rms1000watt

anyone have to fine tune Nginx for performance in k8s?

worker_processes 2;

events {
  worker_connections 15000;
}

For a 2 CPU container, with 65k file descriptor limit.. thinking this would be safe. I have a generous k8s HPA also, so maybe fine tuning is a frivolous exercise
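
(For reference, the container resources that line up with those numbers; a pod-spec fragment with illustrative values.)

containers:
  - name: nginx
    image: nginx:1.17            # hypothetical
    resources:
      requests:
        cpu: "2"                 # one worker_process per CPU is the usual rule of thumb
        memory: 256Mi
      limits:
        cpu: "2"
        memory: 512Mi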

2020-04-28

2020-04-30

Pierre Humberdroz avatar
Pierre Humberdroz

btw: https://github.com/kubernetes/ingress-nginx/blob/master/Changelog.md

Seems like ingress-nginx got a couple of bigger updates in the last days.

kubernetes/ingress-nginx

NGINX Ingress Controller for Kubernetes. Contribute to kubernetes/ingress-nginx development by creating an account on GitHub.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

@Pierre Humberdroz thanks for sharing

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

that is a long list

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

anything jump out at you? mostly looks like bug fixes to me. no enhancements stand out that I want to try

bradym avatar


Helm chart stable/nginx-ingress is now maintained in the ingress-nginx repository
According to https://github.com/kubernetes/ingress-nginx/issues/5161 there is documentation in the works for migrating from charts/stable to the new location.

Helm chart TODO list · Issue #5161 · kubernetes/ingress-nginx

Rename chart to ingress-nginx add common labels: app.kubernetes.io/name: ingress-nginx app.kubernetes.io/part-of: ingress-nginx default backend should be disabled by default webhook should be enabl…

Pierre Humberdroz avatar
Pierre Humberdroz

yeah, the stable/incubator helm repos are no longer supported

bradym avatar

Wow, I hadn’t seen that yet. That’s a big change.

jedineeper avatar
jedineeper

Is there a method to promote objects across api versions? E.g. deployments have moved from extensions/v1beta1 to apps/v1. Can they be updated in place or do they need to be destroyed and recreated?
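
(For reference, the bump usually amounts to re-applying the same object under the new group/version rather than recreating it, since the API server stores a single object and serves it under every version it supports. A sketch of a Deployment after the move, with the selector field apps/v1 newly requires; names and image are hypothetical.)

apiVersion: apps/v1              # was extensions/v1beta1
kind: Deployment
metadata:
  name: app
spec:
  selector:                      # required (and immutable) in apps/v1
    matchLabels:
      app: app
  template:
    metadata:
      labels:
        app: app
    spec:
      containers:
        - name: app
          image: example/app:latest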
