This week we’re releasing an official Helm Chart for Vault. Using the Helm Chart, you can start a Vault cluster running on Kubernetes in just minutes. This Helm chart will also be …
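For anyone who wants to kick the tires, a minimal sketch of installing the chart (Helm 2-era syntax; the repo URL and label selector are assumptions, not from the announcement):

```shell
# Sketch: install the Vault Helm chart from HashiCorp's vault-helm repo
# (chart location and Helm 2 --name flag are assumptions about the 2019-era workflow)
git clone https://github.com/hashicorp/vault-helm.git
helm install ./vault-helm --name vault

# Check that the Vault pods came up (label assumed from chart conventions)
kubectl get pods -l app=vault
```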
I’m hitting an issue in TF 0.11.14 when creating multiple clusters:
* module.eks.local.kubeconfig: local.kubeconfig: Resource 'aws_eks_cluster.eks' does not have attribute 'certificate_authority.0.data' for variable 'aws_eks_cluster.eks.*.certificate_authority.0.data'
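One workaround that’s commonly used for this class of TF 0.11 splat error (the resource name matches the error above; the local name is illustrative) is to guard the splat with `element(concat(...))` so it resolves even when the resource uses `count`:

```hcl
# TF 0.11 sketch: wrap the splat so the attribute resolves when count > 0.
# "aws_eks_cluster.eks" comes from the error message; the local name is made up.
locals {
  eks_ca_data = "${element(concat(aws_eks_cluster.eks.*.certificate_authority.0.data, list("")), 0)}"
}
```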
Hi, anyone here have experience with Flux where it keeps re-applying manifests even if nothing was changed?
We’ve integrated it with Keycloak + Gatekeeper (kops + k8s dashboard)
Web platform and OpenSource solutions specialist
is there an easy way to grab the headers on the http request from one service to another in a k8s cluster
Like for observability?
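One quick-and-dirty option (just a sketch, not a proper observability setup — the service name is a placeholder) is to exec a verbose curl from inside the cluster and read the headers off the wire; for ongoing capture, a service-mesh sidecar is the usual answer:

```shell
# Sketch: throwaway pod that shows request/response headers with curl -v.
# "my-service.default.svc" is a placeholder for the target service.
kubectl run debug-curl --rm -it --image=curlimages/curl --restart=Never -- \
  curl -v http://my-service.default.svc/
```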
Hello Kubernetes Community, A security issue has been found in the net/http library of the Go language that affects all versions and all components of Kubernetes. The vulnerabilities can result in a DoS against any process with an HTTP or HTTPS listener. Am I vulnerable? Yes. All versions of Kubernetes are affected. Go has released versions go1.12.8 and go1.11.13, and we have released the following versions of Kubernetes built using patched versions of Go. Kubernetes v1.15.3 - go1.12.9 Kub…
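A quick way to check where you stand (sketch; v1.15.3 is the patched release named in the announcement — check the announcement for your minor version):

```shell
# Sketch: show the server version to compare against the patched releases
kubectl version --short | grep Server
```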
@Erik Osterman (Cloud Posse) but why?
and how do we have to treat them? what’s the best way to do this? i think he meant that if you are at this level you are already accommodated…
kelsey has a theme going on right now to remove some of the sugar coating around kubernetes. it’s been touted as this magical container platform that solves all our problems. the reality is that, like any other piece of software you run, there are tradeoffs. one of the common best practices is to toss traditional DR out the window; no more treating servers (and services) as “pets”, instead treat them as cattle. the crude analogy with cattle is that if they get sick you put them down rather than spend thousands at the vet making them well again. with servers, it’s a little less crude: terminate them and move on. kubernetes makes that very easy, however, there’s still an operator responsible for kubernetes. it’s not “serverless”. So like the rancher responsible for the cattle, we are in the end responsible for the cluster. Not everything will be fully automated in an unattended fashion (or should be).
anyone not running k8s 1.13+ in production? now that CVEs aren’t being fixed in 1.11, what are y’all’s strategies? I feel like everyone I talk to is still on 1.11
EKS or kops?
We just upgraded our first kops clusters from 1.12 to the latest release in 1.13
how was the upgrade from 1.11 to 1.12?
Been testing that recently
Fairly mixed results in terms of predictability, at least on my side thus far
Though not something I will be able to focus on, at least at the current company I’m at
is it possible to Clone existing Google cloud Kubernetes cluster using gcloud command line options? I see the documentation available for cloning existing cluster manually from GCP console (https://cloud.google.com/kubernetes-engine/docs/how-to/creating-a-cluster#clone-existing-cluster)
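As far as I know there’s no single `gcloud` “clone” command — the console’s Duplicate button just pre-fills the create form. A sketch of approximating it from the CLI (cluster names, zone, and flags are placeholders you’d map by hand from the describe output):

```shell
# Dump the existing cluster's config for reference
gcloud container clusters describe my-cluster --zone us-central1-a \
  --format=yaml > cluster.yaml

# Recreate a new cluster by passing matching flags back to create
# (flags here are illustrative; copy values from the describe output)
gcloud container clusters create my-cluster-copy \
  --zone us-central1-a \
  --num-nodes 3 \
  --machine-type n1-standard-2
```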
Hi, I’m using the nginx ingress controller to expose the thanos sidecar. I’ve validated that the service is set up correctly and responding as expected, but when going through nginx I get a 400 error:
00.00.00.00 - [00.00.00.00] - - [27/Aug/2019:23:58:28 +0000] "PROXY TCP4 00.00.00.00 00.00.00.00 44782 30226" 400 163 "-" "-" 0 0.000   - - - -...
(edited out the ip addresses)
this is the ingress config:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  annotations:
    external-dns.alpha.kubernetes.io/hostname: foo.bar
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/backend-protocol: GRPC
    nginx.ingress.kubernetes.io/ssl-redirect: "true"
    service.beta.kubernetes.io/aws-load-balancer-internal: "true"
    service.beta.kubernetes.io/aws-load-balancer-ssl-cert: foo
  creationTimestamp: "2019-08-27T23:52:59Z"
  generation: 1
  labels:
    service: thanos-sidecar
  name: thanos-sidecar
  namespace: monitoring
  resourceVersion: "foo"
  selfLink: /apis/extensions/v1beta1/namespaces/monitoring/ingresses/thanos-sidecar
  uid: foo
spec:
  rules:
  - host: foo.bar
    http:
      paths:
      - backend:
          serviceName: thanos-sidecar
          servicePort: grpc
status:
  loadBalancer:
    ingress:
    - hostname: foo.bar
Enabled TLS and still getting 400 errors
00.00.00.00 - [00.00.00.00] - - [28/Aug/2019:02:13:40 +0000] "PRI * HTTP/2.0" 400 163 "-" "-" 0 0.002   - - - -
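For what it’s worth, those two 400s have recognizable signatures: `PROXY TCP4 ...` appearing as the request line means the ELB is sending proxy protocol that nginx isn’t configured to parse, and `PRI * HTTP/2.0` is the HTTP/2 connection preface hitting an HTTP/1 listener — ingress-nginx only speaks gRPC over its TLS port. A sketch of the controller ConfigMap tweak for the first one (name and namespace assume a standard ingress-nginx install):

```yaml
# Sketch: tell ingress-nginx to parse proxy protocol from the ELB.
# ConfigMap name/namespace assume a standard ingress-nginx deployment.
apiVersion: v1
kind: ConfigMap
metadata:
  name: nginx-configuration
  namespace: ingress-nginx
data:
  use-proxy-protocol: "true"
```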
I haven’t looked at the material, but I saw it advertised as “agnostic” which is really nice
Linked it to my team this morning, I’ll pass on any feedback I get if they try it
HashiCorp has finished work on Consul 1.6 and offered a first look at upcoming Vault features aimed at users of the container orchestrator Kubernetes.