#kubernetes

Archive: https://archive.sweetops.com/kubernetes/

2019-06-30

abkshaw

I am stuck while installing Kubeadm in AWS on Amazon linux

Below is the error I get after running sudo yum install -y kubeadm

[Errno -1] repomd.xml signature could not be verified for kubernetes

Please help me to get over it.

Glenn J. Mason

Hey @abkshaw, sounds a little similar to https://github.com/kubernetes/kubernetes/issues/60134

Got "repomd.xml signature could not be verified for kubernetes" error when installing Kubernetes from yum repo on Amazon Linux 2 · Issue #60134 · kubernetes/kubernetes

Is this a BUG REPORT or FEATURE REQUEST?: /kind bug What happened: I'm trying to install Kubernetes on Amazon Linux 2 as described here, but I get error: [[email protected] ~]$ sudo yum install …

abkshaw

@Glenn J. Mason Thanks a lot. It’s exactly what I was searching for.

2019-06-28

Erik Osterman
kubernetes/community

Kubernetes community content. Contribute to kubernetes/community development by creating an account on GitHub.

2019-06-25

2019-06-24

Ayo Bami
11:08:17 PM

Hi guys, could someone please help me? I just created an EKS cluster and it’s unable to apply some changes to the cluster. I keep getting that error log. I am using the same user I used to create the cluster. I am also using an auth account, so the users are not exactly in that account; they assume a role. Not sure what I’m missing here; I’ve been trying this for days now. Thanks

Erik Osterman

How did you bring up the cluster? Are you using terraform?

Ayo Bami

@Erik Osterman with terraform

Ayo Bami
howdio/terraform-aws-eks

Terraform module which creates EKS resources on AWS - howdio/terraform-aws-eks

nutellinoit
Managing Users or IAM Roles for your Cluster - Amazon EKS

The aws-auth ConfigMap is applied as part of the guide which provides a complete end-to-end walkthrough from creating an Amazon EKS cluster to deploying a sample Kubernetes application. It is initially created to allow your worker nodes to join your cluster, but you also use this ConfigMap to add RBAC access to IAM users and roles. If you have not launched worker nodes and applied the

nutellinoit

point 3

Ayo Bami

@nutellinoit It’s a new cluster. If my user can’t access the cluster, I’m not sure how it can apply aws-auth to a cluster it can’t access.

Ayo Bami

We use an auth account to manage IAM; the accounts are not directly in the cluster.
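For context, the aws-auth ConfigMap being discussed could look roughly like this when mapping an assumed IAM role rather than individual users (account ID, role names, and groups below are placeholders). Note it can only be applied by the identity that created the cluster, which initially has exclusive access:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: aws-auth
  namespace: kube-system
data:
  mapRoles: |
    # Worker nodes join the cluster through their instance role.
    - rolearn: arn:aws:iam::111122223333:role/eks-worker-node-role
      username: system:node:{{EC2PrivateDNSName}}
      groups:
        - system:bootstrappers
        - system:nodes
    # The role users assume from the auth account gets admin RBAC access.
    - rolearn: arn:aws:iam::111122223333:role/eks-admin
      username: eks-admin
      groups:
        - system:masters
```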

2019-06-23

sweetops

Hey @cabrinha I’d be interested in hearing anything you ran into with aws-okta and EKS. I’m going to be starting down that road this week.

2019-06-20

Nikola Velkovski
hjacobs/kubernetes-failure-stories

Compilation of public failure/horror stories related to Kubernetes - hjacobs/kubernetes-failure-stories

Erik Osterman

it’s worth resharing

hjacobs/kubernetes-failure-stories

Compilation of public failure/horror stories related to Kubernetes - hjacobs/kubernetes-failure-stories

Nikola Velkovski

I got to the Spotify video, which is kinda cool; they admit their rookie mistakes around terraform

Ribhararnus Pracutiar

Hi guys, how do you connect to a VPN from inside pods? Any recommendations out there? Basically, I have to connect to client data on premise, using only 1 IP

cabrinha

is anyone here using aws-okta with EKS? I’m having trouble granting additional roles access to the cluster.

2019-06-19

anyone using this on their clusters? https://github.com/buzzfeed/sso

buzzfeed/sso

sso, aka S.S.Octopus, aka octoboi, is a single sign-on solution for securing internal services - buzzfeed/sso

sweetops

Looks interesting

Erik Osterman

Doesn’t support websockets, so it was a deal breaker for us

Erik Osterman

things like the k8s dashboard or grafana require that

Erik Osterman

bite the bullet. just deploy Keycloak with Gatekeeper

haven’t heard of keycloak/gatekeeper

Erik Osterman

I can give you a demo

Erik Osterman

it’s open source, by redhat

Erik Osterman

we have the helmfiles for it too

does it integrate w/google saml?

Erik Osterman

yup, that’s the beauty with keycloak

Erik Osterman

it basically supports every saml provider

yeah it looks like

Erik Osterman

and we use it with gsuite

Erik Osterman

not only that, you can use it with https://github.com/mulesoft-labs/aws-keycloak

mulesoft-labs/aws-keycloak

aws-vault like tool for Keycloak authentication. Contribute to mulesoft-labs/aws-keycloak development by creating an account on GitHub.

Erik Osterman

with aws

Erik Osterman

it can become the central auth service for everything

nice

Erik Osterman

we use it with kubernetes, teleport, atlantis, grafana, etc

yeah super nice, fully integrated

Erik Osterman

you can even integrate it with multiple auth providers at the same time

do helmfiles have a remote “helm chart” that is used as the base?

Erik Osterman

not sure, better to check in #helmfile

ah yeah it does

cloudposse/helmfiles

Comprehensive Distribution of Helmfiles. Works with helmfile.d - cloudposse/helmfiles

Erik Osterman

oh, “base” is a loaded term now

Erik Osterman

since helmfile has the concept of bases

Erik Osterman

this is not a base in that sense

Erik Osterman

like uh

base docker image base

Erik Osterman

aha

Erik Osterman

yes, we use the community image

Erik Osterman

but guess it would be more secure to run our own

Erik Osterman

given the role this plays

i guess im not being clear enough

i was curious if the helm chart is just abstracted out

for a helmfile

Erik Osterman

for the gatekeeper, we’re doing something a bit unusual/clever

Erik Osterman

we are defining an environment

Erik Osterman

then using that to generate a release for each service in the environment

Erik Osterman

the alternative is to use sidecars or automatic sidecar injection

Erik Osterman
stakater/ProxyInjector

A Kubernetes controller to inject an authentication proxy container to relevant pods - [✩Star] if you’re using it! - stakater/ProxyInjector

ah

Erik Osterman

Here’s what our environments file looks like

Erik Osterman
services:
  - name: dashboard
    portalName: "Kubernetes Dashboard - Staging"
    host: dashboard.xx-xxxxx-2.staging.xxxxxx.io
    useTLS: true
    skipUpstreamTlsVerify: true
    upstream: https://kubernetes-dashboard.kube-system.svc.cluster.local
    rules:
      - "uri=/*|roles=kube-admin,dashboard|require-any-role=true"
    debug: false
    replicas: 1
  - name: forecastle
    host: portal.xx-xxxx-2.xxxx.xxxx.io
    useTLS: true
    upstream: http://forecastle.kube-system.svc.cluster.local
    rules:
      - "uri=/*|roles=kube-admin,user,portal|require-any-role=true"
...
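For context, a hedged sketch of what each generated Gatekeeper release might render for the dashboard entry above (the image tag, realm URL, and secret wiring are assumptions; the flags follow keycloak-gatekeeper’s documented options):

```yaml
# Illustrative only: one gatekeeper proxy fronting the dashboard upstream.
containers:
  - name: gatekeeper
    image: keycloak/keycloak-gatekeeper:7.0.0   # hypothetical tag
    args:
      - --listen=0.0.0.0:3000
      - --discovery-url=https://keycloak.example.com/auth/realms/main  # placeholder realm
      - --client-id=dashboard
      - --client-secret=$(CLIENT_SECRET)        # injected from a Secret
      - --upstream-url=https://kubernetes-dashboard.kube-system.svc.cluster.local
      - --skip-upstream-tls-verify=true
      - --resources=uri=/*|roles=kube-admin,dashboard|require-any-role=true
```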

i see

2019-06-18

Hugo Lesta

Hello @davidvasandani, thanks for the article that you’ve written. Could you please tell me the main capabilities that Traefik has as an ingress controller? Do you have any article covering these capabilities?

davidvasandani

Hi @Hugo Lesta. Not my article but Traefik has many capabilities. https://docs.traefik.io/configuration/backends/kubernetes/

Hugo Lesta

The article you sent me seems worthwhile; I’ll try to improve my knowledge of Traefik on k8s.

davidvasandani

It’s good and helped me out, but it’s incomplete. The author mentions using LoadBalancer locally but doesn’t describe how. With a lot of additional work I’ve gotten it working with MetalLB locally. This was a very useful article: https://medium.com/@JockDaRock/kubernetes-metal-lb-for-docker-for-mac-windows-in-10-minutes-23e22f54d1c8

2019-06-17

2019-06-16

2019-06-13

what ingress controller are you guys using? it seems like alb-ingress-controller isn’t quite robust enough for me. things that i feel like it’s missing:

  1. a new ingress object = a new ALB, so there would be a one-to-one mapping of ALBs to services for me (multi-tenant cluster)
  2. provisioned resources don’t get cleaned up; at this point i feel like i might want to terraform the load balancer resources i need with the cluster
Erik Osterman

yea, the 1:1 mapping between ingress and ALB sucks!

maarten

I’m using Ambassador. A lot of features for routing traffic based on any kind of headers, regex matching, Jaeger tracing. You name it :)

@maarten does ambassador spin up cloud resources for you? (load balancers, security groups, etc)

i realized that might not be a feature I want in k8s as of now, since terraform is better at managing cloud resource state

davidvasandani

Can someone point me to best practices for setting up Traefik/Nginx-Proxy/etc as an ingress for Kubernetes running on 80? Everything is running but ClusterIP is internal and NodePort doesn’t allow ports below 30000. What am I missing?

kskewes

Service of type LoadBalancer. Then the cloud provider gives you an IP, or you use something like MetalLB on bare metal. Deploy nginx ingress or whatever. Can replicate per AZ.
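In manifest form, the suggestion above might look like this (a sketch; the selector and namespace depend on how the ingress controller was deployed, and on bare metal MetalLB supplies the external IP):

```yaml
# LoadBalancer Service exposing the ingress controller on 80/443;
# the cloud provider (or MetalLB) allocates the external IP.
apiVersion: v1
kind: Service
metadata:
  name: ingress-nginx
  namespace: ingress-nginx
spec:
  type: LoadBalancer
  selector:
    app.kubernetes.io/name: ingress-nginx
  ports:
    - name: http
      port: 80
      targetPort: 80
    - name: https
      port: 443
      targetPort: 443
```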

davidvasandani

metallb was exactly what I needed. Thanks @kskewes

davidvasandani
Kubernetes & Traefik locally with a wildcard certificate

As a passionate software engineer at Localz, I get to tinker with fancy new tools (in my own time) and then annoy my coworkers by…

davidvasandani

but he’s using a LoadBalancer w/ Docker for Mac Kubernetes which doesn’t make sense.

2019-06-12

maarten

Hi all! Has someone faced this error before?

kernel:[22989972.720097] unregister_netdevice: waiting for eth0 to become free. Usage count = 1

Erik Osterman

Public #office-hours starting now! Join us on Zoom if you have any questions. https://zoom.us/j/684901853

2019-06-11

Sandeep Kumar

Hey guys, has anyone configured SMTP as a Grafana ConfigMap for kubernetes?

Erik Osterman

Don’t have first hand experience

Erik Osterman

Let me know if you get it working though. We should set up the same in our helmfile.

Sandeep Kumar

Sure Erik

Sandeep Kumar

apiVersion: v1
kind: ConfigMap
metadata:
  labels:
    app: grafana
  name: grafana-smtp-config-map
  namespace: monitoring
data:
  grafana.ini: |
    enabled = true
    host = <host>
    user = <user>
    password = <password>
    skip_verify = false
    from_address = <email>
    from_name = Grafana
    welcome_email_on_sign_up = false

Sandeep Kumar

Ex: something like this

Sandeep Kumar

and adding this config map in the kubernetes grafana deployment:

- configMap:
    defaultMode: 420
    name: grafana-smtp-config-map
  name: grafana-smtp-config-map

Sandeep Kumar

i am trying the above methods to add SMTP to grafana.ini

Sandeep Kumar

but I am unable to add SMTP to grafana.ini. Is there any documentation/suggestions which can help me here?
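One likely culprit in the snippet above: grafana.ini is INI-formatted, so the SMTP settings must sit under an [smtp] section header (and the welcome email flag under [emails]). A hedged sketch with placeholder host, credentials, and addresses:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: grafana-smtp-config-map
  namespace: monitoring
  labels:
    app: grafana
data:
  grafana.ini: |
    [smtp]
    enabled = true
    host = smtp.example.com:587
    user = grafana
    password = changeme
    skip_verify = false
    from_address = grafana@example.com
    from_name = Grafana

    [emails]
    welcome_email_on_sign_up = false
```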

timduhenchanter

Does anyone have any experience scaling with custom metrics from Datadog across namespaces (or the external metrics API in general)?

timduhenchanter
apiVersion: autoscaling/v2beta1
kind: HorizontalPodAutoscaler
metadata:
  name: service-template
spec:
  minReplicas: 1
  maxReplicas: 3
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: service-template
  metrics:
  - type: External
    external:
      metricName: k8s.kong.default_service_template_80.request.count
      metricSelector:
        matchLabels:
            app: kong
      targetAverageValue: 5
timduhenchanter
  Warning  FailedGetExternalMetric       117s (x40 over 11m)  horizontal-pod-autoscaler  unable to get external metric default/k8s.kong.default_service_template_80.request.count/&LabelSelector{MatchLabels:map[string]string{app: service-template,},MatchExpressions:[],}: no metrics returned from external metrics API
timduhenchanter

^ permissions issue with the Datadog API in the cluster-agent

Alex Co

hi, is anyone here using Gloo Gateway on k8s?

Alex Co

i’m having a problem where the virtual service stops accepting traffic after a while, and the status on the ELB for the gloo gateway proxy shows that it’s OutOfService

Alex Co

wonder if anyone here got the same problem

2019-06-10

Nikola Velkovski

@rj I saw that as well but I didn’t know if it was any good.

2019-06-09

@Nikola Velkovski give Rancher a try. In our experience it is the easiest way to spin up k8s on multiple clouds. https://rancher.com/

Run Kubernetes Everywhere

Rancher, open source multi cluster management platform, makes it easy for operations teams to deploy, manage and secure Kubernetes everywhere. Request a demo!

2019-06-08

Nikola Velkovski

that sounds a lot like AWS Elasticsearch @btai; terraform apply usually times out when upgrading the ES cluster

Nikola Velkovski

thanks!

2019-06-07

Nikola Velkovski

Hi people, do you know of a best/sane way to install k8s on AWS? I see that there are multiple ways to do it. I am eyeing kops (because terraform, duh), but before creating the cluster there’s still a lot of preparation to do, like:

- creating a VPC

- a kops state bucket

- a route53 record

And then all of it has to be passed on to kops as a CLI command. This is all fine, but to me it looks like a bit too much. Is there any other way of doing it?

aaratn

I use EKS

aaratn

with terraform

Nikola Velkovski

ok that’s a way

Nikola Velkovski

and how do you handle upgrades? I read somewhere that it’s a bit tricky with EKS

aaratn

You mean master version upgrade ?

Nikola Velkovski

like k8s 1.2 -> 1.3 upgrade

aaratn

Yeah, well it’s a pretty new cluster

aaratn

right now my cluster is running on version 1.2

aaratn

what are the challenges that you have heard of ?

Nikola Velkovski

I don’t remember the details but I think Erik mentioned something about the upgrade in EKS is not as easy

Nikola Velkovski

I might be wrong though

aaratn
Making Cluster Updates Easy with Amazon EKS | Amazon Web Services

Kubernetes is rapidly evolving, with frequent feature releases, functionality updates, and bug fixes. Additionally, AWS periodically changes the way it configures Amazon Elastic Container Service for Kubernetes (Amazon EKS) to improve performance, support bug fixes, and enable new functionality. Previously, moving to a new Kubernetes version required you to re-create your cluster and migrate your […]

aaratn

the aws blog says it’s easy

Nikola Velkovski

:)))

Nikola Velkovski

fair enough

aaratn

of course we have multiple environments

aaratn

so we can upgrade the lower environment and check if it works

aaratn

and proceed with upgrade

Nikola Velkovski

niice

Tim Malone

they made it much easier recently - you can do it via the AWS console, just change the version

Tim Malone

then upgrade your worker nodes afterwards

Tim Malone

(but yes you’ll want to do it in non-prod first just in case)

aaratn

Terraform has a parameter for version

Nikola Velkovski

oh that sounds promising

aaratn
version – (Optional) Desired Kubernetes master version. If you do not specify a value, the latest available version at resource creation is used and no upgrades will occur except those automatically triggered by EKS. The value must be configured and increased to upgrade the version when desired. Downgrades are not supported by EKS.
Nikola Velkovski

and what about installing it? EKS gives you the master nodes only; what about getting the other nodes in EC2? is it just a matter of using cloud-init?

aaratn

We do it with an auto-scaling group

aaratn

You can follow this

Nikola Velkovski

Thanks a lot people

Nikola Velkovski

I will try it out

nutellinoit

i tried an upgrade with eks and terraform

nutellinoit

from 1.11 to 1.12

nutellinoit

it’s pretty smooth

nutellinoit

control plane upgrades without downtime

nutellinoit

to upgrade the workers the only thing to do is to update amis

nutellinoit

and replace workers

nutellinoit

and follow the directions on aws documentation to patch system deployment with new container versions

nutellinoit
Updating an Amazon EKS Cluster Kubernetes Version - Amazon EKS

When a new Kubernetes version is available in Amazon EKS, you can update your cluster to the latest version. New Kubernetes versions introduce significant changes, so we recommend that you test the behavior of your applications against a new Kubernetes version before performing the update on your production clusters. You can achieve this by building a continuous integration workflow to test your application behavior end-to-end before moving to a new Kubernetes version.

Nikola Velkovski

ok, so I guess there’s a possibility to automate replacing the instances in the ASG somehow

Nikola Velkovski

I will look into it

Nikola Velkovski

Thanks for your support people!

nutellinoit

You can simply terminate one old instance at time and wait for autoscaling group to launch replacements

Nikola Velkovski

argh, ye good ole click-ops

Nikola Velkovski

we’ve developed a lambda with step functions that does the instance replacement, step functions serving as a waiter

Nikola Velkovski

so it’s fire and forget

Nikola Velkovski

takes a while but it’s atomic

@Nikola Velkovski I’ve found the k8s upgrades to be a bit slow. The time increases (by like 5~7 minutes per worker node), so upgrades can take a long time. For me, I wouldn’t be comfortable letting the upgrade for a production cluster run unattended (i.e. overnight while I’m sleeping), and naturally your production cluster probably has the most worker nodes. What I’ve found works pretty well for me is just using terraform to spin up a new cluster, deploying to the new cluster, and doing the cutover at the DNS level. Food for thought.

Erik Osterman

I think an elegant approach is to spin up an additional node pool

Erik Osterman

Then cordon and drain the nodes in the old one

2019-06-06

Alex Co

nvm, it’s because I did not declare .Values.app.secretName as a global variable

nutellinoit

I encountered an issue with EKS EBS volume provisioning: with small worker groups (fewer than 3 nodes) the PV was created before the pod, and in the wrong AZ.

nutellinoit
10:20:35 AM

is setting volumeBindingMode: WaitForFirstConsumer enough on v1.12 to fix this problem?

Pablo Costa

Yes @nutellinoit, it works. But I would also suggest setting an affinity policy for one AZ only, to ensure that in case of pod restart or eviction the pod is scheduled in the same AZ as the PVC.
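A sketch of such a StorageClass (the provisioner and parameters mirror the stock EKS gp2 class, but treat the class name as an assumption):

```yaml
# Delay volume binding until a pod is scheduled, so the EBS volume is
# provisioned in the same AZ as the node that will run the pod.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: gp2-delayed
provisioner: kubernetes.io/aws-ebs
parameters:
  type: gp2
volumeBindingMode: WaitForFirstConsumer
```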

2019-06-05

what’s you guys’ strategy for memory requests? for example, looking at my historical data, my api pods use about 700Mi of memory on average. I believe it’s better to set the memory request down to around that number, which will leave more excess memory in the pool. I currently have it overallocated (1000Mi per api pod), and it adds up to a lot of memory being reserved but unusable by others that may need it.
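For reference, a requests/limits block sized to the observed usage might look like this (the numbers follow the figures in the message above; the exact headroom is a judgment call):

```yaml
# Request close to the observed ~700Mi average so the scheduler stops
# reserving memory that is never used; keep a higher limit as a
# safety buffer against OOM kills.
resources:
  requests:
    memory: 750Mi
  limits:
    memory: 1000Mi
```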

Erik Osterman

Other considerations to take into account are (a) how much memory volatility there is (perhaps 30% variance is a bit high) and (b) disruptions: how bad is it if the service is evicted to another node?

Erik Osterman

I would suspect the more pods of a given service you run, the more insulated you are from disruptions of pod evictions

Erik Osterman

which means you can get by with a 5-10% limit. make sure you monitor pod restarts.

Erik Osterman

so long as that number stays at or near 0, you’re good.

sarkis

how are you all connecting kubectl into the k8s cluster these days?

Erik Osterman

via teleport

Erik Osterman

teleport supports both ssh and kubectl

Erik Osterman

SAML authentication

Erik Osterman

what they call proxy is ~ a bastion, for a centralized entry point

Erik Osterman
Modern Privileged Access Management | Teleport | Gravitational

Make it easy for users to securely access infrastructure, while meeting the toughest compliance requirements.

sarkis

Interesting ty

thanks @Erik Osterman, i think i can get away with closer to 5-10%. don’t have that much memory volatility looking at my metrics

Alex Co

hi

Alex Co

i’m having an issue with a loop in a helm template

env:
  {{- range .Values.app.configParams }}
  - name: {{ . | title }}
    valueFrom:
      secretKeyRef:
        name: "{{ .Values.app.secretName }}"
        key: {{ . | title }}
  {{- end }}

this is my code in the template to generate the environment var from the values.yaml

but when i run helm lint, it complains like this:

executing “uiza-api-v4/templates/deployment.yaml” at <.Values.app.secretName>: can’t evaluate field Values in type interface {}

i guess the helm template does not allow me to reference the secretName value inside a loop

is there any way to solve this?
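One common fix (a sketch, not from the thread): inside range, the dot is rebound to each list element, so the chart’s root context must be reached via $:

```yaml
env:
  {{- range .Values.app.configParams }}
  - name: {{ . | title }}
    valueFrom:
      secretKeyRef:
        # "$" always points at the template's root context, so
        # $.Values stays reachable inside the range loop.
        name: "{{ $.Values.app.secretName }}"
        key: {{ . | title }}
  {{- end }}
```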

2019-06-04

Igor Rodionov

Interesting tool that checks K8s best practices: https://github.com/reactiveops/polaris

reactiveops/polaris

Validation of best practices in your Kubernetes clusters - reactiveops/polaris

hlesta

Thanks

sarkis

It’s pretty nice. For now it points out basic things, like whether you have set resource limits, but I think it can become more useful the more they add to it.

2019-06-03
