#kubernetes (2019-03)

kubernetes

Archive: https://archive.sweetops.com/kubernetes/

2019-03-02

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)
OperatorHub.io | The registry for Kubernetes Operators

The registry for Kubernetes Operators

2019-03-04

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)
rancher/terraform-operator

Use K8s to Run Terraform. Contribute to rancher/terraform-operator development by creating an account on GitHub.

2019-03-05

casey avatar

anyone have an opinion on kops vs eks? Do you think $144 per month per stage is worth it for eks?

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

I guess it just depends on what the aggregate spend will be

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

If a company is spending 30k/mo on amazon, then the 144/mo is a nominal amount :-)

casey avatar

@Erik Osterman (Cloud Posse) Do you have any benefits/drawbacks of using kops or eks in your experience?

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

EKS lacks an automated drain/cordon strategy for rolling updates. As such, we have preferred working with kops. Until such a strategy exists for EKS+terraform, we prefer to stick with kops

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Even eksctl does not support rolling updates yet
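
For anyone comparing the two workflows, a minimal sketch of the kops rolling update being referenced here (cluster name hypothetical):

```sh
# Apply pending instance-group changes (new AMI, instance type, etc.),
# then let kops cordon/drain and replace the nodes one at a time.
kops update cluster --name example.k8s.local --yes
kops rolling-update cluster --name example.k8s.local --yes
```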

daveyu avatar

running into this error with external dns and unlimited staging environments. are there any strategies for dealing with this besides being more restrictive with naming? in this case I’m using the git branch name.

"InvalidChangeBatch: [FATAL problem: DomainLabelTooLong (Domain label is too long) encountered with '14bd77e7-db57-4000-8250-d89cd97cef51-staging-db-myapplication-backend']\n\tstatus code: 400, request id: c7a58d7e-3fa5-11e9-9ea2-799474e23419"
Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Aha, yea need normalization

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

What works well is to use slashes as a delimiter in your branch names

daveyu avatar

in this case, the branch is actually staging-db and the repo is myapplication; the hostname template is ${{CF_BRANCH}}-${{CF_REPO_NAME}}-backend.${{BASE_HOST}}

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

ohhhhhhhhhh

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

yes, in that case you have too much information to pack into the hostname

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

need zones

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

(dots)

daveyu avatar

ah.. so this would work? [14BD77E7-DB57-4000-8250-D89CD97CEF51-staging-db.myapplication-backend.us-west-2.staging.lootcrate.cc?](http://14BD77E7-DB57-4000-8250-D89CD97CEF51-staging-db.myapplication-backend.us-west-2.staging.lootcrate.cc?)

daveyu avatar

thanks for the tip

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

something like that

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

you’ll need to update your ACM certs

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

to support the wildcard for *.[myapplication-backend.us-west-2.staging.lootcrate.cc](http://myapplication-backend.us-west-2.staging.lootcrate.cc)

daveyu avatar

right. still getting by with LE certs for now

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

aha

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

then no reason not to dot-it-up

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

E.g. epic123/my-fix

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Use the first field for the service disambiguation

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

This also allows you to have multiple PRs deploy to the same namespace
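
To make the DomainLabelTooLong constraint above concrete: Route53 rejects any single label (the text between dots) longer than 63 characters, so splitting the name across dots keeps every label short. A rough sketch of the resulting ingress host, reusing the same template variables from earlier (values hypothetical):

```yaml
# Hypothetical host: each piece between dots is its own DNS label, so no single
# label exceeds the 63-character limit behind DomainLabelTooLong.
spec:
  rules:
  - host: ${{CF_BRANCH}}.${{CF_REPO_NAME}}-backend.${{BASE_HOST}}
```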

2019-03-06

casey avatar

has anyone ever tried out openshift origin? Any opinions on it?

Erik Weber avatar
Erik Weber

helped a client once, i wouldn’t do it again but i don’t know how it compares to other managed k8s solutions

Erik Weber avatar
Erik Weber

first impression is that it’s hard to get the initial configuration right if you’re not familiar with it, think we ran the installation playbook like 10 times before it worked successfully

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

I didn’t like that it requires modification of the host OS

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

I wish openshift operated entirely inside of k8s

Erik Weber avatar
Erik Weber

the actual installation is handled by ansible and is pretty straightforward

Erik Weber avatar
Erik Weber

when shit hits the fan you have 3 layers to troubleshoot - plain docker, kubernetes and openshift

ramesh.mimit avatar
ramesh.mimit

is there any way to print all the kubernetes resource objects?

johncblandii avatar
johncblandii
Listing all resources in a namespace

I would like to see all resources in a namespace. Doing kubectl get all will, despite the name, not list things like services and ingresses. If I know the type I can explicitly ask for that
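
A common approach from that thread, as a sketch (namespace hypothetical): enumerate every listable, namespaced resource type and fetch each one.

```sh
kubectl api-resources --verbs=list --namespaced -o name \
  | xargs -n 1 kubectl get --show-kind --ignore-not-found -n my-namespace
```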

johncblandii avatar
johncblandii

So, anyone running Jenkins get docker images to build without using docker.build commands from the plugin?

We need to run TF commands to get state, pass in build args, etc and our Makefile handles this. I’d rather not recreate that in a Jenkinsfile. Running 1 simple: make release ... is much easier.

Thoughts?

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

why not have terraform write to parameter store instead

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

then use chamber to read those settings

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

this will also let you downscope access to those parameters and not need to expose terraform state to jenkins
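
A sketch of the pattern being suggested (service name hypothetical): terraform writes its outputs into SSM Parameter Store (e.g. via aws_ssm_parameter resources), and the Jenkins job reads them back with chamber instead of touching terraform state.

```sh
# chamber exports the /myapp/* SSM parameters as environment variables,
# then runs the existing Makefile target with them in scope.
chamber exec myapp -- make release
```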

johncblandii avatar
johncblandii

that’s the goal. we’re in the conversion of projects and they rely on env vars right now. we’re iterating to get to that place, though.

2019-03-07

btai avatar

does anyone here have a pattern to preload docker images into their cluster before deploys

btai avatar

scenario where you have a weekly cadence (i.e. release every tuesday) and you could preload all the worker nodes with the images the monday night before?
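
One common way to pre-pull is a throwaway DaemonSet that forces every node to pull the upcoming release image ahead of time; a sketch, with image and names hypothetical:

```yaml
# Each node pulls the release image via the initContainer, then a pause container
# keeps the pod alive so the image stays cached until deploy day.
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: prepull-release
spec:
  selector:
    matchLabels:
      app: prepull-release
  template:
    metadata:
      labels:
        app: prepull-release
    spec:
      initContainers:
      - name: pull
        image: registry.example.com/myapp:next-release   # hypothetical release tag
        command: ["sh", "-c", "exit 0"]                   # the pull is the point (assumes the image has a shell)
      containers:
      - name: pause
        image: k8s.gcr.io/pause:3.1
```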

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

How large is the cluster? Is this something exacerbated by the size of your cluster when pulling the images?

2019-03-08

btai avatar

@Erik Osterman (Cloud Posse) we can assume ~50 worker nodes

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Did you see the Docker registry released by Uber this week that uses a p2p/torrent model for image distribution to address the problem of slow pulls?

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)
Introducing Kraken, an Open Source Peer-to-Peer Docker Registryattachment image

Developed by Uber, Kraken is an open source peer-to-peer Docker registry capable of distributing terabytes of data in seconds.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)


With a focus on scalability and availability, Kraken was designed for Docker image management, replication, and distribution in a hybrid cloud environment. With pluggable back-end support, Kraken can also be plugged into existing Docker registry setups as the distribution layer.

btai avatar

ah i heard about it, but i didnt realize they had created a more efficient way of distributing images

btai avatar

ill have to check it out. i guess it works as a layer above your docker registry?

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Yes that’s one mode of operation

btai avatar

@Erik Osterman (Cloud Posse) talked to the folks maintaining kraken, sounds like it will help with the problem i have

btai avatar

they said they were running into the same issue

btai avatar

theyre also looking for help on the helm chart

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

@btai please post back later on how it works out for you

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

maybe something we should add support for in cloudposse/helmfiles

btai avatar

these guys @ uber on the container team have a super fast feedback loop

btai avatar

i asked about the helm chart earlier today for kraken, and they decided to take a first pass at it already

btai avatar
Simple helm chart added by apourchet · Pull Request #117 · uber/kraken

End result of helm install looks like this: $ kubectl get pods NAME READY STATUS RESTARTS AGE demo-6568968d44-g4nzw 1/1 Running 0 …

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

wow, impressive

i5okie avatar

anyone know anyone who could get me an invite to slack.k8s.io?

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

You have filled out the form?

btai avatar

they had an issue where bots were spamming channels with NSFW stuff so they had to shut down slack registration

2019-03-09

i5okie avatar

I don’t know any contributors or anyone to mention as sponsor

2019-03-12

maarten avatar
maarten

Hi, is there anyone who used https://www.telepresence.io/ ? I’d like to gather experiences and/or possible alternatives.

Home - Telepresence

Telepresence: a local development environment for a remote Kubernetes cluster

mgrube avatar

I haven’t used telepresence, but I was also researching tools for local dev. This one looks really interesting: https://github.com/windmilleng/tilt

windmilleng/tilt

Local Kubernetes development with no stress. Contribute to windmilleng/tilt development by creating an account on GitHub.
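
For reference, the typical telepresence (1.x) invocation looks like this; a sketch, deployment name hypothetical:

```sh
# Swap the in-cluster deployment for a local shell: traffic to my-api is proxied
# to the local process, which sees the cluster's DNS and environment variables.
telepresence --swap-deployment my-api --run-shell
```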

rms1000watt avatar
rms1000watt

Anyone use helm plugins? Curious about your experiences with them. Came across this and it was interesting: https://github.com/totango/helm-ssm

totango/helm-ssm

Retrieves and injects secrets from AWS SSM. Contribute to totango/helm-ssm development by creating an account on GitHub.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Love the helm-git and helm-s3 plug-ins

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Also helm-diff
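
For anyone new to those plugins, a sketch of how helm-diff is installed and used (release and chart names hypothetical):

```sh
helm plugin install https://github.com/databus23/helm-diff
# Preview what an upgrade would change before actually applying it.
helm diff upgrade my-release stable/nginx-ingress -f values.yaml
```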


2019-03-14

endofcake avatar
endofcake

Anyone using makisu or kaniko for Docker builds?

endofcake avatar
endofcake
GoogleContainerTools/kaniko

Build Container Images In Kubernetes. Contribute to GoogleContainerTools/kaniko development by creating an account on GitHub.

endofcake avatar
endofcake
uber/makisu

Fast and flexible Docker image building tool, works in unprivileged containerized environments like Mesos and Kubernetes. - uber/makisu
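
For context, a sketch of how kaniko is typically run in-cluster (repo and registry names hypothetical; registry push credentials omitted):

```yaml
# One-off kaniko build pod: builds the Dockerfile from a git context and pushes
# the result, without a Docker daemon or privileged access.
apiVersion: v1
kind: Pod
metadata:
  name: kaniko-build
spec:
  restartPolicy: Never
  containers:
  - name: kaniko
    image: gcr.io/kaniko-project/executor:latest
    args:
    - --dockerfile=Dockerfile
    - --context=git://github.com/example/myapp.git
    - --destination=registry.example.com/myapp:latest
```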

2019-03-18

nutellinoit avatar
nutellinoit

Hello everyone, is it safe to update worker nodes to the latest minor kubernetes ami version? I've got the EKS control plane on version 1.11.8 and worker nodes on 1.11.5, with 5 autoscaling groups attached to the cluster

nutellinoit avatar
nutellinoit

never mind, i updated the ami id; the minor version is still the same, 1.11.5

2019-03-20

btai avatar

anyone here familiar with alb ingress controllers?

Tim Malone avatar
Tim Malone

a tiny bit. is this in regards to your post in #sig-aws on the kubernetes Slack?

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

we have a TF module to provision roles for alb ingress controller https://github.com/cloudposse/terraform-aws-kops-aws-alb-ingress

cloudposse/terraform-aws-kops-aws-alb-ingress

Terraform module to provision an IAM role for aws-alb-ingress-controller running in a Kops cluster, and attach an IAM policy to the role with permissions to manage Application Load Balancers. - clo…

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)
cloudposse/helmfiles

Comprehensive Distribution of Helmfiles. Works with helmfile.d - cloudposse/helmfiles

Igor Rodionov avatar
Igor Rodionov

I do. What is the question?

btai avatar

I followed all the steps from the echoserver tutorial for my app deployment/service/ingress. I can see the alb, target group, and security groups are created. But I’m still getting a 502 and healthcheck failing on the alb

btai avatar

@Igor Rodionov

btai avatar

@Tim Malone yeah it is

nutellinoit avatar
nutellinoit

Is the alb permitted on the workers security group?

Igor Rodionov avatar
Igor Rodionov

Did you make the service (that is the backend for the ingress) type NodePort?

btai avatar

@nutellinoit yeah the alb ingress controller added the security group to the workers. @Igor Rodionov I’m using NodePort

Igor Rodionov avatar
Igor Rodionov

I do not know based on provided information

btai avatar

@Igor Rodionov i can provide my configs

btai avatar
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: galaxy-ingress
  namespace: galaxy
  annotations:
    kubernetes.io/ingress.class: alb
    alb.ingress.kubernetes.io/scheme: internet-facing
    alb.ingress.kubernetes.io/tags: Environment=dev
    alb.ingress.kubernetes.io/healthcheck-path: /health/
spec:
  rules:
    - host: blah.example.com
      http:
        paths:
          - path: /*
            backend:
              serviceName: galaxy
              servicePort: 80
btai avatar
kind: Service
apiVersion: v1
metadata:
  name: galaxy
  namespace: galaxy
  labels:
    service: galaxy
spec:
  ports:
  - protocol: TCP
    port: 80
    targetPort: 8000
  type: NodePort
  selector:
    app: galaxy
btai avatar
kind: Deployment
apiVersion: extensions/v1beta1
metadata:
  name: galaxy
  namespace: galaxy
spec:
  selector:
    matchLabels:
      app: galaxy
  replicas: {{.Values.galaxyReplicaCount | default 3}}
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 100%
      maxUnavailable: 50%
  template:
    metadata:
      labels:
        app: galaxy
    spec:
      containers:
      - name: galaxy
        ports:
        - containerPort: 8000
...
Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

@tamsky @Igor Rodionov

tamsky avatar
tamsky
02:48:34 AM

@tamsky has joined the channel

2019-03-21

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)
Kubernetes Authorization via Open Policy Agentattachment image

In a best-practice Kubernetes cluster every request to the Kubernetes APIServer is authenticated and authorized. Authorization is usually…


2019-03-22

Abhsihek Jaisingh avatar
Abhsihek Jaisingh

Hey there guys! Are you still using helmfile? Had some questions regarding the same

Abhsihek Jaisingh avatar
Abhsihek Jaisingh

Sadly, couldn’t find any official community / channel for helmfile elsewhere

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Yes! We use it extensively

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Have you stumbled across our helmfiles? We update them weekly

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)
cloudposse/helmfiles

Comprehensive Distribution of Helmfiles. Works with helmfile.d - cloudposse/helmfiles
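
For anyone who hasn't used it yet, a minimal helmfile.yaml looks roughly like this (chart and values hypothetical); `helmfile sync` then applies the whole set:

```yaml
repositories:
- name: stable
  url: https://kubernetes-charts.storage.googleapis.com
releases:
- name: external-dns
  namespace: kube-system
  chart: stable/external-dns
  values:
  - provider: aws
    txtOwnerId: my-cluster
```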

mmuehlberger avatar
mmuehlberger

Here’s one in this Slack: #helmfile

Abhsihek Jaisingh avatar
Abhsihek Jaisingh

thanks!

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Welcome @Abhsihek Jaisingh ! You’ve come to the right place as @mmuehlberger points out. Looks like @mumoshu is going to update the README for Helmfile so it will be easier for others to find.

2019-03-24

Igor Rodionov avatar
Igor Rodionov

Check this out @Erik Osterman (Cloud Posse)

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

That’s awesome!

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

@Jeremy G (Cloud Posse)

2019-03-25

btai avatar

can we have an internet facing ALB that routes traffic to k8s worker nodes in private subnets?

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Yes

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

That’s what we are doing

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)
cloudposse/helmfiles

Comprehensive Distribution of Helmfiles. Works with helmfile.d - cloudposse/helmfiles

btai avatar

man i am having lots of trouble getting alb ingress controller to work on my cluster

btai avatar

i know you guys are using kops though (not eks)

btai avatar
Cluster VPC Considerations - Amazon EKS

When you create an Amazon EKS cluster, you specify the Amazon VPC subnets for your cluster to use. Amazon EKS requires subnets in at least two Availability Zones. We recommend a network architecture that uses private subnets for your worker nodes and public subnets for Kubernetes to create internet-facing load balancers within. When you create your cluster, specify all of the subnets that will host resources for your cluster (such as worker nodes and load balancers).

btai avatar
Note

Internet-facing load balancers require a public subnet in your cluster. Worker nodes also require outbound internet access to the Amazon EKS APIs for cluster introspection and node registration at launch time. 
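
For reference, a sketch of the subnet tags the aws-alb-ingress-controller relies on for subnet auto-discovery (cluster name and subnet IDs hypothetical):

```sh
# Public subnets: eligible for internet-facing ALBs.
aws ec2 create-tags --resources subnet-aaa111 subnet-bbb222 \
  --tags Key=kubernetes.io/cluster/my-eks-cluster,Value=shared Key=kubernetes.io/role/elb,Value=1

# Private subnets (where the workers live): eligible for internal ALBs.
aws ec2 create-tags --resources subnet-ccc333 subnet-ddd444 \
  --tags Key=kubernetes.io/cluster/my-eks-cluster,Value=shared Key=kubernetes.io/role/internal-elb,Value=1
```
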
Tim Malone avatar
Tim Malone

how did you set up your VPC? with the cloudformation template referenced in the EKS getting started docs?

Tim Malone avatar
Tim Malone

(although i think that only creates public subnets… so might need some additional set up for private subnets for workers, but ALB ingress should still work once you’ve done that…)

btai avatar

with terraform

Tim Malone avatar
Tim Malone

and it’s got the relevant config mentioned in the eks getting started docs? (subnet tags and such)

btai avatar

yeah it does

btai avatar

i wrote a blurb in #sig-network I can copy/paste here

btai avatar

I’m trying to debug why I cannot access my service on my cluster. I went through these debugging steps and I am stuck here where I cannot curl my service by ip: https://kubernetes.io/docs/tasks/debug-application-cluster/debug-service/#does-the-service-work-by-ip

The service is a NodePort service:

NAME     TYPE       CLUSTER-IP      EXTERNAL-IP   PORT(S)        AGE
galaxy   NodePort   172.20.38.130   <none>        80:32697/TCP   3d

And when I try to curl the service IP from the k8s worker node, I get this:

node$ curl -v 172.20.38.130:80
* Rebuilt URL to: 172.20.38.130:80/
*   Trying 172.20.38.130...
* TCP_NODELAY set
* connect to 172.20.38.130 port 80 failed: Connection refused
* Failed to connect to 172.20.38.130 port 80: Connection refused
* Closing connection 0
curl: (7) Failed to connect to 172.20.38.130 port 80: Connection refused

nslookup from the pod:

pod$ nslookup galaxy.galaxy.svc.cluster.local
Server:        172.20.0.10
Address:    172.20.0.10#53

Name:    galaxy.galaxy.svc.cluster.local
Address: 172.20.38.130

I have this error message that I think is part of the problem in kube-proxy, but I cant find any remediation steps for it:

I0326 00:11:09.536462       7 healthcheck.go:235] Not saving endpoints for unknown healthcheck "galaxy/galaxy"

this returns nothing:

node$ sudo ipvsadm -ln
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
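
The next step in that debugging guide is worth calling out (a sketch): check whether the service actually has any endpoints.

```sh
# If ENDPOINTS shows <none>, the selector (app: galaxy) isn't matching any ready pods,
# and "connection refused" on the ClusterIP is exactly the symptom you'd see.
kubectl -n galaxy get endpoints galaxy
```
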
btai avatar

@Tim Malone

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)
[cert-manager] [kiam] Provision kiam's TLS certs via cert-manager 0.7.0 by Nuru · Pull Request #107 · cloudposse/helmfiles

what Update cert-manager chart to install version 0.7.0 Update kiam to version 3.0 (helm chart version 2.1.0) and use cert-manager to automatically provision its TLS certificates why Current ver…

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

@Max Moon @daveyu

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

shout out to @Jeremy G (Cloud Posse) for whipping this up

2019-03-26

deftunix avatar
deftunix

Hi all, is anyone of you using https://registry.terraform.io/modules/terraform-aws-modules/eks/aws/2.3.0 to provision an EKS cluster? I am experiencing an issue with services of type LoadBalancer: the cluster is not creating the load balancer

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

Let’s move to #terraform-aws-modules. Ping @antonbabenko, he is the creator of the module

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)
‘AWS vs K8s’ is the new ‘Windows vs Linux’

Then…

If, like me, you’re over 40 and work in IT, you’ll probably remember a time when everyone used Windows, and a small but growing proportion of people were wasting their lives compiling Linux in their spare time.

The Windows users would look on, baffled: ‘Why would you do that, when Windows has everything you need, is supported, and is so easy to use?!’

Answers to this question varied. Some liked to tinker, some wanted an OS to be ‘free’, some wanted more control over their software, some wanted a faster system, but all had some niche reason to justify the effort.

Now…

As I stayed up for another late night trying to get some new Kubernetes add-on to work as documented, it struck me that I’m in a similar place to those days. Until a couple of years ago, Kubernetes itself was a messy horror-show for the uninitiated, with regularly-changing APIs, poor documentation if you tried to build yourself, and all the characteristics you might expect of an immature large-scale software project.

That said, Kubernetes’ governance was and is far and away ahead of most open source software projects, but the feeling then was similar to compiling Linux at the turn of the century, or dealing with your laptop crashing 50% of the time you unplugged a USB cable (yes, kids, this used to happen).

It’s not like confusion and rate of change has come down to a low level. Even those motivated to keep up struggle with the rate of change in the ecosystem, and new well-funded technologies pop up every few months that are hard to explain to others.

Take knative for example:

The first rule of the knative club is you cannot explain what knative is — Ivan Pedrazas (@ipedrazas) March 18, 2019

So my AWS-using comrades see me breaking sweat on the regular and ask ‘why would you do that, when AWS has everything you need, is supported and used by everyone, and is so easy to use!?’

AWS is Windows

Like Windows, AWS is a product. It’s not flexible, its behaviour is reliable. The APIs are well defined, the KPIs are good enough to be useful for most ‘real’ workloads. There are limits on all sorts of resources that help define what you can and can’t achieve.

Most people want this, like most people want a car that runs and doesn’t need to be fixed often. Some people like to maintain cars. Some companies retain mechanics to maintain a fleet of cars, because it’s cheaper at scale. In the same way, some orgs get to the point where they could see benefits from building their own data centres again. Think Facebook, or for a full switcher, Dropbox. (We’ll get back to this).

Like Microsoft, (and now Google) AWS embraces and extends, throwing more and more products out there as soon as they become perceived as profitable.

AWS and Kubernetes

Which brings us to AWS’s relationship with Kubernetes. It’s no secret that AWS doesn’t see the point of it. They already have ECS, which is an ugly hulking brute of a product that makes perfect sense if you are heavily bought into AWS in the first place.

But there’s EKS, I hear you say. Yes, there is. I haven’t looked at it lately, but it took a long time to come, and when it did come it was not exactly feature rich. It felt like one cloud framework (AWS) had mated with another (K8s) and a difficult adolescent dropped out. Complaints continue of deployment ‘taking too long’, for example.

Finally taking AWS’s EKS for a spin. While I’m bias for sure, this is not what I expect from a managed Kubernetes offering. It’s been 10 minutes and I’m still waiting for the control plane to come up before I can create nodes through a separate workflow. Kelsey Hightower (@kelseyhightower) January 30, 2019

Like Microsoft and Linux, AWS ignored Kubernetes for as long as it could, and like Microsoft, AWS has been forced to ’embrace and extend’ its rival to protect its market share. I’ve been in meetings with AWS folk who express mystification at why we’d want to use EKS when ECS is available.

EKS and Lock-in

Which brings us to one of the big reasons AWS was able to deliver EKS, thereby ’embracing’ Kubernetes: IAM.

EKS (like all AWS services) is heavily integrated with AWS IAM. As most people know, IAM is the true source of AWS lock-in (and Lambda is the lock-in technology par excellence. You can’t move a server if there are none you can see).

Shifting your identity management is pretty much the last thing any organisation wants to do. Asking your CTO to argue for a fundamental change to a core security system with less than zero benefit to the business in the near term and lots of risk is not a career-enhancing move.

On the other hand, similar arguments were put forward for why Linux would never threaten Windows, and while that’s true on the desktop, the advent of the phone and the Mac has reduced Windows to a secondary player in the consumer computing market. Just look at their failure to force their browsers onto people in the last 10 years.

So it only takes a few unexpected turns in the market for something else to gain momentum and knife the king of the hill. Microsoft know this, and AWS know this. It’s why Microsoft and AWS kept adding new products and features to their offering, and it’s why EKS had to come.

Microsoft eventually turned their oil tanker towards the cloud, going big on open source, and Linux and Docker, and all the things that would drag IT to their services. Oh, and you can use the same AD as your corporate network, and shift your Microsoft Windows licenses to the cloud. And the first one’s free. Microsoft don’t care about the OS anymore. Nobody does, not even RedHat, a business built around supporting a rival OS to Windows. The OS is dead, a commodity providing less and less surplus value.

Will Kubernetes force AWS to move their oil tanker towards Kubernetes? Can we expect to see them embrace Istio and Knative and whichever frameworks come after fully into their offering? (I don’t count how-to guides in their blogs).

AWS’ Competition and Cost

I don’t know. But here’s some more reasons why it might.

Like Microsoft in the heyday of Windows OS, AWS has only one competitor: the private data centre. And like Microsoft’s competitor then (Linux), adoption of that competitor is painful, expensive and risky to adopt.

But what is the OS of that data centre? Before Kubernetes the answer would have been OpenStack. OpenStack is widely regarded as a failure, but in my experience it’s alive (if not kicking) in larger organisations. I’m not an OpenStack expert, but as far as I can tell, it couldn’t cover all the ground required to become a stable product across all the infra it needed to run on and be a commodity product. Again, this is something Microsoft ruled at back in the day: you could run it on ‘any’ PC and ‘any’ hardware and it would ‘just work’. Apple fought this by limiting and controlling the hardware (and making a tidy profit in the process). Linux had such community support that it eventually covered the ground it needed to to be useful enough for its use case.

OpenStack hasn’t got there, and tried to do too much, but it’s embedded enough that it has become the default base of a Kubernetes installation for those organisations that don’t want to tie into a cloud provider.

Interestingly, the reasons AWS put forward for why private clouds fail will be just as true for themselves: enterprises can’t manage elastic demand properly, whether it’s in their own data centre or when they’re paying someone else. Command and control financial governance structures just aren’t changing overnight to suit an agile p…


2019-03-27

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)
Boosting your kubectl productivity

If you work with Kubernetes, then kubectl is probably one of your most-used tools. This article contains a series of tips and tricks to make your usage of kubectl more efficient.

marcel avatar

hello guys

marcel avatar

has anybody already had a problem with metrics using eks?

marcel avatar

the problem is in the hpa addon from kubernetes-incubator

marcel avatar
AWS EKS: cluster doesn't provide client-ca-file · Issue #119 · DirectXMan12/k8s-prometheus-adapter

Hi, I am trying to deploy on AWS EKS which now supports Kubernetes HPA AWS Blog. I successfulyy deployed Prometheus via helm but now when i try to deploy the adapter i get the following error `kube…

2019-03-29

casey avatar

anyone have suggestions on a good rate-limiting gateway for kubernetes? I have been looking at kong, ambassador, and express-gateway, not sure which one to go with or if there is a better choice

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

@casey istio provides rate limiting (envoy)

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)
Enabling Rate Limits

This task shows you how to use Istio to dynamically limit the traffic to a service.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

plus with istio you get so much more than just that

2019-03-30

jtbray avatar

@Erik Osterman (Cloud Posse) I was watching one of Kelsey HT’s keynote speeches, when he whips out his phone & starts using Google Assistant to manage some network services (via Istio)!! I’d love to see a well formed write-up discussing similar implementations &/or pitfalls/concerns.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Kelsey is a master of live demos

jtbray avatar

(has me very excited to control everything by voice)!!
