#kubernetes (2019-01)


Archive: https://archive.sweetops.com/kubernetes/

2019-01-08

warrenvw avatar
warrenvw

Hello. I’m curious if anyone has had performance issues running kubectl against an EKS cluster? kubectl get po takes 5 seconds to complete. FWIW, when I used kops to create the cluster, kubectl get po would return quickly.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

hrmmmmm

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

same size nodes, and same number of pods?

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

(roughly)

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

…are you using IAM authenticator with both?

warrenvw avatar
warrenvw

actually, worker nodes are bigger.

warrenvw avatar
warrenvw

let me confirm IAM authenticator

warrenvw avatar
warrenvw

yep. uses aws-iam-authenticator.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

so kops uses aws-iam-authenticator as well…

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

hrm…

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

@Andriy Knysh (Cloud Posse) have you noticed this?

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

(btw, are you using our terraform modules for EKS?)

warrenvw avatar
warrenvw

sorry, no, at least not yet.

warrenvw avatar
warrenvw

i wanted to find out if this is an EKS thing in general.

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

When I was testing EKS, I didn’t notice any delay

warrenvw avatar
warrenvw

okay, that’s a good data point. thanks.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

(@Andriy Knysh (Cloud Posse) wrote all of our EKS terraform modules)

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)
cloudposse/terraform-aws-eks-cluster

Terraform module for provisioning an EKS cluster. Contribute to cloudposse/terraform-aws-eks-cluster development by creating an account on GitHub.

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

Maybe the Authenticator is slow to connect to AWS

warrenvw avatar
warrenvw

i’ll investigate that. thanks.

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

Also, how do you access the kubeconfig file?

warrenvw avatar
warrenvw

default ~/.kube/config

warrenvw avatar
warrenvw

something must not be configured properly. i’m investigating. i’ll let you know what i discover.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

sometimes using strace helps me figure out what the process is doing

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

enough to dig deeper
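As a concrete sketch of that debugging approach (the cluster name below is a placeholder, and this assumes a Linux workstation):

```shell
# Time the EKS auth plugin on its own -- kubectl shells out to
# aws-iam-authenticator on every invocation, and the token exchange
# is often where the extra seconds go:
time aws-iam-authenticator token -i my-cluster > /dev/null

# Verbose client logging prints each HTTP round trip with its latency:
kubectl get po -v=6

# strace narrows it down to the syscall level, e.g. slow DNS or TLS:
strace -f -e trace=network -o kubectl.trace kubectl get po
```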

2019-01-09

webb avatar
webb
06:41:40 AM

@webb has joined the channel

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)
Effectively Managing Kubernetes with Cost Monitoring

This is the first in a series of posts for managing Kubernetes costs. Article shows how to quickly setup monitoring for basic cost metrics.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

I saw a demo of this yesterday and am super impressed.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

I’ve invited @webb to #kubecost, so if you have any questions ping him.

webb avatar

Thanks for the kind words, @Erik Osterman (Cloud Posse)! We’re ready & available to help with tuning kube infrastructure!

2019-01-10

Igor Rodionov avatar
Igor Rodionov

@Erik Osterman (Cloud Posse) Check this out. New Year is the time to imagine it. https://blog.giantswarm.io/the-state-of-kubernetes-2019/

The State of Kubernetes 2019

Last year I wrote a post entitled A Trip From the Past to the Future of Kubernetes. In it, I talked about the KVM and AWS versions of our stack and the imminent availability of our Azure release. I also…

sarkis avatar

Just got access to the GKE Serverless Add-on beta: https://cloud.google.com/knative/

Knative  |  Google Cloud

Knative is a Google-sponsored industry-wide project to establish the best building blocks for creating modern, Kubernetes-native cloud-based software

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

wow thats slick

sarkis avatar

i’m going to give it a spin… looks interesting!

sarkis avatar

feels a little bit like fargate is to ECS

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

yea, that’s how I interpreted it

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

hah, we’ve all heard of dind (docker in docker)

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)
kubernetes-sigs/kind

Kubernetes IN Docker - local clusters for testing Kubernetes - kubernetes-sigs/kind

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

I think this is pretty cool. We could leverage this for testing with geodesic.

kubernetes-sigs/kind

Kubernetes IN Docker - local clusters for testing Kubernetes - kubernetes-sigs/kind

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)
Add Better Support for Minikube · Issue #204 · cloudposse/geodesic

what Add support for Docker for Mac (DFM) Kubernetes or Minikube why Faster LDE, protyping Testing Helm Charts, Helmfiles howto I got it working very easily. Here's what I did (manually): Enabl…

2019-01-15

frednotet avatar
frednotet

hi everyone! I’m struggling to implement CI/CD with Gitlab… I have several k8s clusters (one per stage: “test”, “dev”, “stg” and “prd”) on different aws accounts (also one per stage). I can’t find help on 2 things: how do I target a specific cluster depending on the branch? and, since we’re working with micro-services: how do I keep a running version of my deployments on each cluster under a generic name that doesn’t depend on branch names, while still allowing auto-deploys with unique names in just one stage? Could someone help me or link me to a good read/video about it? right now I just have my fresh new cluster; I still have to install/configure everything (using helm).

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

hahaha there’s your problem.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)


I’m struggling to implement a CI/CD with Gitlab…

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

We highly recommend #codefresh.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)
Codefresh vs. GitlabCI - Which one should you use

Gitlab is one of the supported GIT providers in Codefresh. In this article, we will look at the advantages of Codefresh compared to the GitlabCI platform.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Codefresh makes it trivial to select the cluster.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

We’ve used different strategies.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

e.g. for release tags

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

1.2.3-prod or 1.2.3-staging

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

for branches, I suggest using a convention.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

e.g.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

a branch called staging/fix-widgets would go to the staging cluster
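That convention is easy to script in a pipeline step. A minimal sketch, assuming a prefix-before-the-slash rule and made-up cluster names:

```shell
# Map a branch name to a target cluster by its prefix.
# The cluster names here are illustrative, not real.
cluster_for_branch() {
  case "$1" in
    prod/*)    echo "prod-cluster" ;;
    staging/*) echo "staging-cluster" ;;
    *)         echo "dev-cluster" ;;
  esac
}

cluster_for_branch "staging/fix-widgets"   # -> staging-cluster
```

A CI job can then feed that value to `kubectl config use-context` (or to whatever cluster selector the platform provides).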

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)


how to keep a running version of my deployments on each cluster with a generic name not depending the branches names;

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Oops. Missed that.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

So, the meta data needs to come from somewhere.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

It can be ENVs in the pipeline configuration.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

It can be branch or tag names. Note, you can use tags for non-production releases.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

It can be manual when you trigger the deployments

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

@frednotet

vitaly.markov avatar
vitaly.markov

cause Codefresh is designed for use with Kubernetes, while Gitlab is more general purpose

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

yes, exactly..

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

built from the ground up with support for docker, compose, swarm, kubernetes, and helm.

frednotet avatar
frednotet

Thanks, I’m reading

frednotet avatar
frednotet

(I just finished my gitlab integration, but indeed the multiple clusters require me to take gitlab EE)


2019-01-17

Ajay Tripathy avatar
Ajay Tripathy
04:28:14 PM

@Ajay Tripathy has joined the channel

2019-01-18

btai avatar

anyone have authentication problems using metrics-server with a kops cluster? Also wondering if anyone’s run into heapster continuously in a CrashLoopBackOff because of OOMKilled

btai avatar

I’ve tried increasing the mem limit on the heapster pod but it doesn’t seem to increase

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

I have seen that. I recall not being able to figure it out. We don’t have it happening any more. This was also on an older 1.9 kops cluster.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

it was driving me mad

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

no matter how much memory I gave it, it had no effect

btai avatar

@Erik Osterman (Cloud Posse) how’d you fix it? I’m on kops 1.11.6

btai avatar

also have you switched to metrics-server

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

all our configurations are here:

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)
cloudposse/helmfiles

Comprehensive Distribution of Helmfiles. Works with helmfile.d - cloudposse/helmfiles

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

i never ended up fixing it on that cluster. it was a throw away.

btai avatar

you use prom instead?

Igor Rodionov avatar
Igor Rodionov

not yet. using legacy - heapster

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

we do

btai avatar

oh wait

btai avatar

and heapster

btai avatar

you dont use heapster-nanny?

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

i don’t know the details

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

@Igor Rodionov would probably know

btai avatar

the OOMKilled is also driving me mad

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

yea, sorry man!

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

i literally spent days on it

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

and didn’t figure it out

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

@Daren I forgot who is doing your prometheus stuff

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

I was having this problem on one of your clusters.

btai avatar

do you guys know why when I try to edit a deployment with kubectl edit the changes I make don’t stick?

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

usually it will emit an error when you exit kubectl edit

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

if it doesn’t, check $?

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

kubectl edit ....; echo $?
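The same pattern works generically for any command whose error scrolls past; `failing_edit` below is a stand-in for a rejected `kubectl edit`:

```shell
# $? holds the exit status of the last command; inside the else branch
# of an `if`, it is the status of the condition that just failed.
failing_edit() { return 1; }   # stand-in for `kubectl edit deployment ...`

if failing_edit; then
  echo "edit applied"
else
  echo "edit failed with status $?"
fi
```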

btai avatar

weird

btai avatar

even if I increase the heapster deployment resource memory limit, it keeps dropping back down to 284Mi

btai avatar

no error btw @Erik Osterman (Cloud Posse)

$ k edit deployment heapster -n kube-system
deployment.extensions/heapster edited
Daren avatar

@Erik Osterman (Cloud Posse) @btai we did have the heapster issue. I believe it was traced to having too many old pods for it to handle

Daren avatar

It tried to load the state of every pod, including dead ones

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

OH!! That makes sense

btai avatar

too many pods?

btai avatar

we do have a lot of pods

btai avatar

were you able to fix it via configuration, Daren?

Daren avatar

We switched to kube-state-metrics

btai avatar

so heapster just flat out stopped working for you guys

Daren avatar

I believe we increased its memory limit to 4GB for a while then had to ditch it

btai avatar

so I’m unable to increase the mem limit for some reason. I’ll update the deployment spec resource limit for memory to 1000Mi and it will continue to stay at 284Mi

btai avatar

ever run into that?

btai avatar

I have ~5000 pods currently in this cluster

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

I had that issue. there’s also some pod auto resizer component

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

i think that was fighting with me
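That resizer is likely the heapster “nanny” (addon-resizer) sidecar, which recomputes heapster’s resources from the node count and reverts manual edits. If so, the knobs to raise are the nanny’s flags rather than the container limits; a sketch (flag values are illustrative):

```shell
# Check whether the heapster deployment carries a nanny sidecar:
kubectl -n kube-system get deployment heapster -o yaml | grep -A5 nanny

# If it does, edit the nanny container's args rather than the limits, e.g.:
#   --memory=300Mi        # base memory given to heapster
#   --extra-memory=4Mi    # extra memory per node in the cluster
kubectl -n kube-system edit deployment heapster
```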

btai avatar

@Erik Osterman (Cloud Posse) you had the issue where you couldn’t increase the mem limit?

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

also, i think daren is talking about exited pods

Daren avatar

Yes

2019-01-22

btai avatar

@Daren since youre using kube-state-metrics, are you unable to use k top anymore

Daren avatar

Honestly, I’ve never used it, and it appears it does not work

Daren avatar


# kubectl top pod
Error from server (NotFound): the server could not find the requested resource (get services http)
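That error is expected with this setup: `kubectl top` talks to the resource-metrics API, which heapster or metrics-server serve; kube-state-metrics exposes Prometheus-format metrics instead, so `top` has no backend. A quick way to check what, if anything, is serving metrics:

```shell
# List registered API services and look for a metrics backend:
kubectl get apiservices | grep metrics

# Confirm whether heapster or metrics-server is actually running:
kubectl get pods -n kube-system | grep -E 'heapster|metrics-server'
```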

btai avatar

i see

2019-01-26

Max Moon avatar
Max Moon

https://github.com/stakater/IngressMonitorController pretty cool add-on for k8s to automatically provision health checks in 3rd party apps, these folks make a lot of great open source projects, worth checking out

stakater/IngressMonitorController

A Kubernetes controller to watch ingresses and create liveness alerts for your apps/microservices in UptimeRobot, StatusCake, Pingdom, etc. – [✩Star] if you're using it! - stakater/IngressMoni…

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Yes, stakater is cool

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

I’ve been following them too

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)
stakater/Forecastle

Forecastle is a control panel which dynamically discovers and provides a launchpad to access applications deployed on Kubernetes – [✩Star] if you’re using it! - stakater/Forecastle

Max Moon avatar
Max Moon

Same here, I’ve been working on a “getting started on kubernetes” blog and was looking for fun new projects to include

Max Moon avatar
Max Moon

I’ve been trying new projects out on a Digital Ocean K8s cluster, it’s multi-master + 2 workers, 100gb storage, and a LB for $30 a month

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

that’s cool

Max Moon avatar
Max Moon

not too shabby for development

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

@Igor Rodionov has been doing that too

Max Moon avatar
Max Moon

It’s honestly a very nice experience; as you know, my setup at work is very smooth already

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

haha, that said, always want to make things smoother

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

I think the ease-of-use of GKE/digital ocean k8s is what we aspire to

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

while at the same time getting the IaC control

Max Moon avatar
Max Moon

Yeah! It’s really nice to have the model to work off of

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Especially for smaller teams that don’t need all the bells and whistles and ultimate control over every little thing

Max Moon avatar
Max Moon

Agreed. My experience with GKE was so nice and smooth, very much so what I base a lot of our tools off of. Their cloud shell is very similar in function to Geodesic, as you’re probably aware

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Yea, I saw that. I haven’t gone deep on it, but it validates the pattern.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Also, #geodesic is always positioned as a superset of other tools, which means the google cloudshell fits well inside

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

But you bring up a good point.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

I think we can improve our messaging by comparing geodesic to the google cloud shell

Max Moon avatar
Max Moon

yeah, at least as an introduction to the idea

Igor Rodionov avatar
Igor Rodionov

@Max Moon @erik means I also use DO for my pet projects

Max Moon avatar
Max Moon

Right! We should chat

2019-01-27

deftunix avatar
deftunix

hi everyone, i am creating a deployment example of nginx on kubernetes using a manifest file and I want to add prometheus monitoring to it

deftunix avatar
deftunix

do you have some github manifest to share?

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Are you using prometheus operator?

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

We use helmfile + prometheus operator to deploy monitoring for nginx-ingress here: https://github.com/cloudposse/helmfiles/blob/master/releases/nginx-ingress.yaml#L156

cloudposse/helmfiles

Comprehensive Distribution of Helmfiles. Works with helmfile.d - cloudposse/helmfiles

deftunix avatar
deftunix

@Erik Osterman (Cloud Posse) yes, I am using prometheus operator pattern

deftunix avatar
deftunix

@Erik Osterman (Cloud Posse) I see. I will analyse your code. I would just like to add monitoring on top of a simple nginx deployment like https://raw.githubusercontent.com/kubernetes/website/master/content/en/examples/controllers/nginx-deployment.yaml

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

I suggest using helm for repeatable deployments rather than raw resources

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

(unless this is just a learning exercise)

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

We install the official nginx-ingress helm chart here: https://github.com/cloudposse/helmfiles/blob/master/releases/nginx-ingress.yaml

cloudposse/helmfiles

Comprehensive Distribution of Helmfiles. Works with helmfile.d - cloudposse/helmfiles

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

(helmfile is a declarative way of deploying helm charts)

deftunix avatar
deftunix

@Erik Osterman (Cloud Posse) yes, I am using helm. I was just trying to arrange an example based on a manifest


2019-01-28

btai avatar

if I want to ssh into my EKS worker node, the default username is ec2user right?

btai avatar

ah its ec2-user

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

@btai sorry - @Andriy Knysh (Cloud Posse) is heads down today on another project for a deadline on friday

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

have you made some headway?

btai avatar

i figured out the ssh username, i just left my question in case someone else searches for it in the future

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

yep! that’s great.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

We’re about to release our public slack archives (hopefully EOW)

btai avatar

and once i got into my worker nodes, i was able to debug my issues

btai avatar
cloudposse/terraform-root-modules

Example Terraform service catalog of “root module” invocations for provisioning reference architectures - cloudposse/terraform-root-modules

btai avatar

I would change those subnet_ids to be private subnets and add a bastion module

btai avatar

i can help you guys do that if you’d like

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

yea, that’s a good suggestion.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

We’d accept a PR for that (just saying…)

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

haha

2019-01-29

btai avatar

@Erik Osterman (Cloud Posse) i remember you mentioning an aws iam auth provider that we should use for kubernetes

btai avatar

which one was it? kube2iam?

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

kiam

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

(avoid kube2iam)

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

we have an example of deploying it in our helmfiles distribution

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

@btai are you unblocked? what was the issue with worker nodes not able to access the cluster?

btai avatar

yes @Andriy Knysh (Cloud Posse) it was a stupid mistake, i had created the eks cluster security group but didn’t attach it to the cluster

btai avatar

@Erik Osterman (Cloud Posse) what was the reasoning to avoid kube2iam?

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

sec

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

thought we had a write up on it.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

can’t find it

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

so kube2iam has a very primitive model. every node runs a daemon.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

when a pod needs an IAM session, it queries the metadata api which is intercepted by iptables rules and routed to the kube2iam daemon

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

that part is fine. that’s how kiam works more or less.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

the problem is if you run a lot of pods, kube2iam will DoS AWS

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

AWS doesn’t like that and blocks you. so the pod gets rescheduled to another node (or re-re-scheduled) until it starts

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

so we have this cascading problem, where one-by-one each node starts triggering rate limits

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

and then it doesn’t back off

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

so now we have 5000 pods requesting IAM credentials in an aggressive manner and basically the whole AWS account is hosed.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

kiam has a client / server model

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

you run the servers on the masters. they are the only ones that need IAM permissions.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

the clients request a session from the servers. the servers cache those sessions.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

this reduces the number of instances hitting the AWS IAM APIs

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

and results in (a) faster assumed roles (b) less risk of tripping rate limits
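For reference, consuming kiam from a workload looks roughly like this: the namespace whitelists which roles its pods may assume, and each pod requests one via an annotation. The names and role pattern below are placeholders:

```shell
cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Namespace
metadata:
  name: my-app
  annotations:
    # regex of role names pods in this namespace may assume
    iam.amazonaws.com/permitted: "my-app-.*"
---
apiVersion: v1
kind: Pod
metadata:
  name: my-app
  namespace: my-app
  annotations:
    # role this pod assumes via the kiam server
    iam.amazonaws.com/role: my-app-role
spec:
  containers:
    - name: app
      image: nginx
EOF
```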

btai avatar

@Erik Osterman (Cloud Posse) awesome that makes sense. thanks for the detailed answer!

deftunix avatar
deftunix

hi everyone, I am deploying a prometheus operator to monitor my application. I probably misunderstood how it works

deftunix avatar
deftunix

basically, for each application or servicemonitor you will have a prometheus instance

deftunix avatar
deftunix

or you can share the cluster one with your application? what is the practice?

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

@deftunix when you deploy prometheus operator, it will scrape all pods including the app, so you don’t need to do anything special about it

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)
cloudposse/helmfiles

Comprehensive Distribution of Helmfiles. Works with helmfile.d - cloudposse/helmfiles

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

it will create these resources

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)
helm/charts

Curated applications for Kubernetes. Contribute to helm/charts development by creating an account on GitHub.

deftunix avatar
deftunix

yes, I have the prometheus operator running and monitoring my base infrastructure

deftunix avatar
deftunix

I deployed it using the coreos helm chart in a monitoring namespace but my application services are not scraped

deftunix avatar
deftunix

it’s scraping just a set of servicemonitors that seem predefined

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

does the app output any logs into stdout?

deftunix avatar
deftunix

yes! I deployed an nginx with the exporter. when I created a servicemonitor and prometheus instance with the operator

deftunix avatar
deftunix

dedicated to the app, it works

deftunix avatar
deftunix

the target appear

deftunix avatar
deftunix
coreos/prometheus-operator

Prometheus Operator creates/configures/manages Prometheus clusters atop Kubernetes - coreos/prometheus-operator

deftunix avatar
deftunix

I am following this

deftunix avatar
deftunix

but I was expecting that after adding the annotation to the services, scraping would be automatic

deftunix avatar
deftunix

and the new target would show up in my target list
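Worth noting: the prometheus-operator ignores the classic `prometheus.io/scrape` annotations entirely. It only scrapes Services selected by a ServiceMonitor, and the Prometheus resource must in turn select that ServiceMonitor by label. A minimal sketch, with placeholder names and labels:

```shell
cat <<'EOF' | kubectl apply -f -
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: nginx
  labels:
    release: my-prometheus   # must match the Prometheus serviceMonitorSelector
spec:
  selector:
    matchLabels:
      app: nginx             # must match the labels on the exporter Service
  endpoints:
    - port: metrics          # named port on the Service exposing the exporter
EOF
```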

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

did you also deploy kube-prometheus?

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)
coreos/prometheus-operator

Prometheus Operator creates/configures/manages Prometheus clusters atop Kubernetes - coreos/prometheus-operator

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)
cloudposse/helmfiles

Comprehensive Distribution of Helmfiles. Works with helmfile.d - cloudposse/helmfiles

deftunix avatar
deftunix

in my “cluster-metrics” prometheus yes

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

so when you install kube-prometheus, it will install a bunch of resources including https://github.com/prometheus/node_exporter

prometheus/node_exporter

Exporter for machine metrics. Contribute to prometheus/node_exporter development by creating an account on GitHub.

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

which will scrape metrics

deftunix avatar
deftunix

yes, from node, apiserver, kubelets, kube-statistics

deftunix avatar
deftunix

my problem is not the cluster-metrics, because those are fully supported by default by the helm chart, but understanding how the operator pattern works

2019-01-30

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)
Make CrashLoopBackoff timing tuneable, or add mechanism to exempt some exits · Issue #57291 · kubernetes/kubernetes

Is this a BUG REPORT or FEATURE REQUEST?: Feature request /kind feature What happened: As part of a development workflow, I intentionally killed a container in a pod with restartPolicy: Always. The…

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Would have assumed the threshold of a CrashLoopBackoff would be configurable

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

I am working on a demo where we deliberately kill pods

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

so I want to show resiliency. oh well.

btai avatar

have you guys checked out https://github.com/windmilleng/tilt

windmilleng/tilt

Local Kubernetes development with no stress. Contribute to windmilleng/tilt development by creating an account on GitHub.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

I have it starred but haven’t gotten deeper than that


Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)
kubernetes-sigs/kubebuilder

Kubebuilder - SDK for building Kubernetes APIs using CRDs - kubernetes-sigs/kubebuilder

2019-01-31

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)
iJanki/kubecron

Utilities to manage kubernetes cronjobs. Run a CronJob manually for test purposes. Suspend/unsuspend a CronJob - iJanki/kubecron

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

@Daren
