#kubernetes

Archive: https://archive.sweetops.com/kubernetes/

2019-09-28

jetstreamin

has anyone used kubeless before?

2019-09-27

2019-09-26

Erik Osterman
A Practical Guide to Setting Kubernetes Requests and Limits

Setting Kubernetes requests and limits effectively has a major impact on application performance, stability, and cost. And yet working with many teams over the past year has shown us that determining the right values for these parameters is hard. For this reason, we have created this short guide and are launching a new product to help teams more accurately set Kubernetes requests and limits for their applications.
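(For reference, these values are set per container in the pod spec; a minimal sketch with purely illustrative numbers, not recommendations:)

resources:
  requests:
    cpu: 100m        # what the scheduler reserves for the container
    memory: 128Mi
  limits:
    cpu: 500m        # CPU is throttled above this
    memory: 256Mi    # the container is OOM-killed above this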

Matthew Cascio

Wow. I had just begun research to make a tool that does exactly this. Saves me some time, I guess. Thanks for the post.

Erik Osterman

Do you use helmfile by chance?

Matthew Cascio

Planning on using it more. Why do you ask?

Erik Osterman
cloudposse/helmfiles

Comprehensive Distribution of Helmfiles. Works with helmfile.d - cloudposse/helmfiles

Erik Osterman

here’s how we deploy kubecost
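(Roughly, a helmfile release for kubecost looks like the sketch below; the repo URL and names here are assumptions, not the exact Cloud Posse helmfile:)

repositories:
  - name: kubecost
    url: https://kubecost.github.io/cost-analyzer/

releases:
  - name: kubecost
    namespace: kubecost
    chart: kubecost/cost-analyzer
    # pin a chart version and set values here as needed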

Erik Osterman

Matthew Cascio

Thanks, I’ll take a look

2019-09-24

Erik Osterman
ruan.arcega

hi guys,

quick question….

i have 2 configMaps, A and B, each with a bunch of key/value pairs, but some key names are duplicated between them with different values. When i configure the spec using envFrom in Kubernetes, i reference configMap A first and then configMap B.

Which value will persist in my container when the pod is up and running?

rms1000watt

I’m guessing the last one wins. But this seems straight forward enough to just test and find out. I don’t actually know off the top of my head

ruan.arcega

yeah @rms1000watt you are right, i just ran a test now, and the last one wins….

i had this question because, in the project i am working on, one of the developers wants to use the same environment variable name with different values.

i knew it wasn't possible, but had never tested it… i suggested using a prefix when referencing the configMap:


- prefix: VALUE_
  configMapRef:
    name: configmap

and treat this situation in the code…
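(A minimal sketch of that envFrom setup, with illustrative ConfigMap names: listing A and B directly means the last-listed ConfigMap wins for any duplicated key, while prefixing B keeps both values available:)

envFrom:
  - configMapRef:
      name: configmap-a      # defines FOO=bar
  - prefix: VALUE_
    configMapRef:
      name: configmap-b      # defines FOO=baz, exposed as VALUE_FOO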

rms1000watt

Have you ever heard of anything (in EKS) where your containers just sit idle? Like, we had a 10 minute period where the containers didn’t do anything (no logs, no HTTP responses). Datadog said the containers were still running and consuming memory.. but just no application activity. When we looked at the container logs.. it’s like nothing happened.. i mean, there was a 10 minute gap between logs, but other than that.. it looked normal

Cameron Boulton

Are logs/responses the only metrics you are reviewing? You said Datadog shows memory consumption but does CPU or memory for that same time period show any changes or is it literally flat?

Cameron Boulton

Do you have request metrics (rate, errors, etc.) for this time period?

rms1000watt

the datadog metrics are flat, but non-zero.. idling around 1-5% in both cases

rms1000watt

request metrics, yea. Like, the ALB gets a ton of 4xx

rms1000watt

since the containers aren’t returning HTTP results

Cameron Boulton

Sounds like they aren’t getting any requests either though

rms1000watt

right

Cameron Boulton

ALB health graph looks like …?

rms1000watt

like, i wouldn’t be surprised if the readiness probe failed.. so the k8s service stopped routing requests.. but then the liveness probe would have died too and caused a restart.. which we had 0 of
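(The distinction being reasoned about: a failing readinessProbe only removes the pod from the Service endpoints, while a failing livenessProbe restarts the container. A minimal sketch, with illustrative paths and timings:)

readinessProbe:
  httpGet:
    path: /healthz    # hypothetical health endpoint
    port: 8080
  periodSeconds: 10
livenessProbe:
  httpGet:
    path: /healthz
    port: 8080
  failureThreshold: 3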

rms1000watt

lemme look at ALB graph

rms1000watt

like.. an order of magnitude more 4xx and 5xx

rms1000watt

but health seemed fine, unless I’m not looking at the right spot

Cameron Boulton

You’ll have to look at Target Groups monitoring specifically for the [Un]Healthy Hosts metric

Cameron Boulton

And how are you configuring your ALB(s)? Directly? Indirectly with a Kubernetes controller like alb-ingress-controller?

rms1000watt

lol, 8 out of 350 healthy hosts

rms1000watt

alb-ingress-controller

Cameron Boulton

Check that controller’s logs

Cameron Boulton

Maybe a reconciliation loop is stuck from bad config and triggering every few minutes?

rms1000watt

hmmm

rms1000watt

i can look at that

rms1000watt

this only happened once out of like.. a few months of having alb-ingress-controller

rms1000watt

@Cameron Boulton you’re a god

rms1000watt

yea alb-ingress-controller

rms1000watt

11 minute gap

Cameron Boulton

rms1000watt

will you marry me?

rms1000watt

this is amazing work

Cameron Boulton

Yea I think the alb-ingress-controller reconciliation loop is 10 mins +/- some seconds (imprecise scheduler)

Cameron Boulton

I’ll send my consultation bill to Calm, Attention: Ryan Smith

rms1000watt

so whyy in the heck has this never happened before

Cameron Boulton

rms1000watt

hahahaha nice

rms1000watt

“payment: 1 big hug”

Cameron Boulton

Ha

Cameron Boulton

Well, depending on what the logs are showing you that it can’t reconcile; maybe this is the first time someone pushed a config change that it couldn’t handle.

rms1000watt

what’s lame is that the deployments only change the k8s deployment image

rms1000watt

which triggers a deploy

rms1000watt

so like.. uhh.. there shouldn’t really have been anything gnarly that killed it

Cameron Boulton

Maybe you’ve hit a bug/are on an older version of alb-ingress-controller?

rms1000watt

ah.. guessing older version

rms1000watt

docker.io/amazon/aws-alb-ingress-controller:v1.1.2

rms1000watt

yeah, a patch behind

Cameron Boulton

I’m skeptical that’s it then

Cameron Boulton

What are the logs telling you that the controller is doing/failing?

rms1000watt

only info level logs.. but they look like..

I0924 18:55:33.158007       1 targets.go:95] service-a: Removing targets from arn:aws:elasticloadbalancing

to remove a big chunk of them.. then

I0924 19:06:30.101251       1 targets.go:80] service-a: Adding targets to arn:aws:elasticloadbalancing

adding the big chunk (11min later)

rms1000watt

and it was during this timeframe all the activity ceased

Cameron Boulton

Right (as you would expect)

Cameron Boulton

How recently was that version deployed?

rms1000watt

the pod alb-ingress-controller is 43 days old

Cameron Boulton

Any chance you’ve been experiencing this issue for 43 days or is it newer?

rms1000watt

brand spanking new

rms1000watt

caused an outage.. alerted everyone

rms1000watt

would have known if it happened before

Cameron Boulton

Okay

Cameron Boulton

How old is/are the Ingress(es) that are annotated for this controller?

rms1000watt

the ing is 154d old

Cameron Boulton

Actually, that’s creation time not last modified so nevermind

rms1000watt

lol whoops

Cameron Boulton

If you describe your Ingress(es) do you see anything under the events?

rms1000watt

<none> events

Cameron Boulton

Hmm

rms1000watt

stupid question.. should probably just google it.. but do you have replicas > 1 when you run alb-ingress-controller

Cameron Boulton

Maybe keep going back in your Ingress Controller logs until you find something else or the start of the “Removing/Adding targets” loop?

Cameron Boulton

Did you launch the alb-ingress-controller into a new namespace/cluster recently?

rms1000watt

There’s just some detaching and attaching of SGs to ENIs

I0924 18:55:35.660049       1 instance_attachment.go:120] service-a: detaching securityGroup sg-redacted from ENI eni-redacted
rms1000watt

hmmm, yeah, but different cluster and different sub account

rms1000watt

this one has been stable for a few months

Cameron Boulton

The new cluster/sub-account recent?

rms1000watt

but this is the 11 min gap.. and the logs

rms1000watt
I0924 18:55:59.734800       1 instance_attachment.go:120] service-a: detaching securityGroup sg-redacted from ENI eni-redacted
I0924 19:06:30.101251       1 targets.go:80] prod/app-api: Adding targets to arn:aws:elasticloadbalancing
rms1000watt

nothing in between

rms1000watt

unless EKS had a fart or something for a little

Cameron Boulton

I0924 19:06:30.101251 is the beginning/first instance of “Adding targets” loop?

rms1000watt

new cluster/sub account. yeah, but not today.. like a week ago

rms1000watt
I0924 18:55:33.158007       1 targets.go:95] service-a: Removing targets 
I0924 18:55:58.344701       1 targets.go:95] service-a: Removing targets 
I0924 19:06:30.101251       1 targets.go:80] service-a: Adding targets 
Cameron Boulton

What is the “args” key for the only container in the alb-ingress-controller pod (if any)?

rms1000watt
      - args:
        - --cluster-name=k8s
        - --ingress-class=alb
Cameron Boulton

Okay, so if you have this controller running anywhere else on this cluster or any other cluster in the same account and that one is also using --cluster-name=k8s the controllers are going to fight over ALBs/Target Groups.
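(In other words, each controller instance needs a --cluster-name unique to its cluster; the names below are illustrative:)

# cluster 1
- --cluster-name=k8s-prod
# cluster 2 (different cluster, same account)
- --cluster-name=k8s-staging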

Cameron Boulton

Any chance that’s possible?

rms1000watt

lemme iterate regions.. but I’m fairly certain 1 cluster in this account

rms1000watt

yeah.. 1 cluster in the account

rms1000watt
incubator/aws-alb-ingress-controller
rms1000watt

helm deployed

Cameron Boulton

And no possibility of it in another namespace or something?

rms1000watt
➜  ~ kubectl get pods --all-namespaces | grep alb
default       alb-aws-alb-ingress-controller-6b9cfd997f-b99zz                   1/1     Running     1          43d
rms1000watt

afk for a smidge.. father duties

rms1000watt

i really appreciate all your help on this btw. you’re a rare breed and it’s incredibly invaluable

Cameron Boulton

Sure thing. I think that’s all I can spare today though. The behavior you describe sure feels like an isolation failure/reconciliation competition.


2019-09-18

Alex Siegman
Deploying to Kubernetes with Helm and GitHub Actions

This tutorial will go through the basics of GitHub actions as well as deploying to Kubernetes using a pre-built Helm action
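(A minimal sketch of such a workflow, assuming a plain helm upgrade step and a kubeconfig stored as a repository secret rather than the tutorial's pre-built Helm action:)

name: deploy
on:
  push:
    branches:
      - master
jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v1
      - name: Write kubeconfig from secret
        run: echo "${{ secrets.KUBECONFIG_DATA }}" | base64 -d > kubeconfig
      - name: Deploy chart
        run: helm upgrade --install my-app ./chart --kubeconfig kubeconfig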

Erik Osterman
Kubernetes 1.16: Custom Resources, Overhauled Metrics, and Volume Extensions

Authors: Kubernetes 1.16 Release Team We’re pleased to announce the delivery of Kubernetes 1.16, our third release of 2019! Kubernetes 1.16 consists of 31 enhancements: 8 enhancements moving to stable, 8 enhancements in beta, and 15 enhancements in alpha. Major Themes Custom resources CRDs are in widespread use as a Kubernetes extensibility mechanism and have been available in beta since the 1.7 release. The 1.16 release marks the graduation of CRDs to general availability (GA).
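(GA here means the apiextensions.k8s.io/v1 API; a minimal CRD under v1 looks roughly like this, with an illustrative group and kind:)

apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: widgets.example.com
spec:
  group: example.com
  names:
    kind: Widget
    plural: widgets
    singular: widget
  scope: Namespaced
  versions:
    - name: v1
      served: true
      storage: true
      schema:
        openAPIV3Schema:      # structural schemas are required in v1
          type: object
          properties:
            spec:
              type: object
              x-kubernetes-preserve-unknown-fields: true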

Erik Osterman
getsentry/sentry-kubernetes

Kubernetes event reporter for Sentry. Contribute to getsentry/sentry-kubernetes development by creating an account on GitHub.

Erik Osterman

2019-09-16

cabrinha

Anyone use https://github.com/kubernetes-sigs/aws-efs-csi-driver yet? I’m having some trouble getting the volume mounted.

kubernetes-sigs/aws-efs-csi-driver

CSI Driver for Amazon EFS https://aws.amazon.com/efs/ - kubernetes-sigs/aws-efs-csi-driver
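(For anyone hitting the same thing, static provisioning with that driver usually looks like the sketch below; the filesystem ID is a placeholder:)

apiVersion: v1
kind: PersistentVolume
metadata:
  name: efs-pv
spec:
  capacity:
    storage: 5Gi               # required by the API, not enforced by EFS
  accessModes:
    - ReadWriteMany
  persistentVolumeReclaimPolicy: Retain
  csi:
    driver: efs.csi.aws.com
    volumeHandle: fs-12345678  # placeholder EFS filesystem ID
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: efs-claim
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: ""         # bind statically to the PV above
  resources:
    requests:
      storage: 5Gi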

2019-09-12

joshmyers

Anyone done multi region EKS?

Nikola Velkovski

No, but I was doing some research

Nikola Velkovski

To me it looks like it’s possible with a Latency record combined with ExternalDNS

Nikola Velkovski

but that’s possible if this PR is merged

Nikola Velkovski
Add support for latency-based routing on AWS · Issue #571 · kubernetes-incubator/external-dns

Route53 on AWS supports "latency-based routing" for DNS records. You can have multiple DNS records for the same hostname, having different ALIAS to regional ELBs. This is usually the pref…

Nikola Velkovski

I was doing a blog post about it

Nikola Velkovski

and decided to cut it short at the DNS/app level.

2019-09-11

2019-09-10

johncblandii

I’m not learned in the area of k8s scheduling so this is destroying my day. LOL. Anyone have any helpful insights?
Warning FailedScheduling 50s (x9 over 8m7s) default-scheduler 0/1 nodes are available: 1 node(s) had volume node affinity conflict.

johncblandii

This is a 1-node cluster on EKS

Alex Siegman

the scheduler is trying to tell you it has no nodes to work on, I believe. I’m no expert here either, but i’d start by investigating the node itself to see why it’s busto.

what does kubectl get nodes show?

Alex Siegman

I wish the error was better than “volume node affinity conflict”

Alex Siegman

volume node affinity makes me think that some pod and some persistent volume out on EBS can’t connect to each other. A PV backed by EBS will limit a pod to a specific AZ. That AZ will match that of the worker that created/hosts the PV. Is an EBS volume for a PV somehow being created in the wrong AZ?
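(Concretely, an EBS-backed PV carries node affinity for its zone, roughly like the sketch below with the 2019-era zone label; the zone value is an example:)

apiVersion: v1
kind: PersistentVolume
metadata:
  name: example-ebs-pv
spec:
  nodeAffinity:
    required:
      nodeSelectorTerms:
        - matchExpressions:
            - key: failure-domain.beta.kubernetes.io/zone
              operator: In
              values:
                - us-west-2c   # pods using this PV can only schedule onto nodes in this zone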

Alex Siegman
Kubernetes is not scaling up when volume node affinity requires a node in specific AZ (AWS) · Issue #75402 · kubernetes/kubernetes
What happened: K8s cluster: 1 master (us-west-2a, not schedulable), 1 node (us-west-2c). Node labels Roles: node Labels: beta.kubernetes.io/arch=amd64, beta.kubernetes.io/instance-type=m5.xlarge, beta.k…
Alex Siegman

not sure how you launched your 1 node in EKS, or why this would be an issue there

johncblandii

at hashiconf so was moving around, but catching up on the read

johncblandii

get nodes shows the node is healthy and not maxed out

johncblandii

it is just 1 node, though. i guess it is time to grow this a bit

johncblandii

ah, it does have a pvc. let me check the AZ

Maycon Santos

kubectl describe pv PV_NAME
kubectl describe node NODE_NAME

Maycon Santos

check its Labels or VolumeId and ProviderID

Maycon Santos

could be a relaunch of your single node on another AZ

johncblandii

nailed it. pv in 2c and node in 2a

johncblandii

my other nodes aren’t joining the cluster anymore

johncblandii

not sure why my eks nodes no longer work on private IPs, but i had this node problem with another cluster too

johncblandii

got the nodes back and that resolved it

2019-09-05

Jonathan Le
Amazon EKS Cluster OIDC Issuer URL · Issue #9995 · terraform-providers/terraform-provider-aws

Community Note Please vote on this issue by adding a reaction to the original issue to help the community and maintainers prioritize this request Please do not leave "+1" or "me to…

Jonathan Le

Anyone wanna add their thumbs up to that issue? EKS IAM POD roles TF

Jonathan Le

It should land in 2.28.0 of the AWS Provider, coming out on Thursday

2019-09-04

Erik Osterman
Introducing Fine-Grained IAM Roles for Service Accounts | Amazon Web Services

Here at AWS we focus first and foremost on customer needs. In the context of access control in Amazon EKS, you asked in issue #23 of our public container roadmap for fine-grained IAM roles in EKS. To address this need, the community came up with a number of open source solutions, such as kube2iam, kiam, […]

wow that’s noiceeee
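(The mechanism boils down to annotating a Kubernetes service account with an IAM role ARN; a minimal sketch, with a placeholder account ID and role name:)

apiVersion: v1
kind: ServiceAccount
metadata:
  name: my-app
  namespace: default
  annotations:
    eks.amazonaws.com/role-arn: arn:aws:iam::111122223333:role/my-app-role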
