#kubernetes (2019-09)
Archive: https://archive.sweetops.com/kubernetes/
2019-09-04

https://aws.amazon.com/blogs/opensource/introducing-fine-grained-iam-roles-service-accounts/ via @Andy Miguel

Here at AWS we focus first and foremost on customer needs. In the context of access control in Amazon EKS, you asked in issue #23 of our public container roadmap for fine-grained IAM roles in EKS. To address this need, the community came up with a number of open source solutions, such as kube2iam, kiam, […]

wow that’s noiceeee
2019-09-05

Community Note Please vote on this issue by adding a reaction to the original issue to help the community and maintainers prioritize this request Please do not leave "+1" or "me to…

Anyone wanna add their thumbs up to that issue? EKS IAM POD roles TF

It should land in 2.28.0 of the AWS provider, coming out on Thursday
2019-09-10

I’m not well versed in k8s scheduling, so this is destroying my day. LOL. Anyone have any helpful insights?
Warning FailedScheduling 50s (x9 over 8m7s) default-scheduler 0/1 nodes are available: 1 node(s) had volume node affinity conflict.

This is a 1-node cluster on EKS

the scheduler is trying to tell you it has no nodes to work on, I believe. I’m no expert here either, but i’d start by investigating the node itself to see why it’s busto.
what does kubectl get nodes show?

I wish the error was better than “volume node affinity conflict”

volume node affinity makes me think that some pod and some persistent volume out on EBS can’t connect to each other. A PV backed by EBS will limit a pod to a specific AZ. That AZ will match that of the worker that created/hosts the PV. Is an EBS volume for a PV somehow being created in the wrong AZ?
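For reference, an EBS-backed PV carries node affinity that pins any pod using it to the volume’s AZ. A rough sketch of what that looks like on the PV (the volume ID and zone are made up; the zone label shown is the pre-1.17 beta one in use at the time):

```yaml
# Illustrative only; volume ID and zone are made up.
apiVersion: v1
kind: PersistentVolume
metadata:
  name: example-ebs-pv
spec:
  capacity:
    storage: 20Gi
  accessModes:
    - ReadWriteOnce
  awsElasticBlockStore:
    volumeID: vol-0123456789abcdef0
    fsType: ext4
  nodeAffinity:
    required:
      nodeSelectorTerms:
        - matchExpressions:
            - key: failure-domain.beta.kubernetes.io/zone
              operator: In
              values:
                - us-west-2c   # pods using this PV can only land on nodes in this AZ
```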

not sure if applicable: https://github.com/kubernetes/kubernetes/issues/75402
What happened: K8s cluster: 1 master (us-west-2a) not schedulable 1 node (us-west-2c) Node labels Roles: node Labels: beta.kubernetes.io/arch=amd64 beta.kubernetes.io/instance-type=m5.xlarge beta.k…

not sure how you launched your 1 node in EKS, or why this would be an issue thereon

at hashiconf so was moving around, but catching up on the read

get nodes shows the node is healthy and not maxed out

it is just 1 node, though. i guess it is time to grow this a bit

ah, it does have a pvc. let me check the AZ

kubectl describe pv PV_NAME
kubectl describe node NODE_NAME

check its Labels or VolumeId and ProviderID
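Concretely you’re comparing the AZ on the PV against the AZ of the node; if they differ you get exactly that “volume node affinity conflict”. Roughly the fields to look at (values here are made up), e.g. from kubectl get pv PV_NAME -o yaml and kubectl get node NODE_NAME -o yaml:

```yaml
# PV side (excerpt, values made up): zone label and the EBS volume it points at
metadata:
  labels:
    failure-domain.beta.kubernetes.io/zone: us-west-2c
spec:
  awsElasticBlockStore:
    volumeID: aws://us-west-2c/vol-0123456789abcdef0
---
# Node side (excerpt): its zone label and provider ID
metadata:
  labels:
    failure-domain.beta.kubernetes.io/zone: us-west-2a
spec:
  providerID: aws:///us-west-2a/i-0123456789abcdef0
```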

could be that your single node was relaunched in another AZ

nailed it. pv in 2c and node in 2a

my other nodes aren’t joining the cluster anymore

not sure why my eks nodes no longer work on private IPs, but i had this node problem with another cluster too

got the nodes back and that resolved it
2019-09-11
2019-09-12

Anyone done multi region EKS?

No, but I was doing some research

To me it looks like it’s possible with a Latency record combined with ExternalDNS

but that’s possible if this PR is merged

Route53 on AWS supports "latency-based routing" for DNS records. You can have multiple DNS records for the same hostname, having different ALIAS to regional ELBs. This is usually the pref…

I was doing a blog post about it

and decided to cut it short at the DNS/app level.
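Roughly the shape it would take, per cluster/region: each regional cluster’s ExternalDNS publishes the same hostname as a latency record with a distinct set identifier. Purely a sketch, since the region/set-identifier annotations below assume that PR lands as proposed and may not match what actually gets merged:

```yaml
# Hypothetical per-region Service; the aws-region and set-identifier
# annotations are assumptions based on the linked external-dns PR.
apiVersion: v1
kind: Service
metadata:
  name: app
  annotations:
    external-dns.alpha.kubernetes.io/hostname: app.example.com
    external-dns.alpha.kubernetes.io/set-identifier: us-west-2
    external-dns.alpha.kubernetes.io/aws-region: us-west-2
spec:
  type: LoadBalancer
  selector:
    app: app
  ports:
    - port: 80
      targetPort: 8080
```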
2019-09-16

Anyone use https://github.com/kubernetes-sigs/aws-efs-csi-driver yet? I’m having some trouble getting the volume mounted.
CSI Driver for Amazon EFS https://aws.amazon.com/efs/ - kubernetes-sigs/aws-efs-csi-driver
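For comparison, the basic static-provisioning shape with that driver looks roughly like this (filesystem ID and names are placeholders), in case it helps to diff against your manifests:

```yaml
# Rough sketch only; fs-12345678 and the names are placeholders.
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: efs-sc
provisioner: efs.csi.aws.com
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: efs-pv
spec:
  capacity:
    storage: 5Gi              # required field, not enforced by EFS
  volumeMode: Filesystem
  accessModes:
    - ReadWriteMany
  persistentVolumeReclaimPolicy: Retain
  storageClassName: efs-sc
  csi:
    driver: efs.csi.aws.com
    volumeHandle: fs-12345678   # your EFS filesystem ID
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: efs-claim
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: efs-sc
  resources:
    requests:
      storage: 5Gi
```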
2019-09-18

This is neat: https://deliverybot.dev/2019/09/15/deploying-to-kubernetes-with-helm-and-github-actions/
This tutorial will go through the basics of GitHub actions as well as deploying to Kubernetes using a pre-built Helm action


Authors: Kubernetes 1.16 Release Team We’re pleased to announce the delivery of Kubernetes 1.16, our third release of 2019! Kubernetes 1.16 consists of 31 enhancements: 8 enhancements moving to stable, 8 enhancements in beta, and 15 enhancements in alpha. Major Themes Custom resources CRDs are in widespread use as a Kubernetes extensibility mechanism and have been available in beta since the 1.7 release. The 1.16 release marks the graduation of CRDs to general availability (GA).

Kubernetes event reporter for Sentry. Contribute to getsentry/sentry-kubernetes development by creating an account on GitHub.

2019-09-24


hi guys,
quick question….
I have 2 ConfigMaps, A and B, each with a bunch of key/value pairs, but some key names are duplicated between them with different values. When I configure the spec using envFrom in Kubernetes, I reference ConfigMap A and then ConfigMap B.
Which value will persist in my container when the pod is up and running?

I’m guessing the last one wins. But this seems straightforward enough to just test and find out. I don’t actually know off the top of my head

yeah @rms1000watt you are right, I just ran a test and the last one wins….
I had this question because in the project I’m working on, the developers want to use the same environment variable with different values.
I knew it wasn’t possible, but had never tested it… I suggested using a prefix when referencing the ConfigMap…
- prefix: VALUE_
  configMapRef:
    name: configmap
and handling that situation in the code…
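For anyone searching later, a minimal sketch of what we tested (ConfigMap and container names are made up): with two plain envFrom sources, a duplicated key takes the value from the last source listed, and prefixing one of them avoids the collision entirely.

```yaml
# Minimal sketch; ConfigMap and container names are made up.
apiVersion: v1
kind: Pod
metadata:
  name: envfrom-demo
spec:
  containers:
    - name: app
      image: busybox
      command: ["sh", "-c", "env; sleep 3600"]
      envFrom:
        - configMapRef:
            name: configmap-a
        - configMapRef:
            name: configmap-b   # keys defined in both ConfigMaps resolve to B's values
        # alternative from the thread: prefix B's keys to avoid the collision
        # - prefix: VALUE_
        #   configMapRef:
        #     name: configmap-b
```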

Have you ever heard of anything (in EKS) where your containers just sit idle? Like, we had a 10 minute period in time where the containers didn’t do anything (no logs, no http responses). Datadog said the containers were still running and consuming memory.. but just no application activity. When we looked at the container logs.. it’s like nothing happened.. i mean, there was a 10 minute gap in time between logs, but other than that.. it looked normal

Are logs/responses the only metrics you are reviewing? You said Datadog shows memory consumption but does CPU or memory for that same time period show any changes or is it literally flat?

Do you have request metrics (rate, errors, etc.) for this time period?

the datadog metrics are flat, but non-zero.. idling around 1-5% in both cases

request metrics, yea. Like, the ALB gets a ton of 4xx

since the containers aren’t returning HTTP results

Sounds like they aren’t getting any requests either though

right

ALB health graph looks like …?

like, i wouldn’t be surprised if the readiness probe failed.. so the k8s service stopped routing requests.. but then the liveness probe would have failed too and caused a restart.. which we had 0 of
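(for reference, roughly what those two probes look like per container — path, port, and timings here are made up, since we didn’t paste the real spec:)

```yaml
# Rough sketch only; path, port, and timings are made up.
readinessProbe:          # failing this removes the pod from Service endpoints
  httpGet:
    path: /healthz
    port: 8080
  periodSeconds: 10
  failureThreshold: 3
livenessProbe:           # failing this restarts the container
  httpGet:
    path: /healthz
    port: 8080
  periodSeconds: 10
  failureThreshold: 3
```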

lemme look at ALB graph

like.. an order of magnitude more 4xx and 5xx

but health seemed fine, unless I’m not looking at the right spot

You’ll have to look at Target Groups monitoring specifically for the [Un]Healthy Hosts metric

And how are you configuring your ALB(s)? Directly? Indirectly with a Kubernetes controller like alb-ingress-controller?


alb-ingress-controller

Check that controller’s logs

Maybe a reconciliation loop is stuck from bad config and triggering every few minutes?

hmmm

i can look at that

this only happened once out of like.. a few months of having alb-ingress-controller

@Cameron Boulton you’re a god

yea alb-ingress-controller

11 minute gap



this is amazing work

Yea I think the alb-ingress-controller reconcile loop is 10 mins +/- some seconds (imprecise scheduler)

I’ll send my consultation bill to Calm, Attention: Ryan Smith

so whyy in the heck has this never happened before


hahahaha nice


Ha

Well, it depends on what the logs show it’s failing to reconcile; maybe this is the first time someone pushed a config change that it couldn’t handle.

what’s lame is the deployments only change the k8s deployment image

which triggers a deploy

so like.. uhh.. there shouldn’t really have been anything gnarly that killed it

Maybe you’ve hit a bug/are on an older version of alb-ingress-controller?

ah.. guessing older version

docker.io/amazon/aws-alb-ingress-controller:v1.1.2

yeah, a patch behind

I’m skeptical that’s it then

What are the logs telling you that the controller is doing/failing?

only info level logs.. but they look like..
I0924 18:55:33.158007 1 targets.go:95] service-a: Removing targets from arn:aws:elasticloadbalancing
to remove a big chunk of them.. then
I0924 19:06:30.101251 1 targets.go:80] service-a: Adding targets to arn:aws:elasticloadbalancing
adding the big chunk (11min later)

and it was during this timeframe all the activity ceased


How recently was that version deployed?

the pod alb-ingress-controller is 43 days old

Any chance you’ve been experiencing this issue for 43 days or is it newer?

brand spanking new

caused an outage.. alerted everyone

would have known if it happened before

Okay

How old is/are the Ingress(es) that are annotated for this controller?

the ing is 154d old

Actually, that’s creation time not last modified so nevermind

lol whoops

If you describe your Ingress(es) do you see anything under the events?

<none> events

Hmm

stupid question.. should probably just google it.. but do you have a replica > 1 when you run alb-ingress-controller

Maybe keep going back in your Ingress Controller logs until you find something else or the start of the “Removing/Adding targets” loop?

Did you launch the alb-ingress-controller into a new namespace/cluster recently?

There’s just some detaching and attaching of SGs to ENIs
I0924 18:55:35.660049 1 instance_attachment.go:120] service-a: detaching securityGroup sg-redacted from ENI eni-redacted

hmmm, yeah, but different cluster and different sub account

this one has been stable for a few months

The new cluster/sub-account recent?

but this is the 11 min gap.. and the logs

I0924 18:55:59.734800 1 instance_attachment.go:120] service-a: detaching securityGroup sg-redacted from ENI eni-redacted
I0924 19:06:30.101251 1 targets.go:80] prod/app-api: Adding targets to arn:aws:elasticloadbalancing

nothing in between

unless EKS had a fart or something for a little

I0924 19:06:30.101251 is the beginning/first instance of the “Adding targets” loop?

new cluster/sub account. yeah, but not today.. like a week ago

I0924 18:55:33.158007 1 targets.go:95] service-a: Removing targets
I0924 18:55:58.344701 1 targets.go:95] service-a: Removing targets
I0924 19:06:30.101251 1 targets.go:80] service-a: Adding targets

What is the “args” key for the only container in the alb-ingress-controller pod (if any)?

- args:
  - --cluster-name=k8s
  - --ingress-class=alb

Okay, so if you have this controller running anywhere else on this cluster, or on any other cluster in the same account, and that one is also using --cluster-name=k8s, the controllers are going to fight over ALBs/Target Groups.

Any chance that’s possible?


yeah.. 1 cluster in the account

incubator/aws-alb-ingress-controller

helm deployed

And no possibility of it in another namespace or something?

➜ ~ kubectl get pods --all-namespaces | grep alb
default alb-aws-alb-ingress-controller-6b9cfd997f-b99zz 1/1 Running 1 43d

afk for a smidge.. father duties

i really appreciate all your help on this btw. you’re a rare breed and it’s incredibly invaluable

Sure thing. I think that’s all I can spare today though. The behavior you describe sure feels like an isolation failure/reconciliation competition.
2019-09-26

Setting Kubernetes requests and limits effectively has a major impact on application performance, stability, and cost. And yet working with many teams over the past year has shown us that determining the right values for these parameters is hard. For this reason, we have created this short guide and are launching a new product to help teams more accurately set Kubernetes requests and limits for their applications.
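For context, requests and limits are set per container in the pod spec; a minimal sketch (the numbers are placeholders, not recommendations):

```yaml
# Minimal sketch; the numbers are placeholders, not recommendations.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: example-app
spec:
  replicas: 2
  selector:
    matchLabels:
      app: example-app
  template:
    metadata:
      labels:
        app: example-app
    spec:
      containers:
        - name: app
          image: example/app:1.0.0   # hypothetical image
          resources:
            requests:          # what the scheduler reserves for this container
              cpu: 250m
              memory: 256Mi
            limits:            # hard ceiling enforced at runtime
              cpu: 500m
              memory: 512Mi
```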

Wow. I had just begun research to make a tool that does exactly this. Saves me some time, I guess. Thanks for the post.

Do you use helmfile by chance?

Planning on using it more. Why do you ask?

Comprehensive Distribution of Helmfiles. Works with helmfile.d - cloudposse/helmfiles

here’s how we deploy kubecost


Thanks, I’ll take a look
2019-09-27
2019-09-28

has anyone used kubeless before?