#kubernetes (2020-04)
Archive: https://archive.sweetops.com/kubernetes/
2020-04-01
Anyone found a more automated way to roll k8s nodes thru replacement with terraform other than spinning up another asg and cordon/drain thru the old nodes before running tf again to remove them?
That video shows what a deployment pipeline looks like that pushes out a whole kubernetes cluster, installs airflow on it, and then pushes pipelines that run data science jobs to the same cluster.
This..is..amazing.
2020-04-02
Short script to get the latest version of minikube running on ubuntu 19.10: https://gist.github.com/zloeber/528bcce2e4b45465c940a08f10551ccb
2020-04-03
FleetOps -> https://thenextweb.com/growth-quarters/2020/04/03/devops-isnt-enough-your-team-needs-to-embrace-fleetops/ (pretty much another way of saying you should treat everything as if it were part of a PaaS I think).
FleetOps is needed to run a fleet of hundreds (or thousands!) of websites and applications securely across your organization.
and to follow that up, this nifty looking project from Rancher developed by a dude I follow on twitter: https://rancher.com/blog/2020/fleet-management-kubernetes/
Fleet is new open source project from the team at Rancher focused on managing fleets of Kubernetes clusters. Ever since Rancher 1.0, Rancher has provided a central control plane for managing multiple clusters. As pioneers of Kubernetes multi-cluster management, we have seen firsthand how users have consistently increased the number of clusters under management. We are already seeing interest from users who want to manage tens of thousands or even millions of clusters in the near future.
darn. I had at one point tried to build an internal cluster management tool and wanted to call it fleet, because it was essentially managing a fleet of kube clusters (and keeping with the ocean/ship theme). Never got around to building it out completely.
my proof
figures right? well fleet looks open source maybe you can use it anyway
what module do you use for Go log output anyway?
@Zachary Loeber i used https://github.com/sirupsen/logrus but i haven't been doing a ton of Go development in the last few years so I'm prob not the best person to ask
Structured, pluggable logging for Go. Contribute to sirupsen/logrus development by creating an account on GitHub.
Hi all, weird question for ya
EKS 1.14. 1 cluster. 2 namespaces. Opened up SG (for debugging). amazon-k8s-cni:v1.5.7
Deployed svc + deployment in both namespaces. I have a pod from both namespaces on the same ec2 instance. I have a VPN giving me access to the cluster.
I can curl 1 pod in 1 namespace. I can not curl the other pod in the other namespace. All the k8s specs for svc + deployment are the same. They’re both using secondary IPs.
I realize this is hyper specific, but just curious if this sounds familiar to anyone
(I’ve tried to isolate it down to just 2 identical pods in different namespaces)
Guessing it’s related to some hardcore networking issue in the CNI.. I’m able to hit the pods from within the same VPC with the same CIDR block without issue.. but when I leave the CIDR block, it causes trouble
We’ve encountered something that sounds similar when the subnets aren’t correctly configured with route tables or the wrong subnets are passed to EKS. In this case, pod(1) is on node A, pod (2) is on node B; node A and node B are on different subnets.
do you have network policies? that could be different for both namespaces?
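for reference, a (purely hypothetical) namespace-scoped default-deny policy like this sitting in just one of the namespaces would produce exactly that kind of asymmetry; kubectl get netpol --all-namespaces will show whether any exist:
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-ingress   # hypothetical name
  namespace: dev-1             # only present in one of the two namespaces
spec:
  podSelector: {}              # selects every pod in the namespace
  policyTypes:
    - Ingress                  # no ingress rules listed = all inbound traffic denied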
Erik: same node, same subnet
Btai: no NACLs in AWS. But I’m guessing you’re referring to k8s network policies.. uhhh no clue, but I’ll look for it
I want to try and look at CNI/SNAT failures or something.. but I'm trying to field suggestions first before going down that rabbit hole
no network policies in k8s
Unless you are doing unconventional things, I would look for more obvious, user error type problems. Just based on my own experiences, I am usually at fault 99% of the time.
i agree. In this case.. I think i got wrapped around the axle playing with Service Endpoints.. as opposed to the service itself
I was hitting the endpoint defined here..
kubectl -n dev-1 describe svc my-svc-name | grep -i endpoints:
Instead of just making the service fully available where I need it
the technical issue still stands, but the need to solve it diminished.. since I’m just going to hit the service (as I should have all along)
2020-04-04
2020-04-05
2020-04-06
What happens when I type kubectl run? Contribute to jamiehannaford/what-happens-when-k8s development by creating an account on GitHub.
anyone happen to tinker with kpt yet? https://googlecontainertools.github.io/kpt/
Kubernetes configuration package management
2020-04-07
AWS EKS -> ALB Target Group with CNI question…
So on EKS, we have CNI enabled so each pod has an IP address in the VPC subnet. We have an ALB going directly to the Pods' IP addresses. So if we have 50 pods, there are 50 entries in the target group.
Question: Has anyone spent time fine tuning Deregistration Delay in coordination with aws-alb-ingress-controller (for large deployments; many pods)?
EDIT1:
Example:
# set the slow start duration to 5 seconds
alb.ingress.kubernetes.io/target-group-attributes: slow_start.duration_seconds=5
# set the deregistration delay to 30 seconds
alb.ingress.kubernetes.io/target-group-attributes: deregistration_delay.timeout_seconds=30
Hmm, this is suggesting 30s, but dunno if it’s battle tested
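One gotcha: annotations are a flat key/value map, so the two examples above can't be set as separate keys on the same ingress; multiple target group attributes go into a single comma-separated annotation. A sketch with illustrative values:
metadata:
  annotations:
    alb.ingress.kubernetes.io/target-group-attributes: slow_start.duration_seconds=5,deregistration_delay.timeout_seconds=30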
Depends wildly on your app. What's the terminationGracePeriodSeconds set for that app? Think of it like this: pod is alive and ready and serving requests. Pod gets notified to stop work. How long does it serve requests? What happens to in-flight requests? How does that affect the app?
For starting time, it's the same problem in reverse. How deep are your liveness and more importantly your readiness checks? How do you know a pod is ready to serve requests? Does it serve requests in the first 10 minutes with super-high latency cause it's still populating some caches?
More importantly, do your pods get replaced often? If not, you may not even need to stress about this
They get replaced a few times a day (multiple deployments a day). There's 100+ pods in the deployment.
terminationGracePeriodSeconds: 30
i guess i need to do my homework more on when the deregistration delay timer begins
Yeah, sounds like you do need to worry about it
When a pod has to be replaced, the following flow happens:
• SIGTERM is sent to the pod. Apps should get that as “dude, I got a notice to stop work graciously so I will start doing that”: finish in-progress work, try to clean up nicely, and so on. At this time the ALBs should be set so no new connections are sent to these pods
• we wait for terminationGracePeriodSeconds
• SIGKILL is sent to the pod which kills all the containers inside by force
Meanwhile, ALB Ingress Controller runs a loop every say 10s and checks for any new pods or any new LB changes and updates the ALBs accordingly
All of these have to make out and kiss in sync
It helps a lot to draw this out and mock scenarios
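As a rough sketch of the pod-side knobs in that flow (values are illustrative, not recommendations), a preStop sleep is a common way to keep serving while the controller loop and the target group deregistration catch up:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app                            # hypothetical
spec:
  template:
    spec:
      terminationGracePeriodSeconds: 60   # must cover the preStop sleep plus request draining
      containers:
        - name: app
          image: example/app:1.0          # hypothetical; assumes the image has a sleep binary
          lifecycle:
            preStop:
              exec:
                # keep serving while the ALB controller notices the change and the
                # target group finishes deregistering before SIGTERM arrives
                command: ["sleep", "30"]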
re: make out and kiss in sync
https://67.media.tumblr.com/668927139d282654dee7df5b1f715f93/tumblr_inline_o21a2aH9He1szrmgb_500.gif
I’m on the same page with you with your analysis so far. The dark spot in my mind is how Deregistration Delay works after a pod is marked as terminating
(proc is killed by itself naturally by SIGTERM or forced by SIGKILL)
https://docs.aws.amazon.com/elasticloadbalancing/latest/application/load-balancer-target-groups.html helps
Learn how to configure target groups for your Application Load Balancer.
actually, I think the TG stops routing to a terminating/deregistering instance
but the deregistration delay keeps the current connections alive for up to XXseconds
If that’s the case, no big deal
I think the most important thing is for aws-alb-ingress-controller to update ASAP once a pod is marked as terminating
so it can be marked as deregistering in the TG
When a pod is marked as Terminating, we’re in between that SIGTERM and SIGKILL limbo. We still have connections from client-ALB-pod.
Now, after Deregistration Delay the ALB forcefully kills all connections from the client to the pod.
That’s helpful if say your app cannot die gracefully if there are still active connections
Or at least that’s how I understand it based on the above link
yeah
makes sense
terminating grace period length > deregistration delay length
Since there’s a lag waiting for AWS alb ingress controller to tell the TG that a target is deregistering
I think that is correct, yup.
Also, don’t forget about the ALB Ingress Controller loop.
That happens every 10s I think. And if it ran just before your pod switched to Terminating…
Again, drawing and testing all the situations( or the most important ones) helps a lot
yeah, I agree
➜ ~ kubectl get nodes | grep fargate
Interesting seeing Fargate EKS assigning ec2 instances?
fargate-ip-xxx-xxx-xxx-xxx.ec2.internal
I just assume that everything runs on ec2 instances
2020-04-08
What happened: kubectl diff modify my deployements. What you expected to happen: I expect the diff command to not change my deployments ! How to reproduce it (as minimally and precisely as possible…
2020-04-09
Any opinions on kube-aws vs kops?
(for provisioning in AWS)
I created a cluster with kube-aws yesterday and it wasn’t too bad. Now getting recommendations to use kops from someone that used it 2 years ago
I used kops like 2 years ago as well, it seemed ok but if you are going to deploy managed clusters and still use cli scripts to do so eksctl seems the way to go.
I question the longevity of a solution based on such scripts though.
though kops can generate terraform configurations, cool beans - https://github.com/kubernetes/kops/blob/master/docs/terraform.md
Kubernetes Operations (kops) - Production Grade K8s Installation, Upgrades, and Management - kubernetes/kops
I believe the time for kops on AWS has come and gone. It's moving slower and alternatives have caught up. Now with AWS supporting fully managed node pools, EKS is the way to go.
We’ve switched over to deploying EKS for all new engagements.
Up until the managed node groups, I was on the fence as to the right way to go.
I’m curious if there are any workloads which you might recommend self-managed clusters for at this point?
EKS is not FedRamp compliant (yet) and so the recommendation (from AWS) is to run K8s manually on EC2 until compliance is reached. As a result, eksctl is out as an option
Also my Issue with EKS is that they lag super behind the k8s release cycle
(so has kops historically)
Oh okay did not know. I am new to aws after all and never had to deal with unmanaged clusters
which version of k8s is eks running?
1.15
ho that’s very old.
Yep. And it was also only just added in March
This is the issue regarding that: https://github.com/aws/containers-roadmap/issues/487
Tell us about your request Support for Kubernetes 1.16 Changelog Release Announcement Which service(s) is this request for? EKS Tell us about the problem you're trying to solve. What are you tr…
2020-04-10
2020-04-14
Anyone have issues using service.beta.kubernetes.io/aws-load-balancer-type: nlb attached to their service.. for a bunch of services.. then all your security group rules get consumed on the EKS nodes SG?
no, but you’ve piqued my interest.
the SG rules for the NLB specifically are getting added to node-port-level? how were you specifying your SG rules for this NLB?
service.beta.kubernetes.io/aws-load-balancer-type: nlb
service.beta.kubernetes.io/aws-load-balancer-backend-protocol: tcp
service.beta.kubernetes.io/aws-load-balancer-internal: true
service.beta.kubernetes.io/aws-load-balancer-security-groups: sg-00000000
these are the only annotations I use
i want to disable the sg rule addition somehow
this is what its doing to the SG for the EKS nodes
interesting. i had not stumbled upon this yet but i haven’t been using security groups on my (public) NLB. great to know though.
these are private NLBs
hmm, i wonder if that makes a difference
All I can do is try
i suspect not. logically it makes sense to me though that if you’re applying a SG, it’s going to lock down the port on the node that’s frontending the service.
here’s the problem tho, if you don’t specify an SG, it’ll grab one anyways
grab… create one, rather
Oh, the terminology is NodePort
We use the nlb mode by default with that annotation. Haven't been bothered by the rule additions. Why fight it?
we maxed out on inbound security group rules
lol
only 60 rules per SG
each NLB’s nodeport is making 2 entries in there
https://github.com/kubernetes/kubernetes/pull/74692/files#diff-298a224837f7a3edc5b5f37ddb8fa47aR671
this looks kind of promising tho
app-1 LoadBalancer 172.20.179.5 00000000000000-00000000000000.elb.us-west-2.amazonaws.com 3000:32043/TCP 10d
SG rules get added for like.. port 32043
(even though I already have rules that don’t require this..)
I guess the question is.. how can i stop these inbound rule additions on the SG used for the EKS nodes?
EDIT:
Solution.. just use classic LB
2020-04-16
I've seen two ingresses using the same DNS domain but different paths and different nginx-ingress annotations.
Is that supported? Will one ingress be used or will nginx-ingress somehow merge them?
I'm not sure how nginx will resolve paths when they overlap, e.g. one ingress using /v4 and the second /v4/api_xxx.
I don’t believe that will work. one of the two load-balancers that back the ingress would need to be hit first based on how DNS works (unless you have some upstream traffic routing mechanism)
Anyone facing an issue where the service name is not being picked up when deploying a helm chart on Kubernetes, and the service gets created with a random naming scheme??
what is random for you?
Here u go Pierre: https://github.com/helm/charts/issues/21973
charts/stable/grafana/values.yaml Line 115 in efd0f2c service: When using this Grafana Helm Chart for deploying into an EKS Cluster, I did add a service name and somehow the name is not being picked…
This is what I am talking about
but is it indented correctly for you?
Yeah yeah, of course… Within my myvalues.yaml file it's indented right
This is how the service name gets defined; you need to set fullnameOverride
Ok… So the service name is defined with an override. In this case defining the “grafana.fullname” within the service section should fix the issue…
Am I saying that right?
no you can not set the service name on its own.
You can only override the name for all manifests
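e.g. in your values file (a sketch; this assumes the chart uses the standard fullname helper, which stable/grafana does):
# myvalues.yaml
fullnameOverride: svc-grafana   # the Service, Deployment, ConfigMap, etc. all get this name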
I am wondering why you would not just take the default?
Well, the reason why I don't want to take the default is that I am having an issue when setting up the Ingress for the same Grafana service. I see a name mismatch here because I need to define the service name within the Ingress configuration before I deploy the service
Why would you not enable the ingress of the helm chart?
Curated applications for Kubernetes. Contribute to helm/charts development by creating an account on GitHub.
service:
name: svc-grafana
namespace: kube-system
type: ClusterIP
port: 80
targetPort: 3000
annotations: {}
labels: {}
portName: service
can you wrap that in ``` ?
service:
  name: svc-grafana
  namespace: kube-system
  type: ClusterIP
  port: 80
  targetPort: 3000
  annotations: {}
  labels: {}
  portName: service

ingress:
  enabled: true
  annotations:
    kubernetes.io/ingress.class: alb
    alb.ingress.kubernetes.io/scheme: internet-facing
    alb.ingress.kubernetes.io/load-balancer-attributes: 'routing.http2.enabled=true,idle_timeout.timeout_seconds=600,deletion_protection.enabled=true'
    alb.ingress.kubernetes.io/certificate-arn: certname
    alb.ingress.kubernetes.io/listen-ports: '[{"HTTP": 80}, {"HTTPS":443}]'
    alb.ingress.kubernetes.io/actions.ssl-redirect: '{"Type": "redirect", "RedirectConfig": { "Protocol": "HTTPS", "Port": "443", "StatusCode": "HTTP_301"}}'
  name: grafana-ingress
  namespace: kube-system
  service:
    annotations:
      alb.ingress.kubernetes.io/target-type: ip
  labels: {}
  path: /*
  hosts:
    - grafana.company.com
  ## Extra paths to prepend to every host configuration. This is useful when working with annotation based services.
  extraPaths:
    - path:
      backend:
        serviceName: ssl-redirect
        servicePort: use-annotation
    - path: /*
      backend:
        serviceName: svc-grafana
        servicePort: 80
Sorry, here u go
and where are you trying to reference the service name?
If u look at the “name” under the Service configuration and the “serviceName” under the Ingress configuration, the parameters are different, and that's why I want to control the service name so that it can be set under the Ingress.
I am trying to reference the service name under:
service:
name: svc-grafana
you do not need to define the grafana path afaik
After I deploy the configuration, here is the error I am getting in the alb-ingress-controller pod logs:
E0416 19:39:58.180001 1 controller.go:217] kubebuilder/controller "msg"="Reconciler error" "error"="failed to reconcile targetGroups due to failed to load serviceAnnotation due to no object matching key \"kube-system/svc-grafana\" in local store" "controller"="alb-ingress-controller" "request"={"Namespace":"kube-system","Name":"grafana-1587065956"}
You should be able to remove the grafana service declaration
ho….
So, I am guessing I should get rid of the whole section:
extraPaths:
- path:
backend:
serviceName: ssl-redirect
servicePort: use-annotation
- path: /*
backend:
serviceName: svc-grafana
servicePort: 80
no need for it then….
I am not sure why you have added it.
So it might be yes
I was configuring it based on templates I got from github and AWS ALB Ingress sections, Pierre…
Let me remove it and will see if it deploys
sure let me know. Happy to help.
This time, it threw a new error:
I0416 20:00:22.317225 1 tags.go:43] kube-system/grafana-1587067180: modifying tags { ingress.k8s.aws/stack: "kube-system/grafana-1587067180", kubernetes.io/service-name: "grafana-1587067180", kubernetes.io/service-port: "80", ingress.k8s.aws/resource: "kube-system/grafana-1587067180-grafana-1587067180:80", kubernetes.io/cluster/cluster_name: "owned", kubernetes.io/namespace: "kube-system", kubernetes.io/ingress-name: "grafana-1587067180", ingress.k8s.aws/cluster: "cluster_name"} on arn:aws:elasticloadbalancing:AWSSetup
ooh sorry, never mind. That's not an error. However, the original error still persists
hard to say / judge what might be going on ..
E0416 20:00:22.364011 1 controller.go:217] kubebuilder/controller "msg"="Reconciler error" "error"="failed to reconcile targetGroups due to failed to reconcile targetGroup targets due to grafana-1587067180 service is not of type NodePort or LoadBalancer and target-type is instance" "controller"="alb-ingress-controller" "request"={"Namespace":"kube-system","Name":"grafana-1587067180"}
sorry Pierre. the above one is the error I am getting
I will dig in more…
The error says that you cannot use service.type: ClusterIP
so if you would like to use a load balancer you have to change the type
So it might work if you set it to: LoadBalancer
But without defining the ClusterIP, how will the service be created with the proper setup for external or even internal access??
either way, let me try to change the service.type and see what it does
Actually, I am guessing I need to add this
service.loadBalancerIP: IP address to assign to the load balancer (if supported); default: nil
you cannot create an ALB for a ClusterIP Service; if you would like to use a load balancer you will need to switch the service type.
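Roughly, the two combinations the alb-ingress-controller accepts (a sketch; names are hypothetical and this is not the chart's exact values layout):
# Option A: target-type "instance" registers nodes, so the service must expose a NodePort
apiVersion: v1
kind: Service
metadata:
  name: svc-grafana
  namespace: kube-system
spec:
  type: NodePort          # or LoadBalancer
  ports:
    - port: 80
      targetPort: 3000
# Option B: keep type: ClusterIP but set alb.ingress.kubernetes.io/target-type: ip on the
# Ingress, so the ALB registers pod IPs directly (works on EKS since the CNI gives pods
# VPC-routable addresses)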
ooh wow..
ok ok
let me change the service type to LoadBalancer and see what it does then
Son of a gun… It worked…
Boy, u r amazing…!!!
I truly appreciate your help here Pierre!!!!
no worries
2020-04-17
curious if anyone has taken a look at Keptn yet, https://keptn.sh/
Building the fabric for cloud-native lifecycle automation at enterprise scale
Tell us about your request I would like to be able to make changes to configuration values for things like kube-controller. This enables a greater customisation of the cluster to specific, bespoke …
I found that all of my EKS clusters that were originally created on 1.11 are missing the kube-proxy-config configmap (k get cm -n kube-system kube-proxy-config). The configmap is present on clusters created on later versions. The EKS update instructions only patch the image version in kube-proxy. Has anyone else dealt with this? I'm digging into it because I want to edit the metricsBindAddress to allow Prometheus to scrape kube-proxy.
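For reference, this is roughly the shape of that configmap on newer clusters (a sketch from memory; the field comes from KubeProxyConfiguration, but double-check the data key your clusters actually use):
apiVersion: v1
kind: ConfigMap
metadata:
  name: kube-proxy-config
  namespace: kube-system
data:
  config: |
    apiVersion: kubeproxy.config.k8s.io/v1alpha1
    kind: KubeProxyConfiguration
    # the default 127.0.0.1:10249 only allows scraping from the node itself
    metricsBindAddress: 0.0.0.0:10249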
2020-04-19
I’m running into a bit of confusion. Does anything look glaringly out of place here?
For some reason, creating the internal NLB in AWS with the below yaml is using nodePorts. Is this normal? Trying to make spinnaker accessible over transit gateway but having difficulty
you should be using an IP address within the kubernetes network range right?
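For context, a type: LoadBalancer service always allocates NodePorts and the in-tree AWS integration points the NLB at the nodes on those ports, so nodePorts showing up is expected; a sketch (names hypothetical):
apiVersion: v1
kind: Service
metadata:
  name: spin-deck                     # hypothetical
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-type: nlb
    service.beta.kubernetes.io/aws-load-balancer-internal: "true"
spec:
  type: LoadBalancer
  ports:
    - port: 9000
      targetPort: 9000
      # nodePort is auto-assigned from 30000-32767 unless set explicitly;
      # the NLB forwards to the nodes on that port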
2020-04-20
Just pretty waves and a single link to a github project
Hello,
I am facing a dilemma that I am sure other folks must have come across.
So we have an application team deploying their service to our shared EKS cluster. The application is exposed externally via a CLB (this will be revisited in a month or so to replace it with an API gateway etc.). The challenge I am facing is that the DNS and the cert that this service manifest refers to must be created via TF. Looks like there's no way to tell a K8s service to use a particular LB as its load balancer. We have to go the other way round: let the K8s service create the LB and refer to that in TF to find the DNS details. This fails so far too. I am using aws_lb as a datasource and trying to read the zone id of the LB created by the K8s service. How have others solved this, please?
Got totally sidetracked today and ended up creating this little project. Setting up a local lab environment in Linux for CKA studies using terraform and libvirt: https://github.com/zloeber/k8s-lab-terraform-libvirt. It is just a nifty way to spin up 3 local ubuntu servers using terraform but fun nonetheless (well fun for me at least…)
A Kubernetes lab environment using terraform and libvirt - zloeber/k8s-lab-terraform-libvirt
this is cool! Maybe also something for #community-projects
thanks Pierre, I was surprised at how well it works
2020-04-21
The Helm stable/prometheus server dashboard is exposed using the alb-ingress controller. Somehow the prometheus webpage is not loading fully (a few parts of the page are not getting loaded and are throwing 404 errors). Here is the Ingress configuration:
ingress:
  ## If true, Prometheus server Ingress will be created
  ##
  enabled: true

  ## Prometheus server Ingress annotations
  ##
  annotations:
    kubernetes.io/ingress.class: 'alb'
    #kubernetes.io/tls-acme: 'true'
    alb.ingress.kubernetes.io/scheme: internet-facing
    alb.ingress.kubernetes.io/load-balancer-attributes: 'routing.http2.enabled=true,idle_timeout.timeout_seconds=60'
    alb.ingress.kubernetes.io/certificate-arn: certname
    alb.ingress.kubernetes.io/listen-ports: '[{"HTTP": 80}, {"HTTPS":443}]'
    alb.ingress.kubernetes.io/actions.ssl-redirect: '{"Type": "redirect", "RedirectConfig": { "Protocol": "HTTPS", "Port": "443", "StatusCode": "HTTP_301"}}'
  service:
    annotations:
      alb.ingress.kubernetes.io/target-type: ip
  labels: {}
  path: /*
  hosts:
    - prometheus.company.com

  ## Extra paths to prepend to every host configuration. This is useful when working with annotation based services.
  extraPaths:
    - path: /*
      backend:
        serviceName: ssl-redirect
        servicePort: use-annotation
Sorry for the mishap
Anyone gone through this issue before fellas?
what address is prometheus-server or grafana configured as? does it match the url you're using to hit the alb? if you look in inspect and see what the request host and uri are for the assets not being loaded, are you requesting the right resource?
DO u mean the “hosts” section, Joey?
hosts:
- prometheus.company.com
I just checked, and the domain url under the “hosts” section is the one I used and it's the one being loaded
However, there are multiple redirects happening
no, i mean the prometheus server dashboard or whatever service it is you’re hitting when you hit that ingress
i’m just wondering if the things that aren’t loading aren’t loading because you’re getting an incorrect url
Yes, the prometheus server dashboard will be accessible at the url defined in the hosts section and that's how you access it
And thats where the issue is
prometheus server dashboard is not getting loaded fully
if you open inspect mode in chrome or ff or whatever browser you’re using, for the objects that are not being loaded, is the host being requested the same as all the other assets?
Yes, I used the developer tools and verified the domain names and it's all using the proper domain name
I fixed the issue… thanks joey
what was it?
hi, any idea how I can change the language of the Minikube CLI? It probably picks it up from my locale settings (PL), but I'd like to force English.
2020-04-22
What’s your opinion of https://fission.io?
Fission is a framework for serverless functions on Kubernetes. Write short-lived functions in any language, and map them to HTTP requests (or other event triggers). Deploy functions instantly with one command. There are no containers to build, and no Docker registries to manage.
I've received some interesting comments in serverless-forum.slack.com about this
2020-04-23
Guys, I took over some k8s that I need to adjust. I see a bunch of env variables, like 50+ per deployment manifest. I don't work that much with kubernetes, but it looks like overkill to me. What is best practice? Should it be abstracted with config maps, any other recommendation, or is that approach fine?
When is it too much… 5? 10? 100?
It’s hard to say without knowing what those variables all mutate, which I assume is what they do.
If variables are mostly the same across many different deployments, having something that “generates” them on the fly based on some external source could be an abstraction which may or may not suit your taste …
If you prefer explicitness to carry all the way into the deployment manifests however, you probably want the dynamic generation to happen outside of Kubernetes - in some form of manifest generation..
I may be biased, but I really like chamber, regardless of Kubernetes or not.
https://github.com/segmentio/chamber
chamber exec path/to/global/variables path/to/deployment/specific/variables -- my-binary
CLI for managing secrets. Contribute to segmentio/chamber development by creating an account on GitHub.
If you convert the individual env vars into a config map your eyes will thank you when you have to look over the deployment manifests
Plus, you can then look at controllers to auto restart your deployments when/if the configmaps change
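A minimal sketch of what that conversion usually looks like (names hypothetical): the long inline env list collapses into envFrom, and the values move into one or more ConfigMaps:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app                      # hypothetical
spec:
  template:
    spec:
      containers:
        - name: app
          image: example/app:1.0    # hypothetical
          envFrom:
            - configMapRef:
                name: app-globals   # settings shared across deployments
            - configMapRef:
                name: app-settings  # settings specific to this deployment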
That I can agree with for sure .
chamber looks sweet, too bad it is provider specific
also.. you can combine it from several configmaps, right?
And then you could (if several deployments share “environment globals”, so to speak) combine environment variables in a nice way. I think it's described in the docs somewhere, let me see.
Totally
weirdly enough there are cases where you can benefit from multiple config maps
Thinking about something specific other than this “hierarchical” combination of globals/env/service?
Nah, it technically boils down to your succinct statement
I had one where the base deployment was an app that needed some variables based on the cluster it was getting deployed to within the pipeline, but they later wanted to push out specific updates to config elements that were client-specific
so, yeah, basically hierarchical combo
Thanks guys, it was really helpful
@Zachary Loeber Did you use any controller which tracks configmap/secret changes and restarts pods if there is a change?
I did not unfortunately, seems pretty easy to do though.
in lower environments I just didn’t see the need (my pipelines would always push more recent deployments based on either build id or git commit hash tagged containers)
I was able to utilize this one: https://github.com/pusher/wave
Kubernetes configuration tracking controller. Contribute to pusher/wave development by creating an account on GitHub.
maybe there are alternatives
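If the manifests are rendered with Helm, the stock alternative to a controller is the checksum-annotation trick from the Helm docs: hash the ConfigMap into a pod-template annotation so any config change rolls the Deployment (sketch; the template path is hypothetical):
# deployment.yaml (Helm template snippet)
spec:
  template:
    metadata:
      annotations:
        checksum/config: '{{ include (print $.Template.BasePath "/configmap.yaml") . | sha256sum }}'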
This could be promising -> https://www.kubestack.com/
Open source Gitops framework built on Terraform and Kustomize.
2020-04-24
k8s-deployment-book, uses kustomize and kubecutr (a custom kube scaffolding tool by the same author) which may not be everyone’s thing but still worth a once over anyway as it is well thought out -> https://github.com/mr-karan/k8s-deployment-book
Kubernetes - Production Deployments for Developers (Book) - mr-karan/k8s-deployment-book
Hi ppl! What do you use to keep your secrets, well… secret, when it comes to your yaml files stored in a repo? Do you store them elsewhere? Do you use tools like sealed-secrets, helm-secrets or Kamus?
store your secrets in vault and grab them on startup
I'm not familiar with vault. Is that something that runs in k8s?
Vault secures, stores, and tightly controls access to tokens, passwords, certificates, API keys, and other secrets in modern computing. Vault handles leasing, key revocation, key rolling, auditing, and provides secrets as a service through a unified API.
If that runs on k8s, does that mean vault becomes the source of truth and you don't keep a copy anywhere?
I don't want to commit the secrets in git (unless they are properly encrypted)
the way i’ve done it is running vault in a separate cluster/as it’s own piece of infrastructure, and then in my pods i have a vault agent init container that authenticates to vault, grabs secrets, and passes them in files or as environment variables to relevant containers
it’s not a trivial exercise, but it’s clean and can be provider agnostic, as opposed to using secret manager from $cloud_provider
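For comparison, HashiCorp's vault-k8s injector does roughly the same thing declaratively; a sketch of the pod annotations (the role and secret path are hypothetical, and this assumes the injector webhook is installed):
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app                                                    # hypothetical
spec:
  template:
    metadata:
      annotations:
        vault.hashicorp.com/agent-inject: "true"
        vault.hashicorp.com/role: "my-app"                        # Vault kubernetes-auth role
        vault.hashicorp.com/agent-inject-secret-db-creds: "secret/data/my-app/db"
        # the agent sidecar renders the secret to /vault/secrets/db-creds in the pod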
I'm not as concerned about them being secured in k8s as having a secure copy elsewhere…
I'm using helm charts for my apps…
so I have a configmaps.yaml in that chart
have you checked out git-crypt for storing encrypted secrets in git?
Not yet. I looked at sealed-secrets, helm-secrets or Kamus so far. I’ll check that out
thanks
Also using Hashicorp Vault and very happy with it, though pulling secrets into static files or environment variables with Kustomize has gotten a bit complicated.
I’m using AWS SSM Parameter Store ( https://docs.aws.amazon.com/systems-manager/latest/userguide/systems-manager-parameter-store.html) which I like as it gives me a source of truth outside of the cluster.
We’re currently switching over to using helmfile (https://github.com/roboll/helmfile) which has built in support for retrieving values from SSM parameter store (and other systems, including vault) by using vals (https://github.com/variantdev/vals)
AWS Systems Manager Parameter Store provides secure, hierarchical storage for configuration data management and secrets management.
Deploy Kubernetes Helm Charts. Contribute to roboll/helmfile development by creating an account on GitHub.
Helm-like configuration values loader with support for various sources - variantdev/vals
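A sketch of what that looks like in practice (the release and parameter paths are hypothetical); vals resolves the ref+ URIs at render time, so the plaintext never lands in git:
# helmfile.yaml
releases:
  - name: my-app                     # hypothetical release
    chart: ./charts/my-app
    values:
      - db:
          password: ref+awsssm://my-app/prod/db_password
          # the same mechanism works against vault, e.g.:
          # password: ref+vault://secret/data/my-app/prod#/db_password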
nice, hadn’t seen vals.
Thanks
Sure, let me know if you decide to look into any of that stuff and have questions. I’m still pretty new to helmfile myself but happy to help if I can.
As for vault, someone just released some Terraform to deploy it on AWS: https://github.com/jcolemorrison/vault-on-aws
A secure Vault for secrets, tokens, keys, passwords, and more. Automated deployment with Terraform on AWS. Configurable options for security and scalability. Usable with any applications and se…
It is quite complicated though, so the folks at Segment created chamber as a result: https://github.com/segmentio/chamber
CLI for managing secrets. Contribute to segmentio/chamber development by creating an account on GitHub.
When on Azure and without a centralized Hashicorp Vault deployment I leaned on using keyvault with this project to auto-inject secrets using mutating admission webhooks: https://github.com/SparebankenVest/azure-key-vault-to-kubernetes
Azure Key Vault to Kubernetes (akv2k8s for short) makes it simple and secure to use Azure Key Vault secrets, keys and certificates in Kubernetes. - SparebankenVest/azure-key-vault-to-kubernetes
the concept is similar for other operators as well though (hashicorp’s vault operator does the same I believe). That way you are not putting your secrets anywhere at all except in your secret store and pulling them into deployments if the cluster is authorized to do so.
or you can pre-seed cluster secrets as well I suppose. I’ve done this as well but you are then pushing the responsibility for secrets deployment to an upstream pipeline (the one that creates your cluster generally).
How do you authenticate to vault and how to store those credentials?
service accounts
How do you protect/secure it?
I have been working over the last couple of days with the GoDaddy external-secrets implementation. You can integrate AWS Secrets Manager, Parameter Store or even Vault with it.
I am pretty happy so far
Does anyone have any stack graph-esque minikube development flows that they would recommend?
We’re using ArgoCD + smashing the sync button and I’ve looked at how https://garden.io/#iterative figured out smart local redeployments but I’d like to know how others are doing it and if our (certmanager->vault->mysql + elasticsearch) -> the actual app local dev deployment is abnormally complex or slow. (currently takes three syncs and ~8 minutes to go from minikube up to running.)
Garden | Faster Kubernetes development and testing |
2020-04-25
2020-04-26
2020-04-27
anyone have to fine tune Nginx for performance in k8s?
worker_processes 2;
events {
worker_connections 15000;
}
For a 2 CPU container, with 65k file descriptor limit.. thinking this would be safe. I have a generous k8s HPA also, so maybe fine tuning is a frivolous exercise
2020-04-28
2020-04-30
btw: https://github.com/kubernetes/ingress-nginx/blob/master/Changelog.md
Seems like ingress-nginx got a couple of bigger updates in the last days.
NGINX Ingress Controller for Kubernetes. Contribute to kubernetes/ingress-nginx development by creating an account on GitHub.
@Pierre Humberdroz thanks for sharing
that is a long list
anything jump out at you? mostly looks like bug fixes to me. no enhancements stand out that I want to try
Helm chart stable/nginx-ingress is now maintained in the ingress-nginx repository
According to https://github.com/kubernetes/ingress-nginx/issues/5161 there is documentation in the works for migrating from charts/stable to the new location.
Rename chart to ingress-nginx add common labels: app.kubernetes.io/name: ingress-nginx app.kubernetes.io/part-of: ingress-nginx default backend should be disabled by default webhook should be enabl…
yea the stable/incubator helm repos are no longer supported
Wow, I hadn’t seen that yet. That’s a big change.
Is there a method to promote objects across api versions? E.g. deployments have moved from extensions/v1beta1 to apps/v1. Can they be updated in place or do they need to be destroyed and recreated?