#kubernetes (2020-12)
Archive: https://archive.sweetops.com/kubernetes/
2020-12-01
Spot support in Managed Node Groups for EKS: https://aws.amazon.com/blogs/containers/amazon-eks-now-supports-provisioning-and-managing-ec2-spot-instances-in-managed-node-groups/
^ I know this was discussed here a couple of times with people saying it was a blocker
I’ve always ignored managed node groups because of the lack of spot support… does anyone use managed node groups with custom CNI configuration? Do managed nodes come with SSM out of the box?
2020-12-02
https://get-kbld.io/ -> this and all the carvel tooling may be interesting to keep an eye on.
Will you be on #office-hours today?
Sorry I so wanted to be
Almost done with a spurt of work that should make me feel like things are mostly over a hump
I promise to derail and otherwise chaosmonkey up your office hours again soon
2020-12-04
Stupid question here… When an image is pulled by the kubelet, is this done with the default service account, or with whatever service account is specified on the pod (default when nothing is specified on the pod)? I’m wondering if all service accounts need imagePullSecrets set, or just the default service accounts.
Stupid answer here… https://kubernetes.io/docs/tasks/configure-pod-container/configure-service-account/#add-image-pull-secret-to-service-account - the docs suggest that pods are patched with imagePullSecrets from the service account the pod is using, but it isn’t explicit, and it led me to initially believe that patching the default SA was all that was needed… That is not true.
After some testing, I have found that all service accounts need patching with imagePullSecrets for them to be applied to the pods using those service accounts.
This patching is done by an admission controller and the code responsible for patching is here: https://github.com/kubernetes/kubernetes/blob/c6f7fbcfbc69120934ed87c5ac701bd1890347a3/plugin/pkg/admission/serviceaccount/admission.go#L193-L198
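For reference, the field those service accounts end up with is just imagePullSecrets on the ServiceAccount object itself; a minimal sketch (the my-app account and regcred secret names are made up):
apiVersion: v1
kind: ServiceAccount
metadata:
  name: my-app          # hypothetical service account
  namespace: default
imagePullSecrets:
- name: regcred         # assumes this docker-registry secret already exists in the namespace
Any pod that sets serviceAccountName: my-app then gets that secret injected by the admission controller linked above.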
Does anyone else feel that kube-system gets overused sometimes? What are people’s strategies for installing system-related tools like cluster-autoscaler, kube-downscaler, and other operators / controllers? Single namespace? Namespace per controller? Something else?
I tend to default to using a new namespace, it makes it easier to keep things organized.
For sure. Try to keep everything out of it.
yeah, I put nothing in there that doesn’t need to be in there
same as default
I keep simple cluster operations tooling in kube-system. Exceptions are operators and controllers with a great deal of Kubernetes objects. Each to its own.
2020-12-08
I am working on the upgrade implementation for a legacy application we host in Kubernetes. Part of the upgrade procedure is going to require manipulation of k8s resources (configmaps, potentially ingress resources) at strategic points during the upgrade lifecycle. I am planning on using helm hooks running Jobs to do this; my question/concern is: is it bad practice to have a pod manipulating k8s resources? If it’s not, what is the best way to accomplish it - just have kubectl available within the pod?
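For what it’s worth, the hook approach described above would be shaped roughly like the sketch below, assuming the Job runs under a ServiceAccount bound to a Role that only allows patching the relevant resources (the names, image, and patched ConfigMap are all made up):
apiVersion: batch/v1
kind: Job
metadata:
  name: "{{ .Release.Name }}-pre-upgrade-config"
  annotations:
    "helm.sh/hook": pre-upgrade
    "helm.sh/hook-weight": "0"
    "helm.sh/hook-delete-policy": hook-succeeded
spec:
  template:
    spec:
      serviceAccountName: "{{ .Release.Name }}-upgrader"  # bound to a Role allowing patch on configmaps
      restartPolicy: Never
      containers:
      - name: patch-config
        image: bitnami/kubectl:1.17   # any image with kubectl on the PATH works
        command:
        - kubectl
        - patch
        - configmap
        - legacy-app-config
        - -p
        - '{"data":{"maintenance":"true"}}'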
We currently build our product on AWS and we’re looking to also support GCP. We use ECS backed by EC2, and using GCP means moving to K8s. I know it’ll be a lot of work (a lot) so we first want to get a 10,000-ft view by mapping all our current AWS concepts to their GCP/K8s equivalents.
Anybody have pointers to useful guides on this conversion?
IMO, move to k8s first while staying on AWS. Kubernetes is a fantastic abstraction layer.
Once you are running on kubernetes, the location of where that kubernetes is running doesn’t mean as much
What do you mean by “support GCP”?
It’s just like how Databricks deploys into AWS and Azure. For enterprise clients we deploy into their VPC, and currently only AWS is supported. If we use k8s then that seems like a good way to support all major cloud providers.
I like the idea to move to k8s first while staying on AWS. Good call
I’m setting up Istio on EKS. Wanted to ask what’s the best strategy to have an encrypted TLS connection between a client outside the cluster and a pod (ingress). I’m managing certificates in AWS ACM and it seems that the ELB has support for that using annotations, but to my understanding that will lead to unencrypted traffic between the ELB and the Istio gateway. Any opinion about that would be extremely helpful.
Not sure which parts of your statement were actual questions, so I’ll answer two of them: ELB supports ACM certificates, yes.
And ALB target groups can support both HTTP and HTTPS. If you truly require HTTPS, then you can be comfortable (or not? ) knowing no certificate validation is done, so you can install a self-signed cert any way you please.
I use cert-manager and letsencrypt with the istio-ingressgateway, and NLBs with no TLS offload so all traffic is passed through to the istio-ingressgateway. NodePort services for routing to the VirtualService(s) - istio-ingressgateway is supposed to handle normal Ingress objects but I haven’t managed to make that work yet.
cert-manager and letsencrypt use Route53 to auth the domain for issuing certificates.
letsencrypt is pretty limited in terms of issuing rates for domains (I use wildcard certs); zerossl.com is a bit more user-friendly in terms of rate limits (there are none), but if you want wildcard certs, it’s time to start paying.
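As a point of reference, the Route53 DNS01 setup mentioned above is shaped roughly like this cert-manager ClusterIssuer (email, region, and zone are placeholders; IAM/IRSA credentials for Route53 are omitted):
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: letsencrypt-prod
spec:
  acme:
    server: https://acme-v02.api.letsencrypt.org/directory
    email: you@example.com
    privateKeySecretRef:
      name: letsencrypt-prod-account-key
    solvers:
    - dns01:
        route53:
          region: us-east-1
      selector:
        dnsZones:
        - example.com
A Certificate resource in the istio-ingressgateway namespace then requests the wildcard cert from this issuer.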
Thanks for the comments. If i’m currently using an nlb with tls termination - the traffic between the nlb and istio ingress is unencrypted, but then gets encrypted in the ingress using mTLS to the pods. Am I correct?
we use a classic elb with an istio-ingressgateway and set the backend protocol to https
we did try to do a tls passthrough but couldn’t get that to work, so in our current setup tls is still terminated at the elb, but traffic from the elb to the hosts is still encrypted
@tomv how’s the traffic from the elb to the hosts encrypted? with what key does that get encrypted?
we have a tls cert in the ingressgateway namespace signed by the k8s ca
the elbs don’t do cert verification to the hosts so it’s just a regular self signed cert
so between elb and ingress gateway the traffic is plaintext?
no
https
ohhh got it. nice!
do you think that’s possible also for tcp traffic? (tls), but not https?
from what i’ve seen it seems possible from an nlb, but thats only having read the docs. havent tried it myself.
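For reference, the classic-ELB setup described in this thread (ACM cert at the ELB, traffic re-encrypted to the gateway’s self-signed cert) maps to Service annotations roughly like these; the cert ARN, ports, and selector below are placeholders:
apiVersion: v1
kind: Service
metadata:
  name: istio-ingressgateway
  namespace: istio-system
  annotations:
    # TLS terminates at the ELB using an ACM certificate...
    service.beta.kubernetes.io/aws-load-balancer-ssl-cert: "arn:aws:acm:us-east-1:111122223333:certificate/placeholder"
    service.beta.kubernetes.io/aws-load-balancer-ssl-ports: "443"
    # ...and the ELB re-encrypts to the backend, which presents a self-signed cert (no validation is done)
    service.beta.kubernetes.io/aws-load-balancer-backend-protocol: "https"
spec:
  type: LoadBalancer
  selector:
    istio: ingressgateway
  ports:
  - name: https
    port: 443
    targetPort: 8443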
2020-12-09
2020-12-10
2020-12-12
Friday afternoon team project running @Quake on @kubernetesio turned into Saturday morning getting it running with @Linkerd . Thanks @CapitalOneTech for https://github.com/criticalstack/quake-kube https://pbs.twimg.com/media/EnXNm9yW8AM_9BB.jpg
2020-12-14
I am tightening up permissions on my EKS cluster (1.17) for my devs to manage k8s, both in a read-only role as well as more of an admin role, but I’m having difficulty finding the right policies to allow k8s management. Can anybody point me in the right direction to help me write the policies I need for my users to talk to k8s? Thanks!
AWS IAM Permissions are granting an IAM user rights to talk to an EKS Cluster (list, describe, get token, and things like that). It does authentication basically.
Now authenticated to the cluster, you are a k8s user. To call any k8s APIs (list deployments or whatever) you are using the k8s RBAC (Role Based Access Control). It’s the authorization part.
https://aws.amazon.com/premiumsupport/knowledge-center/eks-iam-permissions-namespaces/ explains it a bit
Thanks, @Vlad Ionescu (he/him)! The config I inherited adds about 5 admins to the aws-auth configmap, which are accessed via roles in the AWS subaccounts. I’m breaking up the roles on the path to least-privilege access, so if I read this page right, I can create two new RBAC groups (with the supplied YAML) and add my users to the groups defined by them, instead of to system:masters.
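Putting the pieces from that page together, the shape is roughly: an RBAC group defined in the cluster, plus an aws-auth mapRoles entry that puts the assumed IAM role into that group. The role ARN, group name, and rules below are made-up examples; keep the existing node-role entry in mapRoles when editing it.
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: eks-read-only
rules:
- apiGroups: ["", "apps", "batch"]
  resources: ["*"]
  verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: eks-read-only
subjects:
- kind: Group
  name: eks-read-only-group
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  name: eks-read-only
  apiGroup: rbac.authorization.k8s.io
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: aws-auth
  namespace: kube-system
data:
  mapRoles: |
    - rolearn: arn:aws:iam::111122223333:role/eks-node-role   # existing node mapping stays as-is
      username: system:node:{{EC2PrivateDNSName}}
      groups:
      - system:bootstrappers
      - system:nodes
    - rolearn: arn:aws:iam::111122223333:role/eks-read-only   # hypothetical dev role
      username: eks-read-only
      groups:
      - eks-read-only-group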
2020-12-16
We have some AWS Lambda functions that I’d like to migrate to run in our k8s clusters (EKS). Has anyone done this and can offer toolchain recommendations? There seem to be a lot of options: OpenFaaS, Fission, Kubeless, …
hey @Alex Jurkiewicz, this looks interesting… I haven’t done this before, but would like to follow up if you run into any issue
@Alex Jurkiewicz I’m a fan of OpenFaas. You get most of the benefits of serverless and containers. You can run functions directly (no k8s), and there’s a marketplace of functions. The most impressive demo uses ML to auto-flag ranty Github Issues as bugs with very little code: https://github.com/openfaas/workshop/blob/master/lab5.md Please post what you find out!
How do you run without k8s? Where does the compute run then?
My context is that I’m migrating an existing lambda app to a k8s environment
OpenFaas has multiple backends. K8s is the main one, there’s also Docker Swarm. There’s also a “run directly” option https://docs.openfaas.com/deployment/#faasd-serverless-for-everyone-else | faasd is OpenFaaS, reimagined without the complexity and cost of Kubernetes. It runs well on a single host with very modest requirements, and can be deployed in a few moments. Under the hood it uses containerd and Container Networking Interface (CNI)
The above link also mentions using raw AWS ECS for the backend, but that doesn’t work yet. OpenFaas is about the serverless end of things, which might or might not be Kubernetes
2020-12-18
anyone have any kubernetes feature gates that you’ve turned on that you love and we all should know about?
I am deploying two apps, server + client. The client needs to be configured with a URL that points to the server; is there a way I can reference the svc?
The server’s ports can be reached from $RELEASE-dgraph-alpha-$IDX.$RELEASE-dgraph-alpha-headless.$NAMESPACE.svc
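If the client only needs a stable URL for the service as a whole (rather than a specific replica), the regular Service DNS name is usually enough; a sketch of wiring it into the client’s env (the port and variable name are assumptions):
env:
- name: SERVER_URL
  # <service>.<namespace>.svc.cluster.local resolves via cluster DNS
  value: "http://{{ .Release.Name }}-dgraph-alpha.{{ .Release.Namespace }}.svc.cluster.local:8080"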
2020-12-21
I’m not sure how to get this to work:
{{- if .Values.script.enabled -}}
apiVersion: v1
kind: ConfigMap
metadata:
name: {{ include "dgraph-lambda.fullname" . }}-config
labels:
{{- include "dgraph-lambda.labels" . | nindent 4 }}
data:
script.js: {{ .Values.script.script }}
{{- end -}}
This get me:
[ERROR] templates/config.yaml: unable to parse YAML: error converting YAML to JSON: yaml: line 13: mapping values are not allowed in this context
You’ll need to show the rendered output – it’s not clear what line 13 contains
That is all it shows.
Is there some way to render the yaml output so you can see it?
I wish.
Then you might need to perform a binary search to identify which part of your template has problems. I’m guessing it’s either
labels:
{{- include "dgraph-lambda.labels" . | nindent 4 }}
or
script.js: {{ .Values.script.script }}
if there is some interim rendering issue, such as at the JSON stage, helm communicates results through a “friendly” stack trace and shows you line numbers in the JSON that it will not actually show you.
if it’s the former, the indent number might be insufficient. If it’s the latter, it might be rendering a long string without appropriate escaping
You could try something like
script.js: |
{{ .Values.script.script }}
perhaps…
I found ultimately it is just that I needed to quote the output from script, so {{ .Values.script.script | quote }}
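Putting that fix in context, the data section ends up looking like this; for multi-line scripts, a block scalar with indent is a common alternative (both are sketches of the same values key):
data:
  script.js: {{ .Values.script.script | quote }}
or, for multi-line content:
data:
  script.js: |
{{ .Values.script.script | indent 4 }}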
2020-12-22
Hey EKS folks — I’m finding a pretty consistent worker node downtime pattern: I have a worker node group of 4 and, after an undefined number of days, the oldest worker node will go into an Unknown state. The node gets the node.kubernetes.io/unreachable:NoSchedule and node.kubernetes.io/unreachable:NoExecute taints, the kubelet stops posting node status updates to EKS, and I can no longer access that particular node.
Has anyone seen this pattern? I’m just starting to look into it and figured it’d be quick to post about it here before I jump all the way down the rabbit hole.
My first thought would be disk space build up, but usually that stands out quickly and I haven’t seen it yet.
What is your AMI Version?
Or, do you roll your own AMI? If so, check the version of containerd that’s installed. If it’s 1.4.0, or if you use the AWS EKS AMI and its version is v20201112 - you have buggy nodes.
Yeah, I’m on 1.18.9-20201112
Will read up on the new version. Thanks for the pointer.
What happened: Since upgrading to AMI 1.16.15-20201112 (from 1.16.13-20201007), we see a lot of Pods get stuck in Terminating state. We have noticed that all of these Pods have readiness/liveness p…
Good stuff — Thanks @Tim Birkett!
No problem! Only knew because it sounded like what I was dealing with a few weeks ago
@tim.j.birkett’s suggestion seems like the most plausible. However @btai spent weeks chasing after a similar problem that turned out to be a bug in Sysdig that was causing kernel panics (if I recall correctly)
yeah, we had a special case of sysdig causing kernel panics, but the logs weren’t getting shipped fast enough via our logging agent (which is what I was initially relying on - understandable in hindsight, given the kernel panic). The way I realized it was kernel panicking from sysdig was by looking at the EC2 system logs in the AWS console.
Cool good to know. Just did the rolling update of the EKS AMIs, so I’ll let it bake for a bit and will check those sys logs via the console if I need another debugging point in the future. Thanks @Erik Osterman (Cloud Posse) @btai
Hi, does anyone have any step-by-step guides with easy-to-follow content to set up a kube cluster?
There are tons of resources online. I suggest using something like minikube or k3d to get started.
KinD is my favorite for a quick local cluster
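For a local kind cluster, a minimal config looks roughly like the sketch below (then kind create cluster --config kind.yaml); the worker count is arbitrary:
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
- role: control-plane
- role: worker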
2020-12-23
Hey everyone, what do you guys use to secure access to internal resources (kube-dashboard, grafana, argo, etc). Just port-forward? VPN? I’m looking at exploring Pomerium, but just wondering how other people do it
Hadn’t seen pomerium before. Seems powerful!
My ideal, at least up until now, is Oauth2/OIDC either in the apps themselves or via a proxy if app doesn’t support it ..
Grafana has built in Oauth2 support. Argo has support for an OIDC provider. oauth2_proxy is what I’ve used to front other apps which have no built-in auth or only basic auth or which otherwise can rely on auth headers set by the proxy.
For apps which are 100% end-user web apps, or which are fine with being 100% behind auth I would use oauth2_proxy without blinking (or whatever equivalent these days, since it’s been a few years since I reevaluated).
If anyone else has good details around this I’m all ears, since my concept is very old by now.
oauth2-proxy is still good. You can use it for “forward authentication” and get a similar flow to Pomerium as well.
We’ve been using oauth2-proxy for a couple of years and it has been solid. The only nag is a deployment per different auth group. Pomerium did look more feature-full (auth groups? metrics and maybe traces?) last time I looked, but I can’t justify spending time to investigate and change.
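For the forward-auth flow mentioned above, with ingress-nginx the protected app’s Ingress just points at the oauth2-proxy endpoints via annotations; a sketch (hostnames and backend are placeholders):
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: grafana
  annotations:
    kubernetes.io/ingress.class: nginx
    # unauthenticated requests are checked against oauth2-proxy and redirected to sign-in
    nginx.ingress.kubernetes.io/auth-url: "https://auth.example.com/oauth2/auth"
    nginx.ingress.kubernetes.io/auth-signin: "https://auth.example.com/oauth2/start?rd=$scheme://$host$request_uri"
spec:
  rules:
  - host: grafana.example.com
    http:
      paths:
      - path: /
        backend:
          serviceName: grafana
          servicePort: 3000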
Does anyone use Google Skaffold, https://tilt.dev/ or another developer iteration/productivity tool? I’m currently stumbling around with evaluating Tilt.
tilt looks interesting, I’ve recently been using telepresence.io for local development stuff.
I saw Tilt demoed last year at a Meetup at Replicated – it looked very impressive! https://www.youtube.com/watch?v=fWUd31TIEfY I’ve used Skaffold a little, it looks simple and direct if more limited
I at least got up and running with Tilt here on a non-special app. First thing I hated was that I had to write a kube yaml, so I went ahead and did a Grafana Tanka demo too, and then coerced Tilt to generate the basic service libsonnet file via jsonnet + json2k8s as an exercise (why can’t tanka replace the jsonnet completely and allow generating it without a tanka “environment”??)… Tilt works at least… not sure if useful for the other engineers yet. I’m not quite the daily target audience for it myself.
allow_k8s_contexts(['local', 'k3d-local', 'k3d-k3s-default', 'kind'])
watch_file('jsonnet-libs/beaconpush/beaconpush.libsonnet')
docker_build('beaconpush/beaconpush', 'beaconpush')
k8s_yaml(local('jsonnet -J lib -J vendor jsonnet-libs/beaconpush/beaconpush.libsonnet | json2k8s'))
k8s_resource(workload='beaconpush', port_forwards=[
port_forward(6050, name='frontend')
])
Update on the above. tk eval path/to/jsonnet replaces the jsonnet CLI 1:1 while helping you with the jsonnet include paths! Pipe to json2k8s as before. But I’m moving towards having each service always provide a “default” Tanka environment and am now using tk eval environment/default in the Tiltfile.
I used Skaffold and am happy with it
Didn’t try Skaffold yet. On the surface it seemed like Tilt was one level up, but I should probably compare before believing that at face value.
Anything specific you could say about Skaffold that is particularly useful?
It can be used on a developer desktop and in a CI/CD pipeline
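A minimal skaffold.yaml for comparison with the Tiltfile above (image name and manifest paths are made up):
apiVersion: skaffold/v2beta5
kind: Config
build:
  artifacts:
  - image: example/myapp        # built from ./Dockerfile by default
deploy:
  kubectl:
    manifests:
    - k8s/*.yaml
Running skaffold dev then rebuilds and redeploys on file changes, which is the part that overlaps with Tilt.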
2020-12-24
I know, I know, I should be with family and such, but I’m injured in bed with nothing to do so I’m playing with EKS. So the question is: to node_group or not to node_group? (I’m new to this and I want to play with Istio after I have the cluster running)
As to which node group: use the managed node groups to start
do you guys use istio?
Managed node group is my preference
Istio is on the roadmap
2020-12-25
for those using Service meshes any pros and cons between istio and Gloo ( and maybe others?)?
I’m interested in feedback about Maesh too; its paradigm is totally different: no sidecars.
From what I gather, Maesh does not run a sidecar, which limits the mTLS communication plug-in a bit and adds one hop to every outside connection, since it uses a gateway for intercommunication of services, which they say is faster than running the sidecar on every service
In my opinion the sidecar is so powerful that the overhead (3 ms????) is nothing I would worry about
we currently run ~2500 applications, which means ~2500 pods and so 6000 containers; if you add sidecars for all of them, it means 8500 containers, it’s not a “small” change
that is quite a lot; I guess in my world I will be running like 20 pods, so that is why I did not see it as a problem
and how does the outgoing gateway they use affect you? Given that every connection will go through the proxy, it will have to be sized correctly etc.
no service mesh for now, this is why I’m interested in feedback
you might be interested by what we do, here an article from a colleague with whom I set up our CD : https://medium.com/qonto-way/how-we-scaled-our-staging-deployments-with-argocd-a0deef486997
How do we deploy a full environment composed of ~100 containers in around 3 minutes?
thanks @Issif
IMO - Istio has emerged as the industry leader, just like Kubernetes did a few years ago. Other tools may have novel enhancements that make them good in niche cases, but Istio will always have the user base and backing I’m looking for moving forward
2020-12-26
2020-12-28
For deploying apps in K8s using helm charts, what are the recommended tools (to create, lint, test, etc.) you guys use? I’m new to this and I want to know what I should use to go from repo to infra to deployment (CRDs and such), GitOps all the way basically
I haven’t settled on anything in prod, but for my test clusters I absolutely love the simplicity of the k3s helm controller and its CRD.
https://github.com/k3s-io/helm-controller
Concourse helm install:
apiVersion: helm.cattle.io/v1
kind: HelmChart
metadata:
  name: concourse
spec:
  chart: concourse
  repo: https://concourse-charts.storage.googleapis.com/
  valuesContent: |-
    concourse:
      web:
        externalUrl: "http://concourse.kube.local"
        prometheus:
          enabled: "true"
        ingress:
          annotations:
            kubernetes.io/ingress.class: traefik
            forecastle.stakater.com/expose: "true"
            forecastle.stakater.com/icon: "https://avatars1.githubusercontent.com/u/7809479?s=400&v=4"
            forecastle.stakater.com/group: "control"
    postgresql:
      commonAnnotations:
        argocd.argoproj.io/compare-options: IgnoreExtraneous
        argocd.argoproj.io/sync-options: Prune=false
interesting
One thing about it though.. any time you delegate a task to another controller - you need to monitor those executions for success or failures separately. There are definite benefits to expanding the chart on the client side I guess … not too bothered for dev at this point of course
I’m about to deploy a Spring Boot app and I was thinking to create a helm chart to deploy it
but I have no idea if it is the right way to go
since there are many ways to do the same thing
@jose.amengual If I understand your question correctly, the few that I know that are big in the space are:
- Helmfile — Check out #helmfile. This is the Cloud Posse way, so you can get a feel for it via their OSS stuff. If I were to start a new K8s project… I would think about this.
- Flux / Helm Operator — This is the WeaveWorks / de facto GitOps way. Flux is deployed into your cluster as pods and they watch your chart / release repos for your desired state. Helm Operator installs a CRD for HelmRelease files which 1-1 map your Helm chart deployments to that cluster. I use this pattern for a client in Prod. It’s pretty solid and I like it, but I haven’t used much else so take my opinion with a grain of salt. Worth looking into Flux v2 as well since that looks awesome and should be out of beta soon I would guess.
- ArgoCD — Another GitOps CD tool. I know the least about this one, but it seems it’s more UI-based than Flux, which is completely git-based.
k3s’ helm-controller looks very similar to Helm Operator / a HelmRelease CRD manifest.
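For reference, a HelmRelease under the Flux v1 Helm Operator looks roughly like this (the chart and values are placeholders), which is indeed very close to the k3s HelmChart CRD above:
apiVersion: helm.fluxcd.io/v1
kind: HelmRelease
metadata:
  name: podinfo
  namespace: default
spec:
  releaseName: podinfo
  chart:
    repository: https://stefanprodan.github.io/podinfo
    name: podinfo
    version: 5.0.0
  values:
    replicaCount: 2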
And Flux watches the repo as a GitHub app? Or via webhook?
I guess I can look that up in the docs
It watches / polls the repo with an SSH Key. So it does a git clone / pull every X minutes.
I guess my question is more like: what is the proper and easiest way to deploy apps in Kubernetes?
I was getting confused by Istio, but then I realized that if you are using helm then it will be just another file in the directory, using Istio instead of an LB etc.
I definitely think Helm is the right answer from my less than 6 months in the game. It’s not perfect — the templates that it generates / documentation / best practices leave a lot to be desired, but it provides a solid way to create a templated app for your company that all services / projects need to conform to.
Check out Cloud Posse’s monochart. My client had already created unique charts for 20+ services using helm generate (or whatever the command is) and that is a horrible idea. Using a monochart is the way I would go in the future.
awesome, I will check that out
is this the right page https://github.com/cloudposse/charts/tree/master/incubator/monochart?
Yeah — That’s it.
It’s worth either using that and contributing back or a lot of people take that pattern and make one for their company.
mmm I’m confused
how do you run this? Using helm, or another tool to render to yaml first and then you install the yaml?
Using helm. It is a helm chart that includes other helm charts. See requirements.yaml - this chart depends on just one sub-chart, so it’s actually not totally clear what it demonstrates.
The basics is that you pass in your own “stack” values.yaml with override parameters for each of the included child helm charts. This has been a common pattern for a couple of years.
You could render to yaml first if you want to “review” the resulting diff before shipping to the cluster. I would consider using kapp if UX is important when/if starting out manually experimenting.
So, the same way that terraform modules work, basically
One requirements.yaml (main.tf) that instantiates other charts (modules), and there are overrides (variables.tf) etc. … as an analogy
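Extending that analogy with a sketch (chart name, version, and values are placeholders): requirements.yaml pulls in the child charts, and the stack’s values.yaml overrides their defaults.
# requirements.yaml ("main.tf" in the analogy)
dependencies:
- name: postgresql                          # child chart ~= module
  version: "9.x.x"
  repository: https://charts.bitnami.com/bitnami
  condition: postgresql.enabled
# values.yaml ("variables" / overrides)
postgresql:
  enabled: true
  postgresqlDatabase: myapp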
2020-12-29
Anyone here knows about Kubernetes deployments with helm in air-gapped systems?
What would be the recommended way when using an EKS cluster for, let’s say, CI/CD or control plane management, and you wanted to keep the ingress in a private subnet - will that work? (we keep our CI/CD systems behind VPN, and since I was playing with ArgoCD I was using the port-forwarding option)
For ingress on private subnet do the ALB or NLB on private subnets. We do that. And we also limit the Kubernetes API endpoint to vpc internal only.
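For the private-subnet ingress piece, a sketch of the Service annotations typically used for an internal NLB (exact annotation values vary a bit by Kubernetes version, so treat this as a starting point; the controller names are placeholders):
apiVersion: v1
kind: Service
metadata:
  name: ingress-nginx-controller
  namespace: ingress-nginx
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-type: "nlb"
    service.beta.kubernetes.io/aws-load-balancer-internal: "true"   # older in-tree versions expect "0.0.0.0/0"
spec:
  type: LoadBalancer
  selector:
    app.kubernetes.io/name: ingress-nginx
  ports:
  - name: https
    port: 443
    targetPort: https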
Good question for #office-hours
I will not be in office hours but I would like to know the opinions that people have
I’m actually in the same boat. @jose.amengual are you deploying a cluster just for argocd? I’m exploring the idea of just having a cluster dedicated to running argocd that connects to everything else. As for ingress i’m following the suggestion that mfridh mentioned above
Cool, let’s discuss today
Shoutout to Teleport: https://goteleport.com/teleport/application/
Another way to skin this cat — Haven’t implemented it yet, but I’m planning on skipping a BeyondCorp tool and doing VPN the modern way with WireGuard via Tailscale: https://tailscale.com/
https://github.com/slackhq/nebula is growing too in that space
i’m curious to see what others recommend here, too.. i was chatting with someone last night about https://www.mysocket.io/ , https://github.com/inlets/inlets-operator , and https://ngrok.com/
I’m trying to match the environment we have, since I think it’s a good idea to isolate the environments, but I don’t want something cumbersome to use
The problem that I have with WireGuard is that it’s still beta? Or less than 1.0? Isn’t it? But at any rate it is too new to be on the list of accepted/approved VPN servers for bigcorp companies, so we could fail a certification if that is the case
thanks for all the recommendations in office hours
@Matt Gowie I was reading the doc of tailscale and I saw this : https://tailscale.com/kb/1021/install-aws which is basically similar to installing a vpn server but the UI lives on a saas interface
and this CloudFlare Access example is really painful: https://developers.cloudflare.com/access/protocols-and-connections/kubectl
Cloudflare Access is nice for sites that are public and that only allow the Cloudflare Access IPs in
if they are internal, then the tunnel nightmare starts
Teleport is similar too: it requires a proxy, a client, a local tunnel-like command, etc.
I’m not convinced these solutions are better than a VPN server when you have different protocols rather than just different applications
When I see the instructions I can’t stop thinking: “I will need to write new docs for people to use it, I will need to try to terraform the setup of this, I will need to add all my apps that are public and policies for them, I will need to add guides for ssh, kubectl, I will need to add a proxy for services in private subnets etc.…” and the only thing I get is a nice SaaS-managed, SSO-integrated UI
sorry to sound so negative, but I looked at the three products’ docs and they are all very similar in complexity to set up within private subnets
maybe I’m missing something very obvious
which is based on WireGuard too
I completed a POC of StrongDM - it was really easy to setup, but the pricing is a bit rough IMO
Wow — looks cool, but yeah that’s some bold pricing.
pricing was my only gripe, it did everything it said it would do and the setup was simple. but being that there are a lot of other options in the ZTNA/SDP space that are $5-20 per user
2020-12-30
What would be the right approach / most effective open-source option for running Kubernetes on KVM, a hypervisor, or LXC in-house on a home lab?
I know for sure if I ask one of my friends he will say https://www.proxmox.com/en/ and I’m starting to believe him
Or hmm. Maybe VMs wasn’t what you asked for
Does it have to be on prem? I believe you can run k8s cheaply on Digital Ocean.
@mfridh yeah, I’ve heard only positive feedback about Proxmox from the community. I should learn more then. Yep, could be VMs or lightweight LXC. Thanks
@Joe Niland I deployed a few tools on DO’s Kubernetes cluster before and am still using their droplets. It’s pretty easy to manage, but I need more resources now to handle different tasks at a smaller cost - a pain in the butt lol
@Joe Niland Asus 1151 Z270, i7, 16Gb DDR4, SSD512Gb, Nvidia Geforce 1070. Ubuntu 20.04, 100Mb internet bandwidth.
I’ll add + 16Gb RAM I guess
I use the terraform libvirt provider for creating KVM hosts. Then I used to use kubespray for the k8s install on VMs plus a separate bare-metal ARM SBC. Now I’ve dumped the ARM stuff and am looking at various k8s projects I can do with, or close to, terraform. Currently investigating kinvolk lokomotive.
@kskewes sounds awesome, mate. I’d love to use Terraform with libvirt or lxc locally, but I don’t have expertise provisioning on-premise yet, only on clouds. Could you share your code pls so I can try it out? What is your thought about lxc - don’t you think it’s more effective to use on Linux?
Looks like I might not have pushed in a month or so but it’s here. https://github.com/kskewes/home-lab/tree/master/terraform I provision a matchbox pxe server and then pxe boot bunch of vms for k8s. The provider itself has various examples.
I haven’t used lxc before.
Yeah, it drops this error. Looks like the provider is supplied fine; I assume it doesn’t need any change. Running on a host machine
terraform init -upgrade
Upgrading modules...
- k8s_controllers in ../modules/netboot
- k8s_workers in ../modules/netboot
Initializing the backend...
Initializing provider plugins...
- Finding dmacvicar/libvirt versions matching "0.6.3"...
Error: Failed to query available provider packages
Could not retrieve the list of available versions for provider
dmacvicar/libvirt: provider registry registry.terraform.io does not have a
provider named registry.terraform.io/dmacvicar/libvirt
If you have just upgraded directly from Terraform v0.12 to Terraform v0.14
then please upgrade to Terraform v0.13 first and follow the upgrade guide for
that release, which might help you address this problem.
Did you install the provider binary? There are specific instructions for each os. We are working on a pure go version of binary to make it automatic on init. PR wip.
Also I run a small tfvars file that isn’t committed to git so defaults may not be tested always.
As far as I know it should be installed automatically after terraform init?
I’ll add tfvars then thanks for letting me know
There are specific install instructions for the provider you will need to do for now until above pr merged.
Thinking about it, I haven’t written my home lab repo for others to consume without a good understanding of what’s described, so perhaps it’s easier to work from the provider plugin examples?
Yeah, sounds more rational
But if you continue with this and circle back to my repo and have questions about what I’ve done, do hit me up. It’d be useful for me too.
My basic structure is:
- Bring up matchbox pxe host VM.
- Start lokomotive or similar k8s provisioning system which loads config into matchbox over its API.
- Bring up k8s nodes as raw vms with network boot. (Ie from matchbox)
- Fetch kubeconfig and interact with cluster.
Currently lokomotive bare metal is stuck and I need to have another go and update upstream issue.
Jeez, I have to learn those tools to understand what’s going on lol. I have never heard of matchbox; however, someone recently mentioned the lokomotive tool at least ahah :))
Yeah it’s a lot of work. What area of the stack do you want to learn about?
Currently learning k8s to migrate from DO cloud to my own small k3s Rancher home lab, as I’ve explained earlier. BTW, one nice engineer suggested I spin up minikube or k3s and focus more on k8s’ API rather than kind of waste my time. So I decided: this guy is experienced, therefore he knows the shit; let’s cut off that pain we’ve been trying to solve for a few days with my previous setup lol
Migrating a bunch of Wordpress websites in prod, Jenkins for testing. Yet, we need a private GitLab CI for a test repo and to run some pipelines, backups etc.
Nothing fancy for now lol
By any chance, does anyone here have a multi-region kubernetes setup that still uses wildcard DNS? I have a single cluster with hundreds of ingresses like foo.example.com or bar.example.com and I had been thinking about moving to a multi-region setup where half of the ingresses would live in us-east and half in us-west, but would like to keep the wildcard dns setup as to not need to create a bunch of route53 records. I can’t use Route53 geo-based routing as users that have their site hosted in us-east could be accessing their site from a different location (i.e. california). To clarify, the reason that I want to add a cluster in a second region is to minimize blast radius and not for redundancy (foo.example.com would only live on the us-east cluster OR the us-west cluster but not both)