#kubernetes (2020-12)

kubernetes

Archive: https://archive.sweetops.com/kubernetes/

2020-12-01

Vlad Ionescu (he/him) avatar
Vlad Ionescu (he/him)

Spot support in Managed Node Groups for EKS: https://aws.amazon.com/blogs/containers/amazon-eks-now-supports-provisioning-and-managing-ec2-spot-instances-in-managed-node-groups/

^ I know this was discussed here a couple of times with people saying it was a blocker

Amazon EKS now supports provisioning and managing EC2 Spot Instances in managed node groups | Amazon Web Services

This post was contributed by Ran Sheinberg, Principal Solutions Architect and Deepthi Chelupati, Sr Product Manager Amazon Elastic Kubernetes Service (Amazon EKS) makes it easy to run upstream, secure, and highly available Kubernetes clusters on AWS. In 2019, support for managed node groups was added, with EKS provisioning and managing the underlying EC2 Instances (worker […]

tim.j.birkett avatar
tim.j.birkett

I’ve always ignored managed node groups because of the lack of spot support… does anyone use managed node groups with custom CNI configuration? Do managed nodes come with SSM out of the box?

2020-12-02

Zachary Loeber avatar
Zachary Loeber

https://get-kbld.io/ -> this and all the carvel tooling may be interesting to keep an eye on.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Will you be on #office-hours today?

Zachary Loeber avatar
Zachary Loeber

Sorry, I so wanted to be there

Zachary Loeber avatar
Zachary Loeber

Almost done with a spurt of work that should make me feel like things are mostly over a hump

Zachary Loeber avatar
Zachary Loeber

I promise to derail and otherwise chaosmonkey up your office hours again soon

1

2020-12-04

tim.j.birkett avatar
tim.j.birkett

Stupid question here… When an image is pulled by the kubelet, is this done with the default service account, or with whatever service account is specified on the pod (default when nothing is specified on the pod)? I’m wondering if all service accounts need image pull secrets setting, or just the default service accounts

tim.j.birkett avatar
tim.j.birkett

Stupid answer here… https://kubernetes.io/docs/tasks/configure-pod-container/configure-service-account/#add-image-pull-secret-to-service-account - the docs suggest that pods are patched with imagePullSecrets from the service account the pod is using, but it isn’t explicit, and led me to initially believe that patching the default sa was all that was needed… That is not true.

After some testing, I have found that all service accounts need patching with imagePullSecrets for them to be applied to the pods using those service accounts.

This patching is done by an admission controller and the code responsible for patching is here: https://github.com/kubernetes/kubernetes/blob/c6f7fbcfbc69120934ed87c5ac701bd1890347a3/plugin/pkg/admission/serviceaccount/admission.go#L193-L198
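
For reference, attaching a pull secret to a service account is just a field on the ServiceAccount object; a minimal sketch (the account and secret names here are hypothetical):

```yaml
# Every service account whose pods pull from the private registry
# needs this field, not just "default".
apiVersion: v1
kind: ServiceAccount
metadata:
  name: ci-runner        # hypothetical service account
  namespace: default
imagePullSecrets:
  - name: regcred        # an existing docker-registry secret
```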

tim.j.birkett avatar
tim.j.birkett
11:30:34 AM

Does anyone else feel that kube-system gets overused sometimes? What are people’s strategies for installing system related tools like cluster-autoscaler, kube-downscaler and other operators / controllers? Single namespace? Namespace per controller? Something else?

roth.andy avatar
roth.andy

I tend to default to using a new namespace, it makes it easier to keep things organized.

btai avatar

+1 new namespace

1
kskewes avatar
kskewes

For sure. Try to keep everything out

Jurgen avatar

yeah, I put nothing in there that doesn’t need to be in there

Jurgen avatar

same as default

mark340 avatar
mark340

I keep simple cluster operations tooling in kube-system. Exceptions are operators and controllers with a great deal of Kubernetes objects. Each to its own.

2020-12-08

Craig Dunford avatar
Craig Dunford

I am working on the upgrade implementation for a legacy application we host in Kubernetes. Part of the upgrade procedure is going to require manipulation of k8s resources (configmaps, potentially ingress resources) at strategic points during the upgrade lifecycle. I am planning on using helm hooks running Jobs to do this; my question/concern is: is it bad practice to have a pod manipulating k8s resources? If it’s not, what is the best way to accomplish it, just having kubectl available within the pod?
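
A rough sketch of such a hook Job, assuming the approach above (all names, the image, and the patch payload are hypothetical, and the Job’s service account needs RBAC for whatever it touches):

```yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: pre-upgrade-config             # hypothetical
  annotations:
    "helm.sh/hook": pre-upgrade
    "helm.sh/hook-delete-policy": hook-succeeded
spec:
  template:
    spec:
      serviceAccountName: upgrade-hook   # needs RBAC on configmaps
      restartPolicy: Never
      containers:
        - name: patch-config
          image: bitnami/kubectl:1.17    # any image bundling kubectl
          command:
            - kubectl
            - patch
            - configmap
            - app-config
            - -p
            - '{"data":{"mode":"maintenance"}}'
```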

Jonathan Marcus avatar
Jonathan Marcus

We currently build our product on AWS and we’re looking to also support GCP. We use ECS backed by EC2, and using GCP means moving to K8s. I know it’ll be a lot of work (a lot) so we first want to get a 10,000-ft view by mapping all our current AWS concepts to their GCP/K8s equivalents.

Anybody have pointers to useful guides on this conversion?

roth.andy avatar
roth.andy

IMO, move to k8s first while staying on AWS. Kubernetes is a fantastic abstraction layer.

1
roth.andy avatar
roth.andy

Once you are running on kubernetes, the location of where that kubernetes is running doesn’t mean as much

roth.andy avatar
roth.andy

What do you mean by “support GCP”?

roth.andy avatar
roth.andy
Multi-Cloud is the Worst Practice - Last Week in AWS

Multi-cloud (that is, running the same workload across multiple cloud providers in a completely agnostic way) is absolutely something you need to be focusing on—at least, according to two constituencies: Declining vendors that realize that if you don’t go multi-cloud, they’ll have nothing left to sell you. AWS isn’t going to build a multi-cloud dashboard, […]

Jonathan Marcus avatar
Jonathan Marcus

It’s just like how Databricks deploys into AWS and Azure. For enterprise clients we deploy into their VPC, and currently only AWS is supported. If we use k8s then that seems like a good way to support all major cloud providers.

Jonathan Marcus avatar
Jonathan Marcus

I like the idea to move to k8s first while staying on AWS. Good call

Ofir Rabanian avatar
Ofir Rabanian

I’m setting up Istio on EKS. Wanted to ask what’s the best strategy to have an encrypted TLS connection between a client outside the cluster and a pod (ingress). I’m managing certificates in AWS ACM and it seems that ELB has support for that using annotations, but as I understand it, that’ll leave traffic between the ELB and the Istio gateway unencrypted. Any opinion about that would be extremely helpful.

mfridh avatar

Not sure which parts of your statement were actual questions, so I’ll answer two of them: ELB supports ACM certificates, yes.

And ALB target groups can support both HTTP and HTTPS. If you truly require HTTPS, then you can be comfortable (or not? ) knowing no certificate validation is done, so you can install a self-signed cert any way you please.

https://docs.aws.amazon.com/elasticloadbalancing/latest/application/load-balancer-target-groups.html#target-group-routing-configuration

Target groups for your Application Load Balancers - Elastic Load Balancing

Learn how to configure target groups for your Application Load Balancer.
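
The ACM annotations mentioned above are the in-tree AWS cloud provider Service annotations; a minimal sketch with TLS terminated at the load balancer (the certificate ARN and ports are placeholders):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: istio-ingressgateway
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-ssl-cert: arn:aws:acm:us-east-1:123456789012:certificate/placeholder
    service.beta.kubernetes.io/aws-load-balancer-ssl-ports: "https"
    # traffic between the LB and the gateway stays plaintext
    # unless this is set to https/ssl
    service.beta.kubernetes.io/aws-load-balancer-backend-protocol: http
spec:
  type: LoadBalancer
  selector:
    app: istio-ingressgateway
  ports:
    - name: https
      port: 443
      targetPort: 8443
```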

tim.j.birkett avatar
tim.j.birkett

I use cert-manager and letsencrypt with the istio-ingressgateway and NLBs with no TLS offload, so all traffic is passed through to the istio-ingressgateway NodePort services for routing to the VirtualService(s). istio-ingressgateway is supposed to handle normal Ingress objects but I haven’t managed to make that work yet.

tim.j.birkett avatar
tim.j.birkett

cert-manager and letsencrypt use Route53 to auth the domain for issuing certificates.

tim.j.birkett avatar
tim.j.birkett

letsencrypt is pretty limited in terms of issuing rates for domains (I use wildcard certs); zerossl.com is a bit more user-friendly in terms of rate limits (there are none), but if you want wildcard certs, it’s time to start paying

Ofir Rabanian avatar
Ofir Rabanian

Thanks for the comments. If i’m currently using an nlb with tls termination - the traffic between the nlb and istio ingress is unencrypted, but then gets encrypted in the ingress using mTLS to the pods. Am I correct?

tomv avatar

we use a classic elb with an istio-ingressgateway and set the backend protocol to https

tomv avatar

we did try to do a tls passthrough but couldn’t get that to work, so in our current setup tls is still terminated at the elb, but traffic from the elb to the hosts is still encrypted

Ofir Rabanian avatar
Ofir Rabanian

@tomv how’s the traffic from the elb to the hosts encrypted? with what key does that get encrypted?

tomv avatar

we have a tls cert in the ingressgateway namespace signed by the k8s ca

tomv avatar

the elbs don’t do cert verification to the hosts so it’s just a regular self signed cert

Ofir Rabanian avatar
Ofir Rabanian

so between elb and ingress gateway the traffic is plaintext?

tomv avatar

https

Ofir Rabanian avatar
Ofir Rabanian

ohhh got it. nice!

Ofir Rabanian avatar
Ofir Rabanian

do you think that’s possible also for tcp traffic? (tls), but not https?

tomv avatar

from what i’ve seen it seems possible from an nlb, but thats only having read the docs. havent tried it myself.

2020-12-09

2020-12-10

2020-12-14

Eric Berg avatar
Eric Berg

I am tightening up permissions on my EKS cluster (1.17) for my devs to manage k8, both in a read-only as well as more of an admin role, but I’m having difficulty finding the right policies to allow k8 mgmt. Can anybody point me in the right direction to help me write the policies I need for my users to talk to k8s? Thanks!

Vlad Ionescu (he/him) avatar
Vlad Ionescu (he/him)

AWS IAM permissions grant an IAM user the rights to talk to an EKS cluster (list, describe, get token, and things like that). Basically, that’s authentication.

Now that you’re authenticated to the cluster, you are a k8s user. To call any k8s APIs (list deployments or whatever) you use k8s RBAC (Role-Based Access Control). That’s the authorization part.

https://aws.amazon.com/premiumsupport/knowledge-center/eks-iam-permissions-namespaces/ explains it a bit
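
The pattern from that article, roughly: map an IAM role to a k8s group in aws-auth, then bind that group with RBAC (the role ARN and group name below are placeholders; "view" is the built-in read-only ClusterRole):

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: aws-auth
  namespace: kube-system
data:
  mapRoles: |
    - rolearn: arn:aws:iam::123456789012:role/eks-read-only
      username: eks-read-only
      groups:
        - read-only
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: read-only
subjects:
  - kind: Group
    name: read-only
    apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  name: view            # built-in read-only aggregate role
  apiGroup: rbac.authorization.k8s.io
```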

Eric Berg avatar
Eric Berg

Thanks, @Vlad Ionescu (he/him)! The config I inherited adds about 5 admins to the aws-auth configmap, which are accessed via roles in the AWS subaccounts. I’m breaking up the roles on the path to least-privilege access, so if I read this page right, I can create two new RBAC groups (with supplied YAML) and add my users to the groups defined by them, instead of to system:masters.

Managing users or IAM roles for your cluster - Amazon EKS

The aws-auth ConfigMap is applied as part of the guide which provides a complete end-to-end walkthrough from creating an Amazon EKS cluster to deploying a sample Kubernetes application. It is initially created to allow your nodes to join your cluster, but you also use this ConfigMap to add RBAC access to IAM users and roles. If you have not launched nodes and applied the

2020-12-16

Alex Jurkiewicz avatar
Alex Jurkiewicz

We have some AWS Lambda functions that I’d like to migrate to run in our k8s clusters (EKS). Has anyone done this and can offer toolchain recommendations? There seem to be a lot of options: OpenFaaS, Fission, Kubeless, …

Hao Wang avatar
Hao Wang

hey @Alex Jurkiewicz, this looks interesting… I haven’t done this before, but would like to follow up if you run into any issue

johntellsall avatar
johntellsall

@Alex Jurkiewicz I’m a fan of OpenFaas. You get most of the benefits of serverless and containers. You can run functions directly (no k8s), and there’s a marketplace of functions. The most impressive demo uses ML to auto-flag ranty Github Issues as bugs with very little code: https://github.com/openfaas/workshop/blob/master/lab5.md Please post what you find out!

openfaas/workshop

Learn Serverless for Kubernetes with OpenFaaS. Contribute to openfaas/workshop development by creating an account on GitHub.

Alex Jurkiewicz avatar
Alex Jurkiewicz

How do you run without k8s? Where does the compute run then?

My context is that I’m migrating an existing lambda app to a k8s environment

johntellsall avatar
johntellsall

OpenFaas has multiple backends. K8s is the main one; there’s also Docker Swarm. There’s also a “run directly” option, https://docs.openfaas.com/deployment/#faasd-serverless-for-everyone-else: “faasd is OpenFaaS, reimagined without the complexity and cost of Kubernetes. It runs well on a single host with very modest requirements, and can be deployed in a few moments. Under the hood it uses containerd and Container Networking Interface (CNI)”

Deployment - OpenFaaS

OpenFaaS - Serverless Functions Made Simple

containerd

An industry-standard container runtime with an emphasis on simplicity, robustness, and portability

containernetworking/cni

Container Network Interface - networking for Linux containers - containernetworking/cni

2
johntellsall avatar
johntellsall

The above link also mentions using raw AWS ECS for the backend, but that doesn’t work yet. OpenFaas is about the serverless end of things, which might or might not be Kubernetes

2020-12-18

btai avatar

anyone have any kubernetes feature gates that you’ve turned on that you love and we all should know about?

Joaquin Menchaca avatar
Joaquin Menchaca

I am deploying two apps, server + client. The client needs to configure a URL that points to the server; is there a way I can reference the svc? The server’s ports can be reached at $RELEASE-dgraph-alpha-$IDX.$RELEASE-dgraph-alpha-headless.$NAMESPACE.svc
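
One common approach is to inject the service’s cluster DNS name into the client via an env var in the client’s chart template; a sketch (the env var name, port, and protocol are illustrative):

```yaml
env:
  - name: DGRAPH_ALPHA_URL   # hypothetical variable the client reads
    value: "http://{{ .Release.Name }}-dgraph-alpha-headless.{{ .Release.Namespace }}.svc.cluster.local:8080"
```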

2020-12-21

Joaquin Menchaca avatar
Joaquin Menchaca

I’m not sure how to get this to work:

{{- if .Values.script.enabled -}}
apiVersion: v1
kind: ConfigMap
metadata:
  name: {{ include "dgraph-lambda.fullname" . }}-config
  labels:
    {{- include "dgraph-lambda.labels" . | nindent 4 }}
data:
  script.js: {{ .Values.script.script }}
{{- end -}}

This get me:

[ERROR] templates/config.yaml: unable to parse YAML: error converting YAML to JSON: yaml: line 13: mapping values are not allowed in this context
Alex Jurkiewicz avatar
Alex Jurkiewicz

You’ll need to show the rendered output – it’s not clear what line 13 contains

Joaquin Menchaca avatar
Joaquin Menchaca

That is all it shows.

Alex Jurkiewicz avatar
Alex Jurkiewicz

Is there some way to render the yaml output so you can see it?

Joaquin Menchaca avatar
Joaquin Menchaca

I wish.

Alex Jurkiewicz avatar
Alex Jurkiewicz

Then you might need to perform a binary search to identify which part of your template has problems. I’m guessing it’s either

  labels:
    {{- include "dgraph-lambda.labels" . | nindent 4 }}

or

  script.js: {{ .Values.script.script }}
Joaquin Menchaca avatar
Joaquin Menchaca

if there is some interim rendering issue, such as at the JSON stage, helm communicates results through a “friendly” stack trace, and shows you line numbers in JSON that it will not show you.

Alex Jurkiewicz avatar
Alex Jurkiewicz

if it’s the former, the indent number might be insufficient. If it’s the latter, it might be rendering a long string without appropriate escaping

Alex Jurkiewicz avatar
Alex Jurkiewicz

You could try something like

script.js: |
  {{ .Values.script.script }}

perhaps…

Joaquin Menchaca avatar
Joaquin Menchaca

I found ultimately it is just that I needed to quote the output from script, so {{ .Values.script.script | quote }}

1
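
For completeness, the working template with that fix applied:

```yaml
{{- if .Values.script.enabled -}}
apiVersion: v1
kind: ConfigMap
metadata:
  name: {{ include "dgraph-lambda.fullname" . }}-config
  labels:
    {{- include "dgraph-lambda.labels" . | nindent 4 }}
data:
  script.js: {{ .Values.script.script | quote }}
{{- end -}}
```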

2020-12-22

Matt Gowie avatar
Matt Gowie

Hey EKS folks — I’m finding a pretty consistent worker node downtime pattern: I have a worker node group of 4 and after an undefined number of days, the oldest worker node will go into an Unknown state. The node gets tainted with node.kubernetes.io/unreachable:NoSchedule + node.kubernetes.io/unreachable:NoExecute, the kubelet stops posting node status updates to EKS, and I can no longer access that particular node.

Has anyone seen this pattern? I’m just starting to look into it and figured it’d be quick to post about it here before I jump all the way down the rabbit hole.

Matt Gowie avatar
Matt Gowie

My first thought would be disk space build up, but usually that stands out quickly and I haven’t seen it yet.

tim.j.birkett avatar
tim.j.birkett

What is your AMI Version?

tim.j.birkett avatar
tim.j.birkett

Or, do you roll your own AMI? If so, check the version of containerd that’s installed.

tim.j.birkett avatar
tim.j.birkett

If it’s 1.4.0, or if you use the AWS EKS AMI and its version is v20201112 - you have buggy nodes.

Matt Gowie avatar
Matt Gowie

Yeah, I’m on 1.18.9-20201112

Matt Gowie avatar
Matt Gowie

Will read up on the new version. Thanks for the pointer.

tim.j.birkett avatar
tim.j.birkett
Pods stuck in terminating state after AMI amazon-eks-node-1.16.15-20201112 · Issue #563 · awslabs/amazon-eks-ami

What happened: Since upgrading to AMI 1.16.15-20201112 (from 1.16.13-20201007), we see a lot of Pods get stuck in Terminating state. We have noticed that all of these Pods have readiness/liveness p…

Matt Gowie avatar
Matt Gowie

Good stuff — Thanks @Tim Birkett!

tim.j.birkett avatar
tim.j.birkett

No problem! Only knew because it sounded like what I was dealing with a few weeks ago

2
Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

@tim.j.birkett’s suggestion seems like the most plausible. However @btai spent weeks chasing after a similar problem that turned out to be a bug in Sysdig that was causing kernel panics (if I recall correctly)

btai avatar

yeah, we had a special case of sysdig causing kernel panics, but the logs weren’t getting shipped fast enough via our logging agent (which is what I was initially relying on; understandable in hindsight, given the kernel panic). The way I realized it was kernel panicking from sysdig was by looking at the EC2 system logs in the AWS console

Matt Gowie avatar
Matt Gowie

Cool good to know. Just did the rolling update of the EKS AMIs, so I’ll let it bake for a bit and will check those sys logs via the console if I need another debugging point in the future. Thanks @Erik Osterman (Cloud Posse) @btai

1
Mr.Devops avatar
Mr.Devops

Hi anyone have any step by step guides with easy to follow contents to setup a kube cluster?

Christian avatar
Christian

There are tons of resources online. I suggest using something like minikube or k3d to get started

roth.andy avatar
roth.andy

KinD is my favorite for a quick local cluster

https://kind.sigs.k8s.io/

2020-12-23

Christian avatar
Christian

Hey everyone, what do you guys use to secure access to internal resources (kube-dashboard, grafana, argo, etc). Just port-forward? VPN? I’m looking at exploring Pomerium, but just wondering how other people do it

pomerium/pomerium

Pomerium is an identity-aware access proxy. Contribute to pomerium/pomerium development by creating an account on GitHub.

1
mfridh avatar

Hadn’t seen pomerium before. Seems powerful!

My ideal, at least up until now, is Oauth2/OIDC either in the apps themselves or via a proxy if app doesn’t support it ..

Grafana has built in Oauth2 support. Argo has support for an OIDC provider. oauth2_proxy is what I’ve used to front other apps which have no built-in auth or only basic auth or which otherwise can rely on auth headers set by the proxy.

For apps which are 100% end-user web apps, or which are fine with being 100% behind auth I would use oauth2_proxy without blinking (or whatever equivalent these days, since it’s been a few years since I reevaluated).

If anyone else has good details around this I’m all ears, since my concept is very old by now.

Douglas Clow avatar
Douglas Clow

oauth2-proxy is still good. You can use it for “forward authentication” and get a similar flow to Pomerium as well.

1
1
kskewes avatar
kskewes

We’ve been using oauth2-proxy for a couple of years and it has been solid. The only nag is needing a deployment per different auth group. Pomerium did look more feature-full (auth groups? metrics and maybe traces?) last time I looked, but I can’t justify spending the time to investigate and change.

mfridh avatar

Does anyone use Google Skaffold, https://tilt.dev/ or another developer iteration/productivity tool? I’m currently stumbling around with evaluating Tilt.

Tilt

Kubernetes for Prod, Tilt for Dev

tim.j.birkett avatar
tim.j.birkett

tilt looks interesting, I’ve recently been using telepresence.io for local development stuff.

joey avatar

one of my peers is gung-ho on tilt, but I haven’t gone out of my way to use it yet

1
johntellsall avatar
johntellsall

I saw Tilt demoed last year at a Meetup at Replicated – it looked very impressive! https://www.youtube.com/watch?v=fWUd31TIEfY I’ve used Skaffold a little; it looks simple and direct, if more limited

1
mfridh avatar

I at least got up and running with Tilt here on a non-special app. First thing I hated was that I had to write a kube yaml, so I went ahead and did a Grafana Tanka demo too, and then coerced Tilt to generate the basic service libsonnet file via jsonnet + json2k8s as an exercise (why can’t tanka replace the jsonnet completely and allow generating it without a tanka “environment”?? )… Tilt works at least… not sure if useful for the other engineers yet. I’m not quite the daily target audience for it myself.

allow_k8s_contexts(['local', 'k3d-local', 'k3d-k3s-default', 'kind'])

watch_file('jsonnet-libs/beaconpush/beaconpush.libsonnet')

docker_build('beaconpush/beaconpush', 'beaconpush')

k8s_yaml(local('jsonnet -J lib -J vendor jsonnet-libs/beaconpush/beaconpush.libsonnet | json2k8s'))
k8s_resource(workload='beaconpush', port_forwards=[
    port_forward(6050, name='frontend')
])
1
mfridh avatar

Update on the above. tk eval path/to/jsonnet replaces jsonnet cli 1:1 while helping you with the jsonnet include paths! Pipe to json2k8s as before.

But I’m moving towards having each service always providing a “default” Tanka environment and am now using tk eval environment/default in the Tiltfile.

Amit Karpe avatar
Amit Karpe

I used skaffold and happy with it

mfridh avatar

Didn’t try Skaffold yet. On the surface it seemed like Tilt was one level up, but I should probably compare before believing that at face value.

Anything specific you could say about Skaffold that is particularly useful?

Amit Karpe avatar
Amit Karpe

It can be used on a developer desktop and in a CI/CD pipeline

2020-12-24

jose.amengual avatar
jose.amengual

I know, I know, I should be with family and such, but I’m injured in bed with nothing to do so I’m playing with EKS. So the question is: to node_group or not to node_group? (I’m new to this and I want to play with istio after I have the cluster running)

1
Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Oh no! Hope you recover quickly

1
Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

As to which node group: use the managed node groups to start

jose.amengual avatar
jose.amengual

do you guys use istio?

Amit Karpe avatar
Amit Karpe

Managed node group is my preference

Amit Karpe avatar
Amit Karpe

Istio is on the roadmap

2020-12-25

jose.amengual avatar
jose.amengual

for those using service meshes, any pros and cons between istio and Gloo (and maybe others)?

Issif avatar

I’m interested in feedback about Maesh too; its paradigm is totally different, no sidecars.

jose.amengual avatar
jose.amengual

From what I gather, Maesh does not run a sidecar, and that limits the mTLS communication plug-in a bit and adds one hop to every outside connection, since it uses a gateway for intercommunication of services, which they say is faster than running the sidecar on every service

jose.amengual avatar
jose.amengual

In my opinion the sidecar is so powerful that the overhead (3 ms????) is nothing I would worry about

Issif avatar

we currently run ~2500 applications, which means ~2500 pods and so 6000 containers; if you add sidecars for all of them, that’s 8500 containers. It’s not a “small” change

jose.amengual avatar
jose.amengual

that is quite a lot; in my world I will be running like 20 pods, so that is why I did not see it as a problem

jose.amengual avatar
jose.amengual

and how does the outgoing gateway they use affect you? Considering that every connection will go through the proxy, it will have to be sized correctly, etc.

Issif avatar

no service mesh for now, which is why I’m interested in feedback

Issif avatar

you might be interested by what we do, here an article from a colleague with whom I set up our CD : https://medium.com/qonto-way/how-we-scaled-our-staging-deployments-with-argocd-a0deef486997

How we scaled our staging deployments with ArgoCD

How do we deploy a full environment composed of ~100 containers in around 3 minutes?

1
jose.amengual avatar
jose.amengual

thanks @Issif

roth.andy avatar
roth.andy

IMO - Istio has emerged as the industry leader, just like Kubernetes did a few years ago. Other tools may have novel enhancements that make them good in niche cases, but Istio will always have the user base and backing I’m looking for moving forward

2020-12-26

2020-12-28

jose.amengual avatar
jose.amengual

for deploying apps in K8s using helm charts, what are the recommended tools (to create, lint, test, etc.) you guys use? I’m new to this and I want to know what I should use to go from repo to infra to deploy (CRDs and such), gitOps all the way basically

mfridh avatar

I haven’t settled on anything in prod, but for my test clusters I absolutely love the simplicity of the k3s helm controller and its CRD.

https://github.com/k3s-io/helm-controller

Concourse helm install:

apiVersion: helm.cattle.io/v1
kind: HelmChart
metadata:
  name: concourse
spec:
  chart: concourse
  repo: <https://concourse-charts.storage.googleapis.com/>
  valuesContent: |-
    concourse:
      web:
        externalUrl: "<http://concourse.kube.local>"
        prometheus:
          enabled: "true"
      ingress:
        annotations:
          kubernetes.io/ingress.class: traefik
          forecastle.stakater.com/expose: "true"
          forecastle.stakater.com/icon: "<https://avatars1.githubusercontent.com/u/7809479?s=400&v=4>"
          forecastle.stakater.com/group: "control"
    postgresql:
      commonAnnotations:
          argocd.argoproj.io/compare-options: IgnoreExtraneous
          argocd.argoproj.io/sync-options: Prune=false
k3s-io/helm-controller

Contribute to k3s-io/helm-controller development by creating an account on GitHub.

jose.amengual avatar
jose.amengual

interesting

mfridh avatar

One thing about it though… any time you delegate a task to another controller, you need to monitor those executions for success or failure separately. There are definite benefits to expanding the chart on the client side, I guess… not too bothered for dev at this point, of course

jose.amengual avatar
jose.amengual

I’m about to deploy a Spring Boot app and I was thinking of creating a helm chart to deploy it

jose.amengual avatar
jose.amengual

but I have no idea if it is the right way to go

jose.amengual avatar
jose.amengual

since there is many ways to do the same thing

Matt Gowie avatar
Matt Gowie

@jose.amengual If I understand your question correctly, the few that I know that are big in the space are:

  1. Helmfile — Check out #helmfile. This is the Cloud Posse way, so you can get a feel for it via their OSS stuff. If I were to start a new K8s project… I would think about this.
  2. Flux / Helm Operator — This is the WeaveWorks / de facto GitOps way. Flux is deployed into your cluster as pods that watch your chart / release repos for your desired state. Helm Operator installs a CRD for HelmRelease files, which map 1-1 to your Helm chart deployments on that cluster. I use this pattern for a client in prod. It’s pretty solid and I like it, but I haven’t used much else, so take my opinion with a grain of salt. Worth looking into Flux v2 as well, since that looks awesome and should be out of beta soon, I would guess.
  3. ArgoCD — Another GitOps CD tool. I know the least about this one, but it seems it’s more UI-based than Flux, which is completely git-based.
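
For context, a Helm Operator HelmRelease manifest looks roughly like this (the chart location and values are illustrative):

```yaml
apiVersion: helm.fluxcd.io/v1
kind: HelmRelease
metadata:
  name: my-app            # hypothetical release
  namespace: default
spec:
  releaseName: my-app
  chart:
    git: ssh://git@github.com/example/charts
    path: charts/my-app
    ref: master
  values:
    image:
      tag: "1.2.3"
```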
Matt Gowie avatar
Matt Gowie

k3s’ helm-controller looks very similar to Helm Operator / a HelmRelease CRD manifest.

jose.amengual avatar
jose.amengual

And Flux watches the repo as a GitHub app? Or via a webhook?

jose.amengual avatar
jose.amengual

I guess I can look that up in the docs

Matt Gowie avatar
Matt Gowie

It watches / polls the repo with an SSH Key. So it does a git clone / pull every X minutes.

jose.amengual avatar
jose.amengual

I guess my question is more like, : what is the proper and easier way to deploy apps in kubernetes

jose.amengual avatar
jose.amengual

I was getting confused by istio, but then I realized that if you are using helm then it will be just another file in the directory, using istio instead of an LB etc

Matt Gowie avatar
Matt Gowie

I definitely think Helm is the right answer from my less than 6 months in the game. It’s not perfect — the templates that it generates / documentation / best practices leave a lot to be desired, but it provides a solid way to create a templated app for your company that all services / projects need to conform to.

Matt Gowie avatar
Matt Gowie

Check out Cloud Posse’s monochart. My client had already created unique charts for 20+ services using helm generate (or whatever the command is) and that is a horrible idea. Using a monochart is the way I would go in the future.

1
jose.amengual avatar
jose.amengual

awesome, I will check that out

jose.amengual avatar
jose.amengual
cloudposse/charts

The “Cloud Posse” Distribution of Kubernetes Applications - cloudposse/charts

Matt Gowie avatar
Matt Gowie

Yeah — That’s it.

Matt Gowie avatar
Matt Gowie

It’s worth either using that and contributing back or a lot of people take that pattern and make one for their company.

jose.amengual avatar
jose.amengual

mmm I’m confused

jose.amengual avatar
jose.amengual

how do you run this? Using helm, or another tool to render to yaml first, and then you install the yaml?

mfridh avatar

Using helm. It is a helm chart that includes other helm charts. See requirements.yaml - this chart depends on just one sub-chart, so it’s actually not totally clear what it demonstrates.

The basics is that you pass in your own “stack” values.yaml with override parameters for each of the included child helm charts. This has been a common pattern for a couple of years.

You could render to yaml first if you want to “review” the resulting diff before shipping to the cluster. I would consider using kapp if UX is important when/if starting out manually experimenting.

jose.amengual avatar
jose.amengual

So same way that terraform modules work basically

jose.amengual avatar
jose.amengual

One requirements.yaml (main.tf) that instantiates other charts (modules), and there are overrides (variables.tf) etc… as an analogy

2020-12-29

jose.amengual avatar
jose.amengual

Does anyone here know about Kubernetes deployments with helm in air-gapped systems?

jose.amengual avatar
jose.amengual

What would be the recommended way, when using an EKS cluster for, let’s say, CI/CD or control plane management, if you wanted to keep the ingress in a private subnet? Will that work? (We keep our CI/CD systems behind a VPN, and since I was playing with ArgoCD I was using the port-forwarding option)

mfridh avatar

For ingress on a private subnet, put the ALB or NLB on private subnets. We do that. And we also limit the Kubernetes API endpoint to VPC-internal only.
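
Those internal load balancers are driven by Service annotations; a sketch (in-tree AWS provider annotations; the service name, selector, and ports are placeholders):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: argocd-server     # hypothetical
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-type: nlb
    service.beta.kubernetes.io/aws-load-balancer-internal: "true"
spec:
  type: LoadBalancer
  selector:
    app.kubernetes.io/name: argocd-server
  ports:
    - port: 443
      targetPort: 8080
```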

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Good question for #office-hours

jose.amengual avatar
jose.amengual

I won’t be in office hours, but I would like to know what opinions people have

venkata.mutyala avatar
venkata.mutyala

I’m actually in the same boat. @jose.amengual are you deploying a cluster just for argocd? I’m exploring the idea of just having a cluster dedicated to running argocd that connects to everything else. As for ingress i’m following the suggestion that mfridh mentioned above

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Cool, let’s discuss today

Vlad Ionescu (he/him) avatar
Vlad Ionescu (he/him)
Application Access with Teleport

Quick access to web applications running behind NAT and firewalls with security and compliance

Matt Gowie avatar
Matt Gowie

Another way to skin this cat — Haven’t implemented it yet, but I’m planning on skipping a BeyondCorp tool and doing VPN the modern way with WireGuard via Tailscale: https://tailscale.com/

Best VPN Service for Secure Networks - Tailscale

Tailscale is a zero config VPN for building secure networks. Install on any device in minutes. Remote access from any network or physical location.

mfridh avatar

https://github.com/slackhq/nebula is growing too in that space

slackhq/nebula

A scalable overlay networking tool with a focus on performance, simplicity and security - slackhq/nebula

joey avatar

i’m curious to see what others recommend here, too.. i was chatting with someone last night about https://www.mysocket.io/ , https://github.com/inlets/inlets-operator , and https://ngrok.com/

inlets/inlets-operator

Add public LoadBalancers to your local Kubernetes clusters - inlets/inlets-operator

ngrok - secure introspectable tunnels to localhost

ngrok secure introspectable tunnels to localhost webhook development tool and debugging tool

jose.amengual avatar
jose.amengual

I’m trying to match the environment we have, since I think it’s a good idea to isolate the environments, but I don’t want something cumbersome to use

jose.amengual avatar
jose.amengual

The problem I have with WireGuard is that it’s still beta, or less than 1.0, isn’t it? At any rate, it’s too new to be on the list of accepted/approved VPN servers for bigcorp companies, so we could fail a certification if that’s the case

jose.amengual avatar
jose.amengual

thanks for all the recommendations in office hours

jose.amengual avatar
jose.amengual

@Matt Gowie I was reading the Tailscale docs and I saw this: https://tailscale.com/kb/1021/install-aws which is basically similar to installing a VPN server, but the UI lives in a SaaS interface

jose.amengual avatar
jose.amengual

and this CloudFlare Access example is really painful: https://developers.cloudflare.com/access/protocols-and-connections/kubectl

kubectl · Cloudflare Access docs

Welcome to Cloudflare Access. You can now make all your applications available on the internet without a VPN. Access protects these applications and allows only authorized users to access them. For example, Cloudflare uses Access to ensure only people at Cloudflare can access internal tools like our staging site.

jose.amengual avatar
jose.amengual

Cloudflare Access is nice for sites that are public and that only allow Cloudflare Access IPs to enter

jose.amengual avatar
jose.amengual

if they are internal, then the tunnel nightmare starts

jose.amengual avatar
jose.amengual

Teleport is similar too: it requires a proxy, a client, a local tunnel-like command, etc.

jose.amengual avatar
jose.amengual

I’m not convinced these solutions are better than a VPN server when you have different protocols, not just different applications

jose.amengual avatar
jose.amengual

When I see the instructions I can’t stop thinking: “I will need to write new docs for people to use it, I will need to terraform the setup of this, I will need to add all my apps that are public and policies for them, I will need to add guides for ssh and kubectl, I will need to add a proxy for services in private subnets, etc.” and the only thing I get is a nice SaaS-managed, SSO-integrated UI

jose.amengual avatar
jose.amengual

Sorry to sound so negative, but I looked at the three products’ docs and they are all very similar in complexity when setting up within private subnets

jose.amengual avatar
jose.amengual

maybe I’m missing something very obvious

jose.amengual avatar
jose.amengual

this is another option https://pritunl.com/

jose.amengual avatar
jose.amengual

which is based on WireGuard too

Darren Cunningham avatar
Darren Cunningham

I completed a POC of StrongDM. It was really easy to set up, but the pricing is a bit rough IMO

Matt Gowie avatar
Matt Gowie

Wow — looks cool, but yeah that’s some bold pricing.

Darren Cunningham avatar
Darren Cunningham

Pricing was my only gripe; it did everything it said it would do and the setup was simple. But there are a lot of other options in the ZTNA/SDP space that are $5-20 per user

2020-12-30

organicnz avatar
organicnz

What would be the most effective open-source approach for running Kubernetes on KVM, a hypervisor, or LXC in-house on a home lab?

mfridh avatar

I know for sure if I ask one of my friends he will say https://www.proxmox.com/en/ and I’m starting to believe him

Proxmox - Powerful open-source server solutions

Proxmox develops the open-source virtualization platform Proxmox VE and the Proxmox Mail Gateway, an open-source email security solution to protect your mail server.

mfridh avatar

Or hmm. Maybe VMs wasn’t what you asked for

Joe Niland avatar
Joe Niland

Does it have to be on prem? I believe you can run k8s cheaply on Digital Ocean.

mfridh avatar

That won’t give you any /r/homelab points

Joe Niland avatar
Joe Niland

Touche

organicnz avatar
organicnz

@mfridh yeah, I’ve heard only positive feedback about Proxmox from the community. I should learn more then. Yep, could be VMs or lightweight LXC. Thanks

Joe Niland avatar
Joe Niland

@organicnz curious what hardware you’re planning to use

organicnz avatar
organicnz

@Joe Niland I deployed a few tools on DO’s Kubernetes cluster before and I’m still using their droplets. It’s pretty easy to manage, but now I need more resources to handle different tasks at a smaller cost, which is a pain in the butt lol

organicnz avatar
organicnz

@Joe Niland Asus Z270 (LGA 1151), i7, 16 GB DDR4, 512 GB SSD, Nvidia GeForce 1070. Ubuntu 20.04, 100 Mb internet bandwidth.

I’ll add another 16 GB of RAM, I guess

kskewes avatar
kskewes

I use the Terraform libvirt provider for creating KVM hosts. I previously used kubespray for the k8s install on the VMs, plus separate bare-metal ARM SBCs. Now I’ve dumped the ARM stuff and I’m looking at what various k8s projects can do with, or close to, Terraform. Currently investigating Kinvolk’s Lokomotive.

organicnz avatar
organicnz

@kskewes sounds awesome, mate. I’d love to use Terraform with libvirt or LXC locally, but I don’t have expertise provisioning on-premises yet, only on clouds. Could you share your code please so I can try it out? What are your thoughts on LXC? Don’t you think it’s more effective to use on Linux?

kskewes avatar
kskewes

Looks like I might not have pushed in a month or so, but it’s here: https://github.com/kskewes/home-lab/tree/master/terraform I provision a matchbox PXE server and then PXE-boot a bunch of VMs for k8s. The provider itself has various examples.

I haven’t used LXC before.

kskewes/home-lab

Create and maintain a multi-arch Kubernetes cluster utilizing Gitlab CI/CD tools where possible. - kskewes/home-lab
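
For reference, a minimal standalone sketch of the dmacvicar/libvirt provider (not the actual home-lab config; the names, sizes, and image URL are illustrative):

```hcl
terraform {
  required_providers {
    libvirt = {
      source  = "dmacvicar/libvirt"
      version = "0.6.3"
    }
  }
}

# Talk to the local system libvirt daemon
provider "libvirt" {
  uri = "qemu:///system"
}

# A disk volume backed by an upstream cloud image
resource "libvirt_volume" "ubuntu" {
  name   = "ubuntu.qcow2"
  pool   = "default"
  source = "https://cloud-images.ubuntu.com/focal/current/focal-server-cloudimg-amd64.img"
}

# A KVM guest suitable as a k8s node
resource "libvirt_domain" "k8s_node" {
  name   = "k8s-node-1"
  memory = 4096
  vcpu   = 2

  disk {
    volume_id = libvirt_volume.ubuntu.id
  }

  network_interface {
    network_name = "default"
  }
}
```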

organicnz avatar
organicnz

Yeah, it drops this error. Looks like the provider is specified fine; I assume it doesn’t need any change. Running on a host machine

terraform init -upgrade
Upgrading modules...
- k8s_controllers in ../modules/netboot
- k8s_workers in ../modules/netboot

Initializing the backend...

Initializing provider plugins...
- Finding dmacvicar/libvirt versions matching "0.6.3"...

Error: Failed to query available provider packages

Could not retrieve the list of available versions for provider
dmacvicar/libvirt: provider registry registry.terraform.io does not have a
provider named registry.terraform.io/dmacvicar/libvirt

If you have just upgraded directly from Terraform v0.12 to Terraform v0.14
then please upgrade to Terraform v0.13 first and follow the upgrade guide for
that release, which might help you address this problem.
kskewes avatar
kskewes

Did you install the provider binary? There are specific instructions for each OS. We are working on a pure-Go version of the binary to make it automatic on init. The PR is WIP.
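
On Terraform 0.13+, the manual install mentioned here roughly means placing the binary in the implied local plugin mirror; a sketch for Linux (the version matches the error output above, but check the provider’s own install docs for your OS and release):

```shell
# Directory layout Terraform 0.13+ expects for locally installed providers
VERSION=0.6.3
PLUGIN_DIR=~/.local/share/terraform/plugins/registry.terraform.io/dmacvicar/libvirt/${VERSION}/linux_amd64
mkdir -p "${PLUGIN_DIR}"

# Download the matching release binary from the provider's GitHub releases
# (exact asset name varies per release), then drop it into place:
cp terraform-provider-libvirt "${PLUGIN_DIR}/"

terraform init   # should now resolve dmacvicar/libvirt locally
```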

kskewes avatar
kskewes

Also, I run with a small tfvars file that isn’t committed to git, so the defaults may not always be tested.

organicnz avatar
organicnz

As far as I know it should be installed automatically after terraform init?

organicnz avatar
organicnz

I’ll add tfvars then thanks for letting me know

kskewes avatar
kskewes

There are specific install instructions for the provider that you will need to follow for now, until the above PR is merged.

Thinking about it, I haven’t built my home-lab repo for others to consume without a good understanding of what’s described, so perhaps it’s easier to work from the provider plugin examples?

organicnz avatar
organicnz

Yeah, sounds more rational

kskewes avatar
kskewes

But if you continue with this and circle back to my repo and have questions about what I’ve done, do hit me up. It’d be useful for me too.

kskewes avatar
kskewes

My basic structure is:

  1. Bring up matchbox pxe host VM.
  2. Start lokomotive or similar k8s provisioning system which loads config into matchbox over its API.
  3. Bring up k8s nodes as raw vms with network boot. (Ie from matchbox)
  4. Fetch kubeconfig and interact with cluster.

Currently Lokomotive bare metal is stuck, and I need to have another go and update the upstream issue.
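
Step 2 above, loading config into matchbox over its API, can be sketched with the poseidon/matchbox Terraform provider (a rough sketch; the endpoint, cert paths, kernel/initrd assets, and MAC address are all illustrative):

```hcl
# Authenticate to the matchbox gRPC API with client TLS certs
provider "matchbox" {
  endpoint    = "matchbox.example.com:8081"
  client_cert = file("~/.matchbox/client.crt")
  client_key  = file("~/.matchbox/client.key")
  ca          = file("~/.matchbox/ca.crt")
}

# A boot profile: kernel, initrd, and kernel args served over PXE
resource "matchbox_profile" "worker" {
  name   = "worker"
  kernel = "/assets/flatcar/current/flatcar_production_pxe.vmlinuz"
  initrd = ["/assets/flatcar/current/flatcar_production_pxe_image.cpio.gz"]
  args   = ["flatcar.first_boot=yes", "console=tty0"]
}

# Match machines to the profile by MAC address
resource "matchbox_group" "worker1" {
  name    = "worker1"
  profile = matchbox_profile.worker.name
  selector = {
    mac = "52:54:00:a1:9c:ae"
  }
}
```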

organicnz avatar
organicnz

Jeez, I have to learn those tools to understand what’s going on lol. I’d never heard of matchbox; however, someone recently mentioned the Lokomotive tool at least, ahah :))

kskewes avatar
kskewes

Yeah it’s a lot of work. What area of the stack do you want to learn about?

organicnz avatar
organicnz

Currently learning k8s to migrate from the DO cloud to my own small k3s Rancher home lab, as I explained earlier. BTW, one nice engineer suggested I spin up minikube or k3s and focus more on the k8s API rather than kind of waste my time. So I decided: this guy is experienced, he knows his stuff, let’s cut off the pain we’ve been trying to solve for a few days with my previous setup lol

organicnz avatar
organicnz

Migrating a bunch of WordPress websites in prod, and Jenkins for testing. Also, we need a private GitLab CI for a test repo, to run some pipelines, backups, etc.

organicnz avatar
organicnz

Nothing fancy for now lol

kskewes avatar
kskewes

100% - straight to kind or minikube or anything else.

organicnz avatar
organicnz

Thanks bro

btai avatar

By any chance, does anyone here have a multi-region kubernetes setup that still uses wildcard DNS? I have a single cluster with hundreds of ingresses like foo.example.com or bar.example.com and I had been thinking about moving to a multi-region setup where half of the ingresses would live in us-east and half in us-west, but would like to keep the wildcard dns setup as to not need to create a bunch of route53 records. I can’t use Route53 geo-based routing as users that have their site hosted in us-east could be accessing their site from a different location (i.e. california). To clarify, the reason that I want to add a cluster in a second region is to minimize blast radius and not for redundancy (foo.example.com would only live on the us-east cluster OR the us-west cluster but not both)

2020-12-31
