#kubernetes (2022-08)

kubernetes

Archive: https://archive.sweetops.com/kubernetes/

2022-08-01

MSaad avatar

Hello :wave: Does anyone know, or can point me to, where I could find out how to delete Kubernetes resources based on age? I’m trying to build a cron job that would delete old services, pods, jobs, and configmaps in a specific namespace. So, for example, something that would get all pods in a specific namespace that are 2 days old and run a kubectl delete pods command on that list?

Dag Viggo Lokoeen avatar
Dag Viggo Lokoeen

This is not something I’ve done myself, but https://stackoverflow.com/questions/48934491/kubernetes-how-to-delete-pods-based-on-age-creation-time seems to have a viable approach based on kubectl and normal shell tools.

Kubernetes: How to delete PODs based on age/creation time

Is it possible to delete POD in kubernetes based on creation time or age? Example : I would like to delete all PODs which are older than 1 day. These PODs are orphaned , therefore no new PODs wil…
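
For reference, a minimal shell sketch along the lines of that answer (untested; assumes GNU date/xargs, and my-namespace is a placeholder). ISO-8601 timestamps sort lexicographically, so a plain string comparison picks out the old pods:

# Cutoff timestamp two days back (GNU date syntax)
cutoff="$(date -u -d '2 days ago' +%Y-%m-%dT%H:%M:%SZ)"

# Print "name creationTimestamp" per pod, keep the old ones, delete them.
# Swap "pods" for jobs/services/configmaps to clean those up too.
kubectl get pods -n my-namespace -o go-template \
  --template '{{range .items}}{{.metadata.name}} {{.metadata.creationTimestamp}}{{"\n"}}{{end}}' \
  | awk -v cutoff="$cutoff" '$2 <= cutoff {print $1}' \
  | xargs -r kubectl delete pods -n my-namespace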

roth.andy avatar
roth.andy

Does anyone have a working example that uses https://github.com/cloudposse/terraform-aws-eks-workers? The example in the repo isn’t a full end-to-end example; it just stands up the workers but doesn’t actually stand up a cluster.

cloudposse/terraform-aws-eks-workers

Terraform module to provision an AWS AutoScaling Group, IAM Role, and Security Group for EKS Workers

roth.andy avatar
roth.andy

Describe the Feature

Would like the ability to look at a working end-to-end example, so that I can start from a baseline of something that works as I build.

Expected Behavior

I expected that the examples in the “examples/complete” folder would include creation of an EKS cluster as well as worker nodes using this module. In actuality it creates a VPC, subnets, and the worker nodes, but no cluster.

I’m currently struggling to get this module working in my environment, and don’t have a working example to refer to.

Use Case

I’m trying to stand up an EKS cluster using the Cloud Posse module, with workers using this module. I can’t use the managed node group module since I need to configure the worker nodes with dedicated tenancy.

I’m currently struggling to get it working. The cluster comes up fine, and the instances start fine, but they never show up as nodes in the cluster (e.g. kubectl get nodes returns nothing).

Describe Ideal Solution

A working example exists that I can use as a baseline to build from.

Alternatives Considered

Continue troubleshooting my setup without being able to refer to a working example

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

There is an old release that uses unmanaged workers (this was before AWS introduced managed node groups): https://github.com/cloudposse/terraform-aws-eks-cluster/blob/0.21.0/examples/complete/main.tf

provider "aws" {
  region = var.region
}

module "label" {
  source     = "git::<https://github.com/cloudposse/terraform-null-label.git?ref=tags/0.16.0>"
  namespace  = var.namespace
  name       = var.name
  stage      = var.stage
  delimiter  = var.delimiter
  attributes = compact(concat(var.attributes, list("cluster")))
  tags       = var.tags
}

locals {
  # The usage of the specific kubernetes.io/cluster/* resource tags below are required
  # for EKS and Kubernetes to discover and manage networking resources
  # https://www.terraform.io/docs/providers/aws/guides/eks-getting-started.html#base-vpc-networking
  tags = merge(module.label.tags, map("kubernetes.io/cluster/${module.label.id}", "shared"))

  # Unfortunately, the most_recent variable (https://github.com/cloudposse/terraform-aws-eks-workers/blob/34a43c25624a6efb3ba5d2770a601d7cb3c0d391/main.tf#L141)
  # does not work as expected. If you are not going to use a custom AMI, you should
  # set the eks_worker_ami_name_filter variable to select the right Kubernetes version for the EKS workers;
  # otherwise the first Kubernetes version supported by AWS for EKS workers (v1.11) will be used, while the
  # EKS control plane will use the version specified by the kubernetes_version variable.
  eks_worker_ami_name_filter = "amazon-eks-node-${var.kubernetes_version}*"
}

module "vpc" {
  source     = "git::<https://github.com/cloudposse/terraform-aws-vpc.git?ref=tags/0.8.1>"
  namespace  = var.namespace
  stage      = var.stage
  name       = var.name
  attributes = var.attributes
  cidr_block = "172.16.0.0/16"
  tags       = local.tags
}

module "subnets" {
  source               = "git::<https://github.com/cloudposse/terraform-aws-dynamic-subnets.git?ref=tags/0.19.0>"
  availability_zones   = var.availability_zones
  namespace            = var.namespace
  stage                = var.stage
  name                 = var.name
  attributes           = var.attributes
  vpc_id               = module.vpc.vpc_id
  igw_id               = module.vpc.igw_id
  cidr_block           = module.vpc.vpc_cidr_block
  nat_gateway_enabled  = false
  nat_instance_enabled = false
  tags                 = local.tags
}

module "eks_workers" {
  source                             = "git::<https://github.com/cloudposse/terraform-aws-eks-workers.git?ref=tags/0.12.0>"
  namespace                          = var.namespace
  stage                              = var.stage
  name                               = var.name
  attributes                         = var.attributes
  tags                               = var.tags
  instance_type                      = var.instance_type
  eks_worker_ami_name_filter         = local.eks_worker_ami_name_filter
  vpc_id                             = module.vpc.vpc_id
  subnet_ids                         = module.subnets.public_subnet_ids
  associate_public_ip_address        = var.associate_public_ip_address
  health_check_type                  = var.health_check_type
  min_size                           = var.min_size
  max_size                           = var.max_size
  wait_for_capacity_timeout          = var.wait_for_capacity_timeout
  cluster_name                       = module.label.id
  cluster_endpoint                   = module.eks_cluster.eks_cluster_endpoint
  cluster_certificate_authority_data = module.eks_cluster.eks_cluster_certificate_authority_data
  cluster_security_group_id          = module.eks_cluster.security_group_id

  # Auto-scaling policies and CloudWatch metric alarms
  autoscaling_policies_enabled           = var.autoscaling_policies_enabled
  cpu_utilization_high_threshold_percent = var.cpu_utilization_high_threshold_percent
  cpu_utilization_low_threshold_percent  = var.cpu_utilization_low_threshold_percent
}

module "eks_cluster" {
  source                       = "../../"
  namespace                    = var.namespace
  stage                        = var.stage
  name                         = var.name
  attributes                   = var.attributes
  tags                         = var.tags
  region                       = var.region
  vpc_id                       = module.vpc.vpc_id
  subnet_ids                   = module.subnets.public_subnet_ids
  kubernetes_version           = var.kubernetes_version
  local_exec_interpreter       = var.local_exec_interpreter
  oidc_provider_enabled        = var.oidc_provider_enabled
  enabled_cluster_log_types    = var.enabled_cluster_log_types
  cluster_log_retention_period = var.cluster_log_retention_period

  workers_role_arns          = [module.eks_workers.workers_role_arn]
  workers_security_group_ids = [module.eks_workers.security_group_id]
}

roth.andy avatar
roth.andy

Sweet. Thanks!

2022-08-04

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

What is Kubernetes


2022-08-08

RO avatar

Hi guys, for someone who needs to be ready to work in a Kubernetes environment, knowing how to navigate an existing or a new environment: I understand there are tutorials and such, but I’m looking more for a step-by-step guide that shows a real use case in a real company environment. Would someone give me a hand and some guidance?

RO avatar

Sorry if that was too vague. I understand a lot of people already come here with something specific; the thing is, I need something ASAP, as I won’t have a lot of time to test what’s best.

2022-08-09

Igor M avatar

Does anyone use Terratest or similar for doing end-to-end tests on Kubernetes (e.g. Helm chart updates)?

2022-08-10

Mocanu Marian avatar
Mocanu Marian

Hello everyone! I have a question about how to bind specific ports on a Kubernetes cluster that runs inside an LXC container, using ingress. I’m trying to run Wazuh on a local server where I have a single-node k8s cluster. Wazuh uses ports like 1514, 1515, and 1516, and I can’t use NodePort because the agent that needs to be installed on user machines tries to connect to those exact ports. I’ve tried using upstream and tcp/udp-services but with no luck. Whenever I try to create an ingress for a specific port, it ends up pointing to port 80. I was only able to point the service to a different path (IP/something). Any ideas?
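
For what it’s worth, an Ingress resource only routes HTTP/HTTPS, which is why everything keeps landing on port 80; raw TCP ports like 1514/1515/1516 go through the controller’s tcp-services ConfigMap instead. A rough sketch, assuming the controller runs in the ingress-nginx namespace and the Wazuh manager Service is wazuh-manager in the wazuh namespace (both names are guesses, adjust to your install):

apiVersion: v1
kind: ConfigMap
metadata:
  name: tcp-services        # the controller must be started with
  namespace: ingress-nginx  # --tcp-services-configmap=ingress-nginx/tcp-services
data:
  # "<external port>": "<namespace>/<service name>:<service port>"
  "1514": "wazuh/wazuh-manager:1514"
  "1515": "wazuh/wazuh-manager:1515"
  "1516": "wazuh/wazuh-manager:1516"

The same ports also have to be exposed on the controller’s own Service (and forwarded into the LXC container) before agents can reach them.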

Sean avatar

Likely dumb question from a kustomize noob.

Q: Is it possible to patch a kustomization before generation? (client-side)

Scenario: I have this in ./base/kustomization.yaml:

apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization

helmCharts:
- name: my-service
  releaseName: my-service
  namespace: my-service

## ... other kustomizations

Which I want to use across many clusters, for example:

envs/dev/my-service/kustomization.yaml
envs/int/my-service/kustomization.yaml
envs/stg/my-service/kustomization.yaml
envs/prd/my-service/kustomization.yaml

But I want to alter the releaseName and possibly other values. Kustomize can set a global namePrefix or nameSuffix, but that doesn’t replace the entire release name.
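
As far as I can tell, kustomize doesn’t patch a base’s kustomization.yaml before generation; the helmCharts block is evaluated wherever it is declared. A common workaround is to move the chart declaration out of base and repeat it per environment, so each overlay owns its releaseName and values. A sketch, assuming the layout above and a hypothetical values-dev.yaml:

# envs/dev/my-service/kustomization.yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization

# The chart is rendered here, so releaseName and values can differ per env.
helmCharts:
- name: my-service
  releaseName: my-service-dev
  namespace: my-service
  valuesFile: values-dev.yaml

# base keeps only the non-chart kustomizations (its helmCharts block is
# moved out, otherwise the chart would render twice).
resources:
- ../../../base

Rendering then needs the Helm integration enabled: kustomize build --enable-helm envs/dev/my-service.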

2022-08-30

Gabriel avatar
Gabriel

Is it possible to configure global basic auth for all ingresses without having to configure it for every ingress?

Gabriel avatar
Gabriel

The cluster uses the ingress-nginx controller.
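
ingress-nginx has no controller-wide equivalent of the auth-type: basic annotation, but it does support global external authentication through the controller ConfigMap, so one option is to point global-auth-url at a small internal service that itself enforces basic auth. A sketch, assuming the controller’s ConfigMap is ingress-nginx-controller in the ingress-nginx namespace and a hypothetical basic-auth-service:

apiVersion: v1
kind: ConfigMap
metadata:
  name: ingress-nginx-controller   # name/namespace depend on your install
  namespace: ingress-nginx
data:
  # Every request on every Ingress is auth-checked against this endpoint
  # first; a 2xx response lets the request through, anything else is denied.
  # basic-auth-service is hypothetical: any service that returns 401 with a
  # WWW-Authenticate: Basic header until valid credentials arrive will do.
  global-auth-url: "http://basic-auth-service.auth.svc.cluster.local/validate"

Individual ingresses can then opt out with the nginx.ingress.kubernetes.io/enable-global-auth: "false" annotation.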

MSaad avatar

Hello, does anyone know if there is a way to run some kubectl commands that would compare a deployed Helm release running in your GKE cluster (with its values.yaml) against, let’s say, the latest released version of an external Helm chart (e.g. prometheus)?

Gabriel avatar
Gabriel

What you could do is:
• Render the new version (not yet deployed) using helm template
• Render the currently deployed version using helm get manifest
• Compare the two with e.g. the diff command
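
Spelled out as commands, a rough sketch; the prometheus release in the monitoring namespace and the prometheus-community repo are assumptions, adjust to your setup:

helm repo add prometheus-community https://prometheus-community.github.io/helm-charts
helm repo update

# 1. What is currently running in the cluster
helm get manifest prometheus -n monitoring > deployed.yaml

# 2. What the latest chart would render with your values
helm template prometheus prometheus-community/prometheus \
  -n monitoring -f values.yaml > candidate.yaml

# 3. Compare
diff -u deployed.yaml candidate.yaml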

akhan4u avatar
akhan4u

There is also a Helm plugin for this: https://github.com/databus23/helm-diff

databus23/helm-diff

A helm plugin that shows a diff explaining what a helm upgrade would change
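
For example, with the same assumed release and chart names as above:

helm plugin install https://github.com/databus23/helm-diff

# Shows what upgrading the release to the latest chart version would change
helm diff upgrade prometheus prometheus-community/prometheus -f values.yaml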
