#argocd (2021-02)

2021-02-01

Pierre-Yves
08:01:45 AM

@Pierre-Yves has joined the channel

Pierre-Yves

Thanks for creating this channel. I started using it in a test environment a week ago ;)

andrey.a.devyatkin
03:19:10 PM

@andrey.a.devyatkin has joined the channel

MattyB
03:38:44 AM

@MattyB has joined the channel

2021-02-02

Pierre-Yves

I just got ArgoCD RBAC working with Azure Active Directory groups, and the ArgoCD application registration in Azure was done with Terraform. I followed the ArgoCD documentation “Azure AD App Registration Auth using OIDC”: https://argoproj.github.io/argo-cd/operator-manual/user-management/microsoft/#azure-ad-app-registration-auth-using-oidc
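For reference, a minimal sketch of the RBAC side of that setup, i.e. mapping an Azure AD group to an ArgoCD role in the argocd-rbac-cm ConfigMap once the OIDC app registration from that doc is in place. The group object ID and role below are illustrative placeholders, not values from this thread:

apiVersion: v1
kind: ConfigMap
metadata:
  name: argocd-rbac-cm
  namespace: argocd
data:
  # Map an Azure AD group (referenced by its object ID) to a built-in ArgoCD role.
  # "00000000-0000-0000-0000-000000000000" is a placeholder group object ID.
  policy.csv: |
    g, "00000000-0000-0000-0000-000000000000", role:admin
  # Anyone not matched above only gets read-only access.
  policy.default: role:readonly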

geertn
09:14:23 AM

@geertn has joined the channel

johntellsall
06:53:07 PM

@johntellsall has joined the channel

2021-02-03

Patrick Jahns

Quick question - when bootstrapping a Kubernetes cluster (e.g. EKS), how many services need to be deployed up front for ArgoCD to be installed so it can take over the rest of the bootstrapping of the cluster? Are things like ingress etc. required, or can Argo potentially do most of the heavy lifting so I don’t have to manage these things via a different tool (e.g. Kubernetes manifests or Terraform)?

Adam Blackwell

We currently have a lot of bits and pieces in Terraform, and we set up Vault (which has a secret-zero problem related to how we use DynamoDB) and cert-manager via one-off applies; ArgoCD does the rest. Our ingresses use external-dns, but the load balancers are still in Terraform for now.
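To make the “ArgoCD does the rest” part of that split concrete, a common pattern is a single bootstrap “app of apps” Application that points at a Git directory of child Application manifests (ingress controller, external-dns, cert-manager, and so on). A minimal sketch; the repo URL and paths are made-up placeholders, not Adam’s actual setup:

apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: cluster-addons            # bootstrap "app of apps"
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://github.com/example-org/cluster-addons.git   # placeholder repo
    targetRevision: main
    path: apps                    # directory containing the child Application manifests
  destination:
    server: https://kubernetes.default.svc
    namespace: argocd
  syncPolicy:
    automated:
      prune: true
      selfHeal: true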

Adam Blackwell

variable "eks_cluster_name" { type = string } variable "eks_azs" { type = list(string) } variable "eks_worker_cidrs" { type = list(string) } variable "eks_controller_cidrs" { type = list(string) } variable "eks_public_loadbalancer_cidrs" { type = list(string) } variable "eks_internal_loadbalancer_cidrs" { type = list(string) }

module "eks-network" { source = "../../modules/common/eks-network" cluster_name = var.eks_cluster_name worker_subnet_cidrs = var.eks_worker_cidrs controller_cidrs = var.eks_controller_cidrs public_loadbalancer_subnet_cidrs = var.eks_public_loadbalancer_cidrs internal_load_balancer_subnet_cidrs = var.eks_internal_loadbalancer_cidrs availability_zones = var.eks_azs

vpc_id = module.vpc.id vpn_gateway_ids = [module.vpc.vpn_gw_id] internet_gateway_id = module.vpc.igw_id }

module "eks-cluster" { source = "terraform-aws-modules/eks/aws" version = "12.2.0" cluster_name = var.eks_cluster_name cluster_version = "1.18" subnets = module.eks-network.controller_subnet_ids vpc_id = module.vpc.id manage_aws_auth = true write_kubeconfig = false cluster_enabled_log_types = ["audit"]

map_roles = [ { rolearn = "arn:aws:iam:::role/tools-admin" username = "arn:aws:iam:::role/tools-admin" groups = ["system:masters"] }, { rolearn = "arn:aws:iam:::role/tools-admin-atlantis" username = "arn:aws:iam:::role/tools-admin-atlantis" groups = ["system:masters"] }, ]

node_groups_defaults = { ami_type = "AL2_x86_64" disk_size = 100 }

node_groups = { aws-eks-node-group-001 = { desired_capacity = 3 max_capacity = 20 min_capacity = 3 subnets = module.eks-network.worker_subnet_ids instance_type = "m5.2xlarge" k8s_labels = { name = "${var.environment}-${var.deployment}-eks" environment = var.environment deployment = var.deployment cluster = "eks" } } } tags = { name = "${var.environment}-${var.deployment}-eks" environment = var.environment deployment = var.deployment cluster = "eks" }

}

data "aws_eks_cluster" "cluster" { name = module.eks-cluster.cluster_id }

data "aws_eks_cluster_auth" "cluster" { name = module.eks-cluster.cluster_id }

provider "kubernetes" { host = data.aws_eks_cluster.cluster.endpoint cluster_ca_certificate = base64decode(data.aws_eks_cluster.cluster.certificate_authority.0.data) token = data.aws_eks_cluster_auth.cluster.token load_config_file = false version = "~> 1.10" }

resource "aws_iam_openid_connect_provider" "eks-cluster" { url = module.eks-cluster.cluster_oidc_issuer_url

client_id_list = [ "[sts.amazonaws.com](http://sts.amazonaws.com)", ]

thumbprint_list = [ "print1", "print2", ] }

module "eks_autoscaling_policy" { source = "../../modules/common/eks-autoscaling-policy" cluster_name = var.eks_cluster_name cluster_oidc_issuer_url = module.eks-cluster.cluster_oidc_issuer_url service_account_namespace = "cluster-autoscaler" service_account_name = "tools-cluster-autoscaler-aws-cluster-autoscaler" aws_account_id = var.aws_account_id }

# Specific routes

resource "aws_route" "route_k8s_workers_to_peer_vpcs" { route_table_id = element(module.eks-network.worker_route_table_ids, 0) destination_cidr_block = element(var.peer_vpc_cidr_blocks, count.index) vpc_peering_connection_id = element( aws_vpc_peering_connection.peering_connections.*.id, count.index, )

count = length(var.peer_vpc_ids) }

resource "aws_route" "route_k8s_workers_to_peer_vpcs_1" { route_table_id = element(module.eks-network.worker_route_table_ids, 1) destination_cidr_block = element(var.peer_vpc_cidr_blocks, count.index) vpc_peering_connection_id = element( aws_vpc_peering_connection.peering_connections.*.id, count.index, )

count = length(var.peer_vpc_ids) }

resource "aws_route" "route_k8s_workers_to_peer_vpcs_2" { route_table_id = element(module.eks-network.worker_route_table_ids, 2) destination_cidr_block = element(var.peer_vpc_cidr_blocks, count.index) vpc_peering_connection_id = element( aws_vpc_peering_connection.peering_connections.*.id, count.index, )

count = length(var.peer_vpc_ids) }

Adam Blackwell

I’d love to get feedback on how we could simplify that mess and suspect https://github.com/cloudposse/terraform-aws-eks-cluster could be useful sometime.


TBeijen

@Adam Blackwell Out of curiosity: we have load balancers via Terraform as well, but we’re not using external-dns currently (our DNS tool is external and not Terraform-controlled). I’m exploring this combination for future use.

How well does external-dns work, given that the ingress objects won’t have the load balancer address in their status?

Adam Blackwell

I believe that would depend heavily on which ingress controller you’re using, but it works very well for us. Let me refresh my memory of how that’s set up.

Adam Blackwell

Ah, right, we’re still using classic load balancers, and we have an ingress controller per LB with publishService.enabled: true set.

We’re using https://kubernetes.github.io/ingress-nginx, which looks like this: https://argoproj.github.io/argo-cd/operator-manual/ingress/#kubernetesingress-nginx, but we pull external-dns and ingress-nginx in as chart dependencies:

https://github.com/bitnami/charts/tree/master/bitnami/external-dns

dependencies:
- name: external-dns
  alias: external-dns-public
  version: <>
  repository: https://charts.bitnami.com/bitnami
  condition: external-dns-public.enabled

https://github.com/kubernetes/ingress-nginx/tree/master/charts/ingress-nginx

dependencies:
- name: ingress-nginx
  alias: public-ingress
  version: <>
  repository: https://kubernetes.github.io/ingress-nginx
  condition: public-ingress.enabled
public-ingress:
  controller:
    ingressClass: public   # Only implement ingresses that are explicitly marked as public
    publishService:
      enabled: true   # required for external-dns to map elb ingresses to dns names
    service:
      annotations:
        service.beta.kubernetes.io/aws-load-balancer-connection-idle-timeout: "300"
    extraArgs:
      enable-ssl-passthrough: ""
    autoscaling:
      enabled: true
      minReplicas: 3
      maxReplicas: 11
    resources:
      limits:
        cpu: 100m
        memory: 250Mi
      requests:
        cpu: 50m
        memory: 250Mi
    admissionWebhooks:
      enabled: false
      patch:
        enabled: false
  defaultBackend:
    resources:
      limits:
        cpu: 100m
        memory: 250Mi
      requests:
        cpu: 50m
        memory: 250Mi
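To show how those pieces meet: with publishService enabled, the controller copies the ELB hostname into each Ingress’s status.loadBalancer field, and external-dns reads that to create the DNS record. A minimal Ingress that the “public” controller above would serve might look like the following (written against the networking.k8s.io/v1beta1 API that matches the 1.18 cluster in this thread; the hostname and Service name are placeholders):

apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: example-app
  annotations:
    kubernetes.io/ingress.class: public   # picked up only by the "public" controller defined above
spec:
  rules:
  - host: app.example.com                 # external-dns creates this record from the ELB hostname in the Ingress status
    http:
      paths:
      - path: /
        backend:
          serviceName: example-app        # placeholder Service
          servicePort: 80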

TBeijen

Thx, will check it out. So far we’re coping fine with some wildcard DNS entries for test envs, but there are limits to what that can do, so I’m looking ahead a bit.

