#kubernetes (2019-10)
Archive: https://archive.sweetops.com/kubernetes/
2019-10-01
i am using the terraform-aws-elasticsearch module from the cloudposse repository in my stack and i love it. congratulations to those involved!!
Awesome! We use that one all the time
It’s great with fluentd and k8s
yeah, so, i ran into some trouble: when the module creates the Kibana CNAME record on route53, the path /_plugin/kibana must not be part of the record.
there is an issue open to fix it: https://github.com/cloudposse/terraform-aws-elasticsearch/issues/14
When dns_zone_id is supplied, the module attempts to create a CNAME Route53 record for the domain's Kibana endpoints. These endpoints look like "xxx.<region>.es.amazonaws.com/_plugin…
must be just [vpc-sb-shared-elasticsearch-6m6ftgtu6n74l3dh3drw3vwmvq.us-east-1.es.amazonaws.com](http://vpc-sb-shared-elasticsearch-6m6ftgtu6n74l3dh3drw3vwmvq.us-east-1.es.amazonaws.com)
@Andriy Knysh (Cloud Posse) this looks like a bug
that’s odd though since we deploy this regularly
this is a feature
Terraform module to provision an Elasticsearch cluster with built-in integrations with Kibana and Logstash. - cloudposse/terraform-aws-elasticsearch
both use the same domain name [testing.cloudposse.co](http://testing.cloudposse.co)
TestExamplesComplete 2019-07-28T22:37:01Z command.go:121: domain_hostname = es-test.testing.cloudposse.co
TestExamplesComplete 2019-07-28T22:37:01Z command.go:121: kibana_hostname = kibana-es-test.testing.cloudposse.co
we don’t add /_plugin/kibana to it
we add it in the helmfiles
one of those could be removed since they point to the same thing
[es-test.testing.cloudposse.co](http://es-test.testing.cloudposse.co) is the ES domain endpoint
right, but I think @ruan.arcega is saying the cname was created automatically with the /_plugin/kibana
which is wrong
[es-test.testing.cloudposse.co](http://es-test.testing.cloudposse.co)/_plugin/kibana would be the Kibana URL
right, but look at his screenshot from route53
i see it. Maybe something has changed on the AWS side. We last deployed it a few months ago
so our DNS is pointing to the wrong output
should it be using domain_name?
domain_name is not a URL, it’s just the name of the ES domain
we have vpc-xxx-xxxxx-elasticsearch-xxxx.eu-west-2.es.amazonaws.com/_plugin/kibana/ as the CNAME and it’s working
(I mean AWS accepted the record before and accepting it now)
yea, so it’s accepting the record
but the record is still garbage
| Type | Domain Name | Canonical Name | TTL |
| --- | --- | --- | --- |
| CNAME | kibana-elasticsearch.eu-west-2.xxx.xxx.io | vpc-xxx-xxx-elasticsearch-xxxx.eu-west-2.es.amazonaws.com/_plugin/kibana/ | |
resolution works too
but I agree since those are the same, one could be removed
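For anyone skimming this later, a minimal sketch of the corrected record in Terraform (resource and variable names here are hypothetical): the CNAME value must be the bare ES domain endpoint, because DNS records cannot carry a URL path; /_plugin/kibana only belongs in the browser URL.

```hcl
# Hypothetical example: point the CNAME at the bare ES endpoint.
resource "aws_route53_record" "kibana" {
  zone_id = var.dns_zone_id
  name    = "kibana-es-test"
  type    = "CNAME"
  ttl     = 300

  # Correct: the domain endpoint only. ".../_plugin/kibana" must not appear here.
  records = [aws_elasticsearch_domain.default.endpoint]
}
```

You would then reach Kibana at https://kibana-es-test.&lt;zone&gt;/_plugin/kibana.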
2019-10-02
Does sweetops have any terraform module to create a Kubernetes cluster using kops?
No, they use kops from the cli to provision kubernetes.
That’s true, however they still set up a lot of dependent resources with terraform. See:
https://github.com/cloudposse/terraform-root-modules/tree/master/aws/kops
and
https://github.com/cloudposse/terraform-root-modules/tree/master/aws/kops-aws-platform
and there are other modules in that same repo to assist kops with some stuff.
but correct, no automation of kops itself
Ya, we haven’t automated kops because what kops does, it does better than terraform
It’s purpose built for managing the lifecycle of the cluster with the business logic of how to do updates. Terraform is more like a bulldozer.
2019-10-03
2019-10-04
I got a tricky one for you peeps.. At a high level, I need a static IP (Elastic IP) in front of a k8s service or ingress.
aws-alb-ingress-controller doesn’t help since ALBs can’t use EIPs out of the box.. (yes, you can put an NLB in front of it.. and have a lambda function keep the NLB target group up to date with the ALB IPs.. https://aws.amazon.com/blogs/networking-and-content-delivery/using-static-ip-addresses-for-application-load-balancers/)
Using nlb annotations in a svc is feature-poor even with the latest version of EKS (k8s 1.14) and doesn’t properly attach EIPs to the NLB.
What else should I look at? Things that sound nice but I’ve never touched before (CRDs, Operators, etc..) could maybe help.. or not? What do you think?
Does it have to be an IP? Can it be a domain name? nginx-ingress controller works really well. Set up a domain in Route53 and use nginx-ingress controller, so your service is myservice.example.com, or whatever you want it to be.
Yeah, IP. Someone needs to whitelist our IP for an integration.
For inbound traffic @Ryan? As in the integration is going to PUSH to your IP?
@Cameron Boulton exactly
Huh. I agree with Pepe: Global Accelerator is probably your best bet.
interesting
lemme take a look at that.. haven’t heard of it
Alternatively.. I can use terraform to stand up an NLB + EIPs.. then use a lambda function or some code somewhere to constantly update the NLB target group with the results from kubectl get nodes
2019-10-05
This should be possible today using simple nginx ingress with the right annotations
it’s not available on k8s 1.14 which is the highest eks version
service.beta.kubernetes.io/aws-load-balancer-type: nlb
service.beta.kubernetes.io/aws-load-balancer-eip-allocations: "eipalloc-07e3afcd4b7b5d644,eipalloc-0d9cb0154be5ab55d,eipalloc-0e4e5ec3df81aa3ea"
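For context, a sketch of how those annotations hang off a LoadBalancer Service, written with the Terraform kubernetes provider to match the rest of the thread (the service name, selector, and eipalloc IDs are placeholders):

```hcl
resource "kubernetes_service" "nginx_ingress" {
  metadata {
    name      = "nginx-ingress"
    namespace = "ingress"

    annotations = {
      "service.beta.kubernetes.io/aws-load-balancer-type"            = "nlb"
      # One EIP allocation per subnet/AZ the NLB spans (placeholder IDs)
      "service.beta.kubernetes.io/aws-load-balancer-eip-allocations" = "eipalloc-07e3afcd4b7b5d644,eipalloc-0d9cb0154be5ab55d,eipalloc-0e4e5ec3df81aa3ea"
    }
  }

  spec {
    type = "LoadBalancer"

    selector = {
      app = "nginx-ingress"
    }

    port {
      port        = 443
      target_port = 443
    }
  }
}
```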
Ah so need to run a newer version of k8s not supported by eks
The issue points to the closed issue reported here: #63959. I tested this but it’s not working correctly and ingress is not respecting the annotations: I have a hard time getting this working with NL…
Introduction In August 2016, Elastic Load Balancing launched Application Load Balancer (ALB), which enable many layer 7 features for your HTTP traffic. People use Application Load Balancers because they scale automatically to adapt to changes in your traffic. This makes planning for growth easy, but it has a side effect of changing the IP addresses […]
I referenced this one initially. It is an option i’m considering
it’s pretty gnarly, but definitely last resort
I appreciate you sharing this
it is so much easier to use global accelerator
thank you, I’m taking a look. I haven’t heard of it before
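For anyone who lands here later, a rough Terraform sketch of the Global Accelerator approach, assuming an existing NLB (all names here are illustrative). The accelerator’s ip_sets attribute exposes the static anycast IPs you would hand out for whitelisting:

```hcl
resource "aws_globalaccelerator_accelerator" "default" {
  name            = "k8s-ingress"
  ip_address_type = "IPV4"
  enabled         = true
}

resource "aws_globalaccelerator_listener" "tls" {
  accelerator_arn = aws_globalaccelerator_accelerator.default.id
  protocol        = "TCP"

  port_range {
    from_port = 443
    to_port   = 443
  }
}

resource "aws_globalaccelerator_endpoint_group" "default" {
  listener_arn = aws_globalaccelerator_listener.tls.id

  endpoint_configuration {
    endpoint_id = aws_lb.ingress.arn # hypothetical pre-existing NLB
    weight      = 100
  }
}

# The static IPs to give to the integration partner for whitelisting
output "static_ips" {
  value = aws_globalaccelerator_accelerator.default.ip_sets
}
```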
2019-10-07
Yea, 80% of infra solutions are like this: people fall back on what they know and build Rube Goldberg machines for problems that have already been solved.
2019-10-09
Hey all, not sure if this belongs in this channel so please let me know if it’s not the place, but I just opened up a neat feature PR for the cloudposse/prometheus-to-cloudwatch app - if anyone uses it and has some time to give some feedback I would really appreciate it, thanks! https://github.com/cloudposse/prometheus-to-cloudwatch/pull/28
Closes #27 This feature allows users to exclude a set of dimensions from metrics. It should be easy enough to add a dimensions whitelist as well, which seems to be in the style of this application,…
@Andriy Knysh (Cloud Posse) will review
@Austin Cawley-Edwards thanks for the contribution
2019-10-10
Awesome, thank you both!
cross posting from #security because it is relevant here: https://sweetops.slack.com/archives/CBXSAR45B/p1570720099000200
A new vulnerability has been discovered within the Kubernetes API. This flaw is centered around the parsing of YAML manifests by the Kubernetes API server. During this process the API server is open to potential Denial of Service (DoS) attacks. The issue (CVE-2019-11253 — which has yet to have any details fleshed out on the page) has been labeled a ‘Billion Laughs’ attack because it targets the parsers to carry out the attack.
This is why you always use a bastion host and isolate your cluster from everyone.
how do you encrypt passwords in helm values.yaml? any good documents would be appreciated. Thanks
I assume you’re referring to helm’s values.yaml
right
I used helm secrets to make sure passwords are hidden when pushed to code repositories
I was not sure about helm get values
can you please let me know about other strategies
@Erik Osterman (Cloud Posse) ^
@AG there’s the helm-secrets plugin that tries to address this
but secrets will still be clear-text in the output if you run helm get values
(which is why you just can’t pass any secrets via helm that you truly care about)
instead, the better pattern is to assume the secrets have been installed some other way… basically assume the resource already exists and don’t provision it with helm
then when you install the chart release, it will block until that secret exists.
there are a few strategies for populating secrets
basically, you want to decouple the lifecycle of secrets from the lifecycle of helm releases
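A small sketch of that decoupling (names are hypothetical): provision the Secret out-of-band, e.g. with Terraform sourcing the value from SSM Parameter Store, and have the chart only reference it by name.

```hcl
# Secret material never passes through helm values.
data "aws_ssm_parameter" "db_password" {
  name = "/myapp/db_password"
}

resource "kubernetes_secret" "db" {
  metadata {
    name      = "myapp-db"
    namespace = "default"
  }

  data = {
    password = data.aws_ssm_parameter.db_password.value
  }
}
```

The chart then just points at the pre-existing secret by name (many charts expose an existingSecret-style value for this), so helm get values only ever shows the secret’s name, never its contents.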
2019-10-11
@Erik Osterman (Cloud Posse) Thanks
I’m trying to pass encrypted values to secrets and use them as variables, will that work?
{{ (tpl (.Files.Glob "configs/*").AsSecrets .) | indent 2 }}
Hey all, trying to set up kops in a new environment created with the reference-architectures repo, so right now trying to run kops-aws-platform (https://github.com/cloudposse/terraform-root-modules/tree/master/aws/kops-aws-platform) and it seems to expect IAM roles like masters.us-west-2.testing.ryanjarv.sh and nodes.us-west-2.testing.ryanjarv.sh to be set up. Wondering if there is some step I missed that handles that.
those are provisioned by kops
Ok thanks will look into that. It did run ok but might need a more recent version or something.
Think I got it figured out, missed the extra steps here before. (https://github.com/cloudposse/terraform-root-modules/tree/master/aws/kops)
just so there’s no confusion we’re not using the terraform mode for kops
there are some other modules out there by others that do that
our module is for setting up the aws integration points that kops expects.
Terraform mode? Suppose I don’t know too much about managing kops/k8s. Is that just managing individual pods with terraform? k8s in general still gets set up with the kops-aws-platform module, right?
Edit: ok nvm, seems the cluster itself is set up with kops.
2019-10-12
Kubernetes Operations (kops) - Production Grade K8s Installation, Upgrades, and Management - kubernetes/kops
this is what I was referring to
then there are some other terraform modules (not by us) that leverage this (i think)
2019-10-16
interested in thoughts - my thoughts are it sounds like it’s trying to separate dev and ops which i do not like
Learn from Docker experts to simplify and advance your app development and management with Docker. Stay up to date on Docker events and new version announcements!
Cloud Native Application Bundles facilitate the bundling, installing and managing of container-native apps — and their coupled services.
sort of - it seems more like a way to implement an abstraction layer between teams of dev/ops/infra teams. cnab feels like more of a packaging tool kit to me, where this feels more like enterprise service catalogish kind of stuff (insert hand-waving)
while i understand the pain that’s driving the need, i’m not sure i’d like to deal with an environment where that was required
i’m also a little sick of abstractions over the kube apis that just look like the kube apis
2019-10-18
Hey #kubernetes !
Just deployed k8s via the k8s-workers module, everything is working great. Being able to add iam users and roles via terraform is amazing.
Attempting to deploy a gitlab helm chart results in
Error creating load balancer (will retry): failed to ensure load balancer for service default/gitlab-nginx-ingress-controller: could not find any suitable subnets for creating the ELB
I used CloudPosse’s VPC, Subnets, EKS, local.tag and EKS Workers modules
I needed to add the var.tags to the subnet module
@Brandon Shutter thanks! Have you looked at this working example https://github.com/cloudposse/terraform-aws-eks-cluster/blob/master/examples/complete/main.tf
and test for the example https://github.com/cloudposse/terraform-aws-eks-cluster/blob/master/test/src/examples_complete_test.go
I believe you are talking about these tags https://github.com/cloudposse/terraform-aws-eks-cluster/blob/master/examples/complete/main.tf#L19 (shared is required by EKS)
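To spell out the fix for anyone hitting the same ELB error: the in-cluster AWS cloud provider discovers subnets by tag, so the subnets need roughly the following (the cluster name is a placeholder; kubernetes.io/role/elb goes on public subnets, kubernetes.io/role/internal-elb on private ones):

```hcl
resource "aws_subnet" "public" {
  vpc_id     = var.vpc_id
  cidr_block = "10.0.1.0/24"

  tags = {
    "kubernetes.io/cluster/${var.cluster_name}" = "shared" # required by EKS
    "kubernetes.io/role/elb"                    = "1"      # allow internet-facing ELB placement
  }
}
```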
2019-10-22
Hi guys,
Have just installed AWS EKS + autoscaler. All seems to be good except the autoscaler is failing with the following error:
E1021 18:40:49.320402 1 aws_manager.go:148] Failed to regenerate ASG cache: cannot autodiscover ASGs: RequestError: send request failed
caused by: Post https://autoscaling.eu-west-2.amazonaws.com/: dial tcp: i/o timeout
F1021 18:40:49.320431 1 aws_cloud_provider.go:330] Failed to create AWS Manager: cannot autodiscover ASGs: RequestError: send request failed
caused by: Post https://autoscaling.eu-west-2.amazonaws.com/: dial tcp: i/o timeout
Not sure why it can’t reach AWS’s API service.
The autoscaler has been successfully installed using helm, hence there is connectivity on the worker node.
Any advice on what else I should check?
ok. Resolved. dnsPolicy changed to Default and that is it.
Now another issue is that new nodes can’t attach to the cluster:
27s Warning ScaleUpTimedOut configmap/cluster-autoscaler-status Nodes added to group londynek-02019102113431054090000000e failed to register within 5m5.36167321s
Ok. Resolved. The workers were in some subnets that could not communicate with the EKS cluster.
2019-10-23
does anyone have an elegant solution to applying the stupid eks aws-auth config map via terraform without using a public endpoint on eks (and without being inside the vpc)? - i’m pretty sure this is pretty much technically impossible
using atlantis running in the vpc (or peer vpc), you can accomplish it.
we run atlantis inside of ECS fargate for this reason
but if the requirement is to apply it without being inside and without being outside, maybe look into aws ssm agent?
yeh - it’s a frustrating requirement in that i want to be able to stand up the environment and hook up roles so that things within that environment can manage it and connect everything - but i can’t set up access to the cluster without being able to connect to the cluster. it would be nice if eks could bootstrap the rbac config on cluster creation, or let you pass a cluster-admin role arn through, rather than just granting system:master to the user that created the cluster
hopefully that’s on the roadmap somewhere
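For what it’s worth, once you do have a network path to the API server, the map itself is simple to manage with the Terraform kubernetes provider; the hard part is purely the connectivity chicken-and-egg described above. A sketch with placeholder role ARNs:

```hcl
resource "kubernetes_config_map" "aws_auth" {
  metadata {
    name      = "aws-auth"
    namespace = "kube-system"
  }

  data = {
    mapRoles = <<-YAML
      - rolearn: arn:aws:iam::123456789012:role/eks-workers
        username: system:node:{{EC2PrivateDNSName}}
        groups:
          - system:bootstrappers
          - system:nodes
      - rolearn: arn:aws:iam::123456789012:role/cluster-admins
        username: admin
        groups:
          - system:masters
    YAML
  }
}
```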
2019-10-25
Thanks @Erik Osterman (Cloud Posse) for the invite
Welcome @Jord!
Hey everyone! @Jord has a really neat product for learning kubernetes.
Clearly a lot of thought has gone into this.
Magic Sandbox is a hands-on learning platform for engineers, by engineers: immersive Kubernetes training on real infrastructure.
Thanks for the shout out - if you have any Qs just DM me or mail me at [email protected]
I like MSB
2019-10-28
2019-10-29
Hi, I need help creating a configmap.

resource "kubernetes_config_map" "env" {
  metadata {
    name      = "tf-${var.project}-${var.component}-env"
    namespace = "${var.namespace}"

    labels = {
      app = "tf-${var.project}-${var.component}"
    }
  }

  data = {
    MINIO_ACCESS_KEY = "minio"
    MINIO_SECRET_KEY = "minio123"
  }
}
In the above I want to declare the values of data as a variable and change them per environment. I am not able to declare them as a string. Can someone please assist?
variable "env_values" {
  type = string
}

env_values = "MINIO_ACCESS_KEY=\"minio\" \nMINIO_SECRET_KEY=\"minio123\""
I tried many possible combinations but nothing works. I tried using a file to declare all the env variables and it worked, but Minio is not picking up the username that way.
Kindly give a suggestion
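A sketch of one way to get there, assuming Terraform 0.12+ (variable and file names are illustrative): declare the values as a map(string) rather than a single string, pass the map straight to data, and override it per environment with tfvars files.

```hcl
variable "env_values" {
  type = map(string)

  default = {
    MINIO_ACCESS_KEY = "minio"
    MINIO_SECRET_KEY = "minio123"
  }
}

resource "kubernetes_config_map" "env" {
  metadata {
    name      = "tf-${var.project}-${var.component}-env"
    namespace = var.namespace
  }

  # Each map entry becomes one key/value in the ConfigMap, so Minio sees
  # MINIO_ACCESS_KEY and MINIO_SECRET_KEY as separate environment variables.
  data = var.env_values
}
```

Then per environment, something like terraform apply -var-file=prod.tfvars, where prod.tfvars sets env_values = { MINIO_ACCESS_KEY = "...", MINIO_SECRET_KEY = "..." }.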