#kubernetes (2019-05)
Archive: https://archive.sweetops.com/kubernetes/
2019-05-02
2019-05-06
Has anyone made their GKE nodes static with Terraform?
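(A minimal sketch of one way to do it, assuming "static" means a fixed node count with autoscaling disabled; the pool name, location, and sizes are placeholders.)

```hcl
# Hypothetical fixed-size GKE node pool: set node_count and omit the autoscaling block
resource "google_container_node_pool" "static_nodes" {
  name       = "static-pool"
  location   = "us-east1-c"
  cluster    = google_container_cluster.primary.name
  node_count = 3 # fixed; with no autoscaling block, GKE will not resize it

  node_config {
    machine_type = "n1-standard-2"
  }
}
```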
I think I’m missing a step with this new eks cluster. applying the configmap keeps giving me an unauthorized error.
The aws-iam-authenticator
call is working as expected so I have access, but applying the file does not work
error: You must be logged in to the server (the server has asked for the client to provide credentials)
has your session expired?
doesn’t aws-iam-auth create a new one every time?
i used the CP module to do this before and it is working fine still on that end
you using geodesic? can you exit the shell, run it again, and assume role?
nopers. direct
trying in a fresh terminal
error: You must be logged in to the server (the server has asked for the client to provide credentials)
aws-iam-authenticator token -i...
works just fine
i’ve confirmed the configmap matches the first one i did
(sans names, of course)
maybe i need to update my core ~/.kube/config?
no go there
any debug tips?
ahhh….might be this:
This could be because the cluster was created with one set of AWS credentials (from an IAM user or role), and kubectl is using a different set of credentials.
I created it via CI
this was the issue?
I think so
This chapter covers some common errors that you may see while using Amazon EKS and how to work around them.
and the kubectl apply on CI failed because kubectl isn’t available there (yet)
FYI @wbrown43 ^
confirmed. used the CI users creds and it worked as expected
@johncblandii you’ll need to update your authenticator configmap to allow other roles/users
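(For reference, a minimal sketch of that authenticator configmap — `aws-auth` in kube-system; the account id and role/user names below are placeholders.)

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: aws-auth
  namespace: kube-system
data:
  mapRoles: |
    # lets worker nodes join the cluster
    - rolearn: arn:aws:iam::111122223333:role/eks-worker-node-role
      username: system:node:{{EC2PrivateDNSName}}
      groups:
        - system:bootstrappers
        - system:nodes
  mapUsers: |
    # grants an additional IAM user (e.g. the CI user) cluster access
    - userarn: arn:aws:iam::111122223333:user/ci-user
      username: ci-user
      groups:
        - system:masters
```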
just got through that part, @btai.
my last eks was a local install i did so i did not realize this was a rule
@johncblandii yeah took me half a day just trying to figure out how to get aws-iam-authenticator working and I ran into the same issues as you did haha
but no problems since
likewise, I spent the same half-day (while in 5 hours of meetings back-to-back-to-back)
now my nodes aren’t connecting so onto issue #3; lol
did you add the role for the worker nodes to your config map as well?
yup. the cp module does it
going to nix one and let the scaling kick off a fresh one now that the map is applied
so no public ip seems to have been the issue
started one w/ a public ip and voila
@johncblandii are you talking about public ip for your worker nodes?
yup
our other cluster didn’t have public, but when I upgraded them to 1.12 i had to do the same
(unsure if that’s related, but i did notice that)
@johncblandii fwiw, i didn’t have to make my worker nodes public. i’m still on 1.11 but i can’t imagine that would change in 1.12
i hear you. that’s just what i noticed when i moved to .12
I have a cluster in AWS in 3 availability zones, with 3 masters, but only 2 nodes. kops put both nodes in the same AZ? Is this a bug? How do I get kops to spread the nodes evenly across AZs?
it’s not a kops thing
compare how the master node pools are created to how the worker node pools are created
that’s how to ensure more even distribution
AWS will make “best effort” to allocate instances evenly, but no guarantee
the only way to have a “guarantee” is to create node pools tied to exactly one AZ
precisely…
Yes, kops creates an instance group per zone for the masters, but just 1 instance group for all the nodes.
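(A minimal sketch of the fix discussed above: one InstanceGroup per AZ for the nodes, mirroring what kops already does for masters; the cluster name, machine type, and sizes are placeholders.)

```yaml
apiVersion: kops.k8s.io/v1alpha2
kind: InstanceGroup
metadata:
  name: nodes-us-east-1a # create one of these per AZ
  labels:
    kops.k8s.io/cluster: example.k8s.local
spec:
  role: Node
  machineType: t3.medium
  minSize: 1
  maxSize: 3
  subnets:
    - us-east-1a # pinned to exactly one AZ, so placement is guaranteed
```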
So it turns out the bigger issue is that AWS autoscale group does launching and zone balancing separately, and to do zone balancing it has to launch a new instance before deleting the old one. Well, we had run up against our instance/type limit for the region, so it could not do zone balancing.
oh fascinating
good sleuthing
@wbrown43 has joined the channel
2019-05-07
Is there a clean way to get the security group created for an LB so I can assign it to the workers SG to approve traffic?
The LB is created through the helm deploy.
using this https://www.terraform.io/docs/providers/aws/d/security_group.html and querying by filter or tags?
Provides details about a specific Security Group
A tool to white list node and developer IPs for kubernetes. - stakater/Whitelister
I would pursue a k8s native solution rather than trying to fuse terraform with helm
Also, IP whitelisting should be used as a last resort. An identity-aware proxy, a la Keycloak, is a better approach
the alb ingress controller creates an ALB and the necessary security groups for you, and assigns them so traffic can reach your workers
@Andriy Knysh (Cloud Posse) there isn’t enough on the SG to query that way. it has the k8s.io/… tag, but it is not specific.
@Erik Osterman (Cloud Posse) this isn’t fusing helm and tf. it is the SG created by TF, but I’m mainly just adding an SG record so it is mainly AWS infrastructure networking.
if you go that route, you can filter by name (the resource has some name) and not tags
or add your own specific tag
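(A minimal sketch of that approach in Terraform 0.12 syntax, assuming a tag you add to the SG yourself; the tag key/value and the `workers` security group resource are placeholders, and the data source requires the lookup to match exactly one group.)

```hcl
# Look up the LB's security group by a tag you control
data "aws_security_group" "lb" {
  tags = {
    Purpose = "twistlock-lb" # hypothetical tag added to the SG
  }
}

# Allow traffic from the LB's SG into the workers' SG
resource "aws_security_group_rule" "workers_from_lb" {
  type                     = "ingress"
  from_port                = 0
  to_port                  = 65535
  protocol                 = "tcp"
  security_group_id        = aws_security_group.workers.id
  source_security_group_id = data.aws_security_group.lb.id
}
```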
I think I lack context of where you are trying to do this?
“Is there a clean way from XXXXXXX to get the security group created for an LB by ZZZZZZ so I can assign it to the workers SG in YYYYYYY to approve traffic?”
you may be technically right w/ fusing them. i’m technically wanting a value from k8s so i can configure the AWS SG to allow communication.
The SG is handled within TF manually
@johncblandii what type of LB are you using? if you’re using an ALB I would suggest the alb ingress controller as it does all that for you. (the downside is when you tear down your cluster, it won’t clean up for you)
it automatically used a classic elb (helm install)
what helm chart @johncblandii
Twistlock
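(For reference, a minimal sketch of an Ingress the alb ingress controller would pick up instead of the chart’s classic ELB; the host, service name, and port are placeholders.)

```yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: twistlock
  annotations:
    kubernetes.io/ingress.class: alb
    alb.ingress.kubernetes.io/scheme: internal # or internet-facing
    alb.ingress.kubernetes.io/target-type: ip  # route straight to pod IPs
spec:
  rules:
    - host: twistlock.example.com
      http:
        paths:
          - path: /*
            backend:
              serviceName: twistlock-console
              servicePort: 8083
```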
welp…bitten by the “providers cannot be dynamically initialized” issue
2019-05-08
2019-05-09
Dang, how do we get Curtis Mattoon into cloud posse slack? https://github.com/cmattoon/aws-ssm/pull/29
I didn’t see any other way to set the log level. So here it is!
This tool works pretty well. But just curious if you peeps have any other methods for dynamically adding k8s secrets from SSM
not from SSM
have you seen @mumoshu’s ASM operator?
nope, I shall take a look-see
i think extending that to support SSM would be nice
you reminded me that we had the exact issue for it! https://github.com/mumoshu/aws-secret-operator/issues/14
AWS recently added the capability to increase throughput for SSM parameter store: https://docs.aws.amazon.com/systems-manager/latest/userguide/parameter-store-throughput.html Is there a chance aws-…
or creating a separate one
https://github.com/mumoshu/aws-secret-operator (for the others in the channel)
A Kubernetes operator that automatically creates and updates Kubernetes secrets according to what are stored in AWS Secrets Manager. - mumoshu/aws-secret-operator
Why not use AWS SSM Parameter Store as a primary source of secrets?
Pros:
Parameter Store has an efficient API to batch get multiple secrets sharing the same prefix.
Cons:
Its API rate limit is way too low. This has been discussed in several places in the Internet:
However, they just updated the rate limit to 1k req/s
so it might be a non-issue now
Also, you can set the limit and incur costs. Haven’t actually clicked this before.. let’s see what happens
Ohhh, this is how you get 1k: https://docs.aws.amazon.com/systems-manager/latest/userguide/parameter-store-throughput.html
You can increase the limit to 1,000 TPS on the Settings tab. Increasing the throughput limit incurs a charge on your AWS account.
$0.05 per 10,000 Parameter Store API interactions
k.. I’ll stop spamming
that’s great; didn’t know they increased the limit
I thought secretsmanager had the same amount of charge
secretsmanager i think is $1/mo/secret. Lemme google a littttle
whoops.. $0.40/mo
PER SECRET PER MONTH
$0.40 per secret per month. For secrets that are stored for less than a month, the price is prorated (based on the number of hours.)
PER 10,000 API CALLS
$0.05 per 10,000 API calls.
Also good to cache your secrets, to avoid extra API calls and rate limits… https://aws.amazon.com/about-aws/whats-new/2019/05/Secrets-Manager-Client-Side-Caching-Libraries-in-Python-NET-Go/
this is interesting
curious how it works in detail. Like, does it make your microservice stateful? Or does it put the cache local to your cluster? Or is aws handling all the caching for us automagically?
The Go SDK code looks straightforward though. Awesome find!
https://github.com/cmattoon/aws-ssm/pull/30 fixing a bug in aws-ssm
if anyone else was considering to use it
The Go SDK for GetParameterByPath limits to 10 values in the response. This should grab them all.
how does it look when you want many parameters?
apiVersion: v1
kind: Secret
metadata:
  name: my-secret
  annotations:
    aws-ssm/k8s-secret-name: my-secret
    aws-ssm/aws-param-name: my-db-password
    aws-ssm/aws-param-type: SecureString
e.g. /db/*
The name of the AWS SSM Parameter. May be a path.
i guess that answers it
but still curious. i never really kicked the tires on aws-ssm
(ultimately, client wanted per-service access controls so we went with Chamber + S3 + IAM + KIAM)
@Erik Osterman (Cloud Posse)
apiVersion: v1
kind: Secret
metadata:
  name: my-secret-name
  annotations:
    aws-ssm/k8s-secret-name: my-secret-name
    aws-ssm/aws-param-name: {{ .Values.ssm_path }}
    aws-ssm/aws-param-type: Directory
data: {}
Where `.Values.ssm_path == /directory/within/ssm`
Ah, thx!
(lol, sorry about the delay)
how’s the helmfile PR coming along?
stale at the moment. been a bit busy. basically I didn’t consider multiple files
and there’s some chicken/egg issue about when the template-rendering happens and when to reference a file
so I just need to hit my head a little harder on it
maybe that will be simpler if they decouple the multi-phase rendering
Possibly. I thought multi-phase rendering was needed for template in template situations
2019-05-10
2019-05-11
Interesting approach for deploying API Gateway in front of EKS / K8s kops clusters inside VPC private subnets, and lots of other useful info about integrating EKS with other AWS services
2019-05-15
Free and Open Source GUI to Visualize Kubernetes Applications. - containership/konstellate
Thanks I like this. For #terraform there’s also https://github.com/camptocamp/terraboard
A web dashboard to inspect Terraform States - camptocamp/terraboard
Thanks for sharing @oscarsullivan_old! This looks really neat. You should share it in the #terraform channel.
I am trying to set up the kubernetes dashboard on an AWS EKS cluster. I am able to set up the dashboard but facing a small issue with certs. I want to use an aws certificate arn with the dashboard as an argument with the command
kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/v1.10.1/src/deploy/recommended/kubernetes-dashboard.yaml
is this possible?
2019-05-16
To anyone that is tempted to use t3a or m5a instances on an EKS cluster, don’t
What would you like to be added: Support for t3a, m5ad and r5ad instance types. Why is this needed: AWS had added new instance types, and the AMI does not currently support them.
We started ReactiveOps with a simple vision: transform infrastructure operations by leveraging decades of large-scale operations and product experience.
Scale on queue depth
KEDA is a Kubernetes-based Event Driven Autoscaling component. It provides event driven scale for any container running in Kubernetes - kedacore/keda
2019-05-17
Validation of best practices in your Kubernetes clusters - reactiveops/polaris
How do we generate a wildcard certificate using the kubernetes kind: ManagedCertificate? trying with the below method but not successful:
apiVersion: networking.gke.io/v1beta1
kind: ManagedCertificate
metadata:
  name: example-certificate
spec:
  domains:
    - "*.example.net"
Please let me know if there is any documentation/suggestions for creating a wildcard certificate with the expiry date mentioned in it
^ Polaris looks really interesting… I’m going to try to get it going this weekend see if it’s useful… any thoughts on it yet if someones already set it up?
Couldn’t wait for the weekend; testing it now… it offers some nice checks… I can see this becoming more and more useful as more checks/best practices are added …
2019-05-19
Hey all! I’m having an issue building my example-voting-app with Codefresh.
I added the variable for KUBE_CONTEXT but I keep getting an error that throws:
error: no context exists with the name: "gke_example-voting-app-240610_us-east1-c_example-voting-app".
[SYSTEM] Error: Failed to run freestyle step: Running Helm Upgrade; caused by NonZeroExitCodeError: Container for step title: Running Helm Upgrade, step type: freestyle, operation: Freestyle
KUBE_CONTEXT should be the name of a kubernetes integration in codefresh
it would seldom, if ever, have the app name in it
How to connect your Kubernetes cluster to the Codefresh dashboard
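(A minimal sketch of a Codefresh helm step with that variable, assuming the cfstep-helm freestyle image; the integration name, chart, and release below are placeholders.)

```yaml
RunningHelmUpgrade:
  title: Running Helm Upgrade
  image: codefresh/cfstep-helm:2.12.3
  environment:
    - CHART_REF=example-voting-app
    - RELEASE_NAME=example-voting-app
    # the name of the Kubernetes integration configured in Codefresh,
    # not the kubeconfig context string from your GKE shell
    - KUBE_CONTEXT=my-gke-cluster
```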
Got it thanks!
I ran kubectl config get-contexts in my GKE shell and got:
gke_example-voting-app-240610_us-east1-c_example-voting-app
I put that as my KUBE_CONTEXT variable and can’t figure out what I’m doing wrong. The docs say to put KUBE_CONTEXT as “Your friendly Kubernetes Cluster Name”. I’ve also tried “example-voting-app” as the context variable, which is the GKE cluster name. No dice there either.
2019-05-20
Can anyone help me with an AWS ALB load balancer with a helm chart? Any samples that I can refer to?
Werf (previously known as dapp) helps to implement and support Continuous Integration and Continuous Delivery - flant/werf
2019-05-22
bye bye aws-iam-authenticator
finally
ergg.. i spent a good part of a day understanding how it works/getting it to work w/my eks cluster spun up in tf
Public/Free Office Hours with Cloud Posse starting now!!
anyone try federation yet?
2019-05-24
Hey all, I’ve a question and I can’t seem to find an answer. I’m running an AWS EKS cluster with two Nodes, each Node in EKS has a restriction of 20 Pods per Node. The Nodes are auto scaled and shut down each night and started in the morning since it’s just a test / staging system at the moment. However, one Node is always full (20/20 Capacity) while the other runs 4/20. We want to run a DaemonSet with filebeat for log aggregation but cannot ensure it runs on both nodes because one is full.
Is there a way I can (easily) ensure the DaemonSet is scheduled before all other pods? Or can I reserve a spot / space on a Node for a specific Pod, Deployment, or DaemonSet?
I would like to avoid configuration overhead. I’ve already read about Affinity and Anti-Affinity but I’m not sure if this can help me
Someone in the Kubernetes Slack answered my question, looks like this is it: https://kubernetes.io/docs/concepts/configuration/pod-priority-preemption/
Yes, this is what you want to look into.
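(A minimal sketch of that approach: a PriorityClass plus a `priorityClassName` on the DaemonSet, so the scheduler can preempt lower-priority pods to make room for filebeat; the class name and value are placeholders.)

```yaml
apiVersion: scheduling.k8s.io/v1beta1 # scheduling.k8s.io/v1 on 1.14+
kind: PriorityClass
metadata:
  name: log-aggregation-critical
value: 1000000        # higher than the default (0) for ordinary pods
globalDefault: false
description: "Schedule the filebeat DaemonSet ahead of ordinary workloads"
```

Then set `priorityClassName: log-aggregation-critical` in the DaemonSet’s pod spec; lower-priority pods get evicted if a node is full.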
2019-05-29
Hi all,
Is there anyone who has setup kubernetes dashboard on EKS using istio ingress gateway? I am facing some issues where my dashboard crash after 4 mins. I am not sure if its a good idea to use istio ingress gateway to run kubernetes-dashboard. Any help is appreciated
It is fixed now. I had to provide a few configs in istio
@Vidhi Virmani how are you securing it?
(comment just to monitor response)
@Erik Osterman (Cloud Posse) I am currently allowing very few users to access the dashboard using aws-iam-authenticator.
2019-05-31
Select which kubeconfig.yaml to use in an easy way. KCS means kubeconfig switcher. - claranet/kcs