@pericdaniel are you referring to EKS?
again, lots of really good/cool/interesting tools there too
thanks for the links @Max Moon
She’s awesome! I have been following her for years on twitter
She runs CoreOS on the desktop
Has containerized everything
On an unrelated note…
Contribute to gofunct/cloudnative-engineer development by creating an account on GitHub.
Certified Kubernetes Administrator Exam Prep
i saw the eks tf files that were created
do we have anything for aws-config
eks project is being tested now and the modules will be updated to the latest version this week
what are you doing with
i can authorize specific users to be able to make changes to the clusters and deploy environments
@Daren see this: https://github.com/heptiolabs/eventrouter
(this was the project I was thinking of… came across it today by accident looking at heptio projects)
This is a place for various problem detectors running on the Kubernetes nodes. - kubernetes/node-problem-detector
node-problem-detector aims to make various node problems visible to the upstream layers in cluster management stack. It is a daemon which runs on each node, detects node problems and reports them to apiserver. node-problem-detector can either run as a DaemonSet or run standalone. Now it is running as a Kubernetes Addon enabled by default in the GCE cluster.
Works with draino
Draino automatically drains Kubernetes nodes based on labels and node conditions. Nodes that match all of the supplied labels and any of the supplied node conditions will be cordoned immediately and drained after a configurable drain-buffer time.
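The pairing works roughly like this sketch; the label, conditions, and file name below are illustrative assumptions, not taken from the thread:

```shell
# node-problem-detector reports node conditions (e.g. KernelDeadlock) to the
# apiserver; draino then cordons/drains nodes that carry a matching label plus
# any of the listed conditions. Sketch of container args for a draino Deployment:
cat > draino-args.yaml <<'EOF'
args:
  - --node-label=draino-enabled=true   # only drain opted-in nodes
  - KernelDeadlock                     # conditions surfaced by node-problem-detector
  - OutOfDisk
EOF
grep -q 'KernelDeadlock' draino-args.yaml && echo "draino args written"
```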
Provision AWS services from Kubernetes using the AWS Service Broker. There’s no doubt that containers have changed how we build projects. One of the guiding principles of a containerized workflow approach has been to give back control to the developer, allowing them to choose their dependencies and how to consume them – most importantly, when they […]
“And that’s all folks” - wasn’t that easy? :P
Joking aside - pretty cool. Basically lets you provision AWS backing services from within Kubernetes
@fdrescher has joined the channel
We’ve released our EKS terraform modules for Kubernetes this week.
Terraform module for provisioning an EKS cluster. Contribute to cloudposse/terraform-aws-eks-cluster development by creating an account on GitHub.
Terraform module to provision an AWS AutoScaling Group, IAM Role, and Security Group for EKS Workers - cloudposse/terraform-aws-eks-workers
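A minimal usage sketch of the cluster module; the input names (`namespace`/`stage`/`name`) are assumptions based on the usual cloudposse conventions, so check the module READMEs:

```shell
# Write a minimal root module referencing the cloudposse EKS cluster module.
cat > main.tf <<'EOF'
module "eks_cluster" {
  source    = "git::https://github.com/cloudposse/terraform-aws-eks-cluster.git?ref=master"
  namespace = "eg"
  stage     = "dev"
  name      = "demo"
}
EOF
# terraform init && terraform plan   # requires AWS credentials
grep -q 'terraform-aws-eks-cluster' main.tf && echo "module stanza written"
```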
@stobiewankenobi has joined the channel
it looks clean!
@Erik Osterman (Cloud Posse) should post on Reddit if you don’t already
Thanks @rms1000watt yes - plan to do this soon
I think @Andriy Knysh (Cloud Posse) is doing some more testing today
Hi there, I was wondering whether the prometheus-to-cloudwatch solution can be adapted to scrape metrics from the metrics server instead of kube-state-metrics.
ohh, the module was created just as an experiment, tested a little bit, and then forgotten (meaning not supported anymore b/c there are many more
official solutions now)
@Erik Osterman (Cloud Posse) can explain the whole situation
thanks @Andriy Knysh (Cloud Posse). Can you direct me to where I can find those other solutions?
I saw an exporter for exporting CW metrics to prometheus but not the other way around.
https://github.com/prometheus/cloudwatch_exporter https://groups.google.com/forum/#!topic/prometheus-developers/3n7n0PGG7Vw https://medium.com/@griggheo/initial-experiences-with-the-prometheus-monitoring-system-167054ac439c
Metrics exporter for Amazon AWS CloudWatch. Contribute to prometheus/cloudwatch_exporter development by creating an account on GitHub.
I’ve been looking for a while for a monitoring system written in Go, self-contained and easy to deploy. I think I finally found what I was…
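For reference, cloudwatch_exporter goes the other direction (CloudWatch → Prometheus) and is driven by a small YAML config; the metric and dimension names below are illustrative, not a recommendation:

```shell
# Minimal cloudwatch_exporter config sketch: pull one ELB metric into Prometheus.
cat > cloudwatch-exporter.yml <<'EOF'
region: us-east-1
metrics:
  - aws_namespace: AWS/ELB
    aws_metric_name: RequestCount
    aws_dimensions: [LoadBalancerName]
    aws_statistics: [Sum]
EOF
# java -jar cloudwatch_exporter.jar 9106 cloudwatch-exporter.yml  # needs AWS creds
grep -q 'aws_namespace' cloudwatch-exporter.yml && echo "config written"
```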
So far, your solution was the only solution I found for exporting prometheus metrics to CW.
@Erik Osterman (Cloud Posse) ^
this should be done in a more official way by using Prometheus Operator
this tool to do it was already mentioned https://github.com/operator-framework
That makes sense.
Operator will allow a much better integration with Prometheus
(but I agree, our tool is simpler)
but @Jeremy, what you asked (to scrape metrics from the metrics server instead of kube-state-metrics) could be done by installing kube-prometheus and then scraping it, no?
yeah, i guess that’s what I’m asking. do i simply need to change the url in the values.yaml?
(I haven’t had time to look at the code you’re using to scrape the metrics yet)
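One way to scrape an installed kube-prometheus is over plain HTTP against the Prometheus server itself; the service name, namespace, and port below are assumptions about a default install, so adjust to yours:

```shell
# Pull already-collected series (e.g. from the kube-state-metrics job) out of
# Prometheus via its /federate endpoint -- run from inside the cluster.
curl -sG 'http://prometheus-operated.monitoring.svc.cluster.local:9090/federate' \
  --data-urlencode 'match[]={job="kube-state-metrics"}'
```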
Add-on agent to generate and expose cluster-level metrics. - kubernetes/kube-state-metrics
Additionally, some monitoring systems such as Prometheus do not use Heapster (metrics-server) for metric collection at all and instead implement their own, but Prometheus can scrape metrics from heapster itself to alert on Heapster (metrics-server)’s health. Having kube-state-metrics as a separate project enables access to these metrics from those monitoring systems.
i’ll have a look. I’m interested in surfacing the metrics described in this blog https://blog.freshtracks.io/a-deep-dive-into-kubernetes-metrics-part-4-the-kubernetes-api-server-72f1e1210770 to CW.
This is Part 4 of a multi-part series about all the metrics you can gather from your Kubernetes cluster.
i believe i should be able to get these without running the metrics server.
yea, it’s a very convoluted topic
metrics server is part of Kubernetes https://kubernetes.io/docs/tasks/debug-application-cluster/core-metrics-pipeline/#metrics-server
so I think you need to install Prometheus (via kube-prometheus https://github.com/coreos/prometheus-operator/tree/master/helm/kube-prometheus for example) and then you’ll be able to scrape it using a scraping tool
Prometheus Operator creates/configures/manages Prometheus clusters atop Kubernetes - coreos/prometheus-operator
prometheus-operator chart https://github.com/coreos/prometheus-operator/tree/master/helm/prometheus-operator
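The install boils down to something like this (Helm 2 era syntax with `--name`); the coreos chart repo URL and the release/namespace names are assumptions from the docs of that period:

```shell
# Install the Prometheus Operator chart first, then kube-prometheus on top of it.
helm repo add coreos https://s3-eu-west-1.amazonaws.com/coreos-charts/stable/
helm install coreos/prometheus-operator --name prometheus-operator --namespace monitoring
helm install coreos/kube-prometheus --name kube-prometheus --namespace monitoring
```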
in our collection of helmfiles, we have examples on how to do it https://github.com/cloudposse/helmfiles/tree/master/helmfile.d
Comprehensive Distribution of Helmfiles. Works with helmfile.d - cloudposse/helmfiles
@Jeremy (Cloud Posse) we’ve added better formatting for prometheus alerts. See this PR by @Igor Rodionov https://github.com/cloudposse/helmfiles/pull/48
What Template prometheus alerts Why To have nice alerts
@Jeremy (Cloud Posse) has joined the channel
Also, deploying Grafana dashboards with configmaps: https://github.com/cloudposse/helmfiles/pull/18
what Update to use sidecar pattern Provide integration with kube-prometheus (collecting metrics / import grafana dashboards) Collect metrics for nginx ingress and display them Fix Portal Fix nginx…
I think it’s a different Jeremy :)
@Jeremy (Cloud Posse) is with PopChest <— using our older versions of kube-prometheus
But, @Erik Osterman (Cloud Posse) There is also @Jeremy Cowan, who I think you meant to be referring to.
for context, we’re moving this discussion here: https://sweetops.slack.com/archives/CB2PXUHLL/p1537386287000100
I’ve literally never had this issue before: a docker image built on my local machine gets uploaded to ECR, and when I deploy that image it comes across corrupt with the exact same configuration, and it makes 0 sense. It appears my image being uploaded is corrupt and I’ve been troubleshooting for hours
I’d be curious to know, are you getting that composer error from the log from kubectl logs <pod name> or elsewhere?
Getting it from the container that is being built
image: report-portal:develop built locally works 100% of the time. Deployed using this, somehow libraries are lost and dropped.
what happens if you pull and run that ECR image locally?
When i build the ECR image locally, same error is produced
But when I build the image that is being uploaded, it works smoothly
I have 0 idea why my ECR image would be corrupt when the build that produces that image works
@Matthew please explain it again step-by-step for people to be able to help you, something like this…
- I have a Dockerfile (show it here) which I build locally and then start the container and it works locally
- Then I push the already built image to ECR manually
- When the image gets deployed from the ECR repo to Kubernetes, it throws errors
explain what works and what does not, where you build it and how
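One concrete check for a case like this is to verify the image you run from ECR is byte-for-byte the one you pushed; the registry, repo, and tag below are placeholders:

```shell
# Compare the digest of the local build with the digest ECR actually stores,
# then pull the image back and run it the way Kubernetes would.
REPO=123456789012.dkr.ecr.us-east-1.amazonaws.com/report-portal
TAG=develop

docker images --digests "$REPO"                      # digest of the local build
aws ecr describe-images --repository-name report-portal \
  --image-ids imageTag="$TAG" \
  --query 'imageDetails[0].imageDigest'              # digest ECR reports

docker rmi "$REPO:$TAG" && docker pull "$REPO:$TAG"  # force a clean pull
docker run --rm "$REPO:$TAG"                         # run it as the cluster would
```

If the two digests differ, the push itself is the problem; if they match but the pulled image still fails, the corruption theory falls apart and the config should be the next suspect.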
The move gives NetApp more of a DevOps spin on multi-cloud deployments.
curious if anyone has given creating an Operator themselves a go: https://github.com/operator-framework
chances are you are (either knowingly or unknowingly) already using one or many in your cluster
we’re using all the prometheus operators in our latest rollouts
I would love to see a
haven’t yet considered taking the plunge to write one
is there something in particular you want to build?
nothing concrete yet, have been trying to think of some ideas
have you used it?
was it easy to get up and running?
are you guys using it now?
it was the backup tool of choice at my last company, used it on every single cluster, took me a morning to put in place
Curated applications for Kubernetes. Contribute to helm/charts development by creating an account on GitHub.
whenever i had issues with it, the main guy, andy, was very quick to help
they have a channel on the main k8s slack
This allows values.yaml to be simplified like: replicaCount: 1 image: repository: rms1000watt/dummy-golang-project tag: latest pullPolicy: IfNotPresent deployment: enabled: true service: …
can someone point me towards changing the timezone on kops-created instances
@rohit.verma i think you can use https://github.com/kubernetes/kops/blob/master/docs/cluster_spec.md#hooks
Kubernetes Operations (kops) - Production Grade K8s Installation, Upgrades, and Management - kubernetes/kops
as described here https://github.com/kubernetes/kops/issues/1794
We have a need to create our kops nodes in PST rather than UTC. It would be helpful if kops either had an option to set the instance timezone via the ig config, or if ec2 user data could be passed …
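A hook along these lines should do it; the hook shape follows the kops cluster_spec docs linked above, while the timezone value and unit name are example assumptions:

```shell
# Write a kops hook that sets the node timezone before kubelet starts.
cat > timezone-hook.yaml <<'EOF'
hooks:
  - name: set-timezone.service
    before:
      - kubelet.service
    manifest: |
      Type=oneshot
      ExecStart=/usr/bin/timedatectl set-timezone America/Los_Angeles
EOF
# paste the hook into `kops edit cluster` (or an instance group spec), then
# `kops update cluster --yes` and roll the nodes
grep -q 'timedatectl' timezone-hook.yaml && echo "hook written"
```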
to helm or not to helm… (as a n00b).. that is the question
im a fan of helm, but i would suggest not using it until you’re extra comfy with kubectl and writing manifests yourself
it can feel like a bit of a black box, even if you wrote the charts yourself
so we recently did an engagement with Caltech students in a research lab
what worked well for them was to first write all the resources by hand to get comfy with it - the way @Max Moon describes
then write the charts later
also, i’d like to point out https://github.com/cloudposse/charts/tree/master/incubator/monochart
The “Cloud Posse” Distribution of Kubernetes Applications - cloudposse/charts
which is our declarative helm chart. this means it will work for the most common use-cases and you won’t need to write a custom chart. you just define all the settings in
the above is what i recommend as well, every company i’ve used helm at, i’ve used the same approach
That’s awesome–I appreciate the direction
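The declarative approach means one generic chart and all app-specific settings in values.yaml. This mirrors the snippet shown in the monochart preview earlier; the `service` section is truncated there, so the keys and port here, and the helm repo name, are assumptions:

```shell
# App config lives entirely in values.yaml; the chart itself stays generic.
cat > values.yaml <<'EOF'
replicaCount: 1
image:
  repository: rms1000watt/dummy-golang-project
  tag: latest
  pullPolicy: IfNotPresent
deployment:
  enabled: true
service:
  enabled: true
  ports:
    - 8080
EOF
# helm install cloudposse-incubator/monochart -f values.yaml   # repo name assumed
grep -q 'dummy-golang-project' values.yaml && echo "values written"
```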
also think about your CICD tool(s) of choice
for instance, last i used it, spinnaker was pretty opinionated and wanted control over deployments and deployment management, so we couldn’t use helm
that might have changed, but just something to keep in mind
pretty much everything else should play along fine
kind of doing some skunkwork at the moment
Just want to get looking at the right stuff in the right way (that has worked for others)
Terraform Helm provider. Contribute to mcuadros/terraform-provider-helm development by creating an account on GitHub.
terraform alternative for helmfile
hadn’t seen that before, but looks to be identical in interface to
we have not, the built-in codefresh one seems to be good enough for now
Populates Kubernetes Secrets from AWS Parameter Store - cmattoon/aws-ssm
Have you seen this project, Erik?
I liked how it transparently fetches the secrets when the container runs in AWS, but still allows you to set them directly when you work with docker-compose locally.
Environment variable-based AWS Parameter Store command shim - glassechidna/pstore
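The transparent-fetch idea boils down to an entrypoint shim; this is a sketch of the general pattern, not the actual pstore or aws-ssm implementation, and the `ssm://` placeholder convention is an assumption:

```shell
#!/bin/sh
# Any env var whose value starts with ssm:// gets replaced by the decrypted
# Parameter Store value, then control passes to the real command. Locally
# (e.g. docker-compose) you set the plain value and the loop leaves it alone.
for var in $(env | grep '=ssm://' | cut -d= -f1); do
  path=$(printenv "$var" | sed 's|^ssm://||')
  export "$var=$(aws ssm get-parameter --name "$path" --with-decryption \
    --query Parameter.Value --output text)"
done
exec "$@"
```

Wired in as the image ENTRYPOINT, with the app command as CMD, so the same image works in and out of AWS.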
Authentication server providing SSO, 2FA and ACLs for web apps. - clems4ever/authelia
Good alternative to the bitly oauth2_proxy?