#random (2019-03)
Non-work banter and water cooler conversation
A place for non-work-related flimflam, faffing, hodge-podge or jibber-jabber you’d prefer to keep out of more focused work-related channels.
Archive: https://archive.sweetops.com/random/
2019-03-01
2019-03-03
Curious, has anyone here made a Pixelbook their everyday driver? Given Crostini and Debian are available, I don’t see what other blockers there may be.
2019-03-04
@sarkis i’ve been thinking about that also. it’s good enough for kelsey hightower, at least https://twitter.com/kelseyhightower/status/1097961280990720000
2019-03-05
DigitalOcean Marketplace is a platform where developers can find preconfigured applications and solutions to get up and running even more quickly.
Developed by Uber, Kraken is an open source peer-to-peer Docker registry capable of distributing terabytes of data in seconds.
2019-03-07
2019-03-11
2019-03-12
@Nikola Velkovski https://www.youtube.com/watch?v=TgqiSBxvdws The world can be one together… cosmos without silos
2019-03-14
Anybody know how to connect two monitors to a MacBook Air?
We use some of these at work: https://www.elgato.com/en/dock/thunderbolt-3
Elgato Thunderbolt™ 3 Dock enables you to connect everything to your computer at once.
nice
2019-03-15
@Richy de la cuadra I’ve used this in my previous company and it worked really well! https://www.youtube.com/watch?v=t2zA2gOeT8E
perfect!
2019-03-19
Hi, guys can I ask for some guidance with https://github.com/cloudposse/prometheus-to-cloudwatch ?
I am looking for some best practices on what should be monitored/alarmed on inside CloudWatch, and how
for instance I am running EKS and have some limit of pods on each node and I am looking to get % of node occupancy
it seems easy, like sum(kube_pod_status_phase) / sum(kube_node_status_allocatable_pods) * 100 in prometheus, but I have no idea how I can do this in CW and have an alarm on top of that, because it seems that CW cannot do this https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/using-metric-math.html without having the metrics connected to the “graphed metrics”, and even then there is a limit of 10 metrics underlying each alarm
Utility for scraping Prometheus metrics from a Prometheus client endpoint and publishing them to CloudWatch - cloudposse/prometheus-to-cloudwatch
Metric math enables you to query multiple CloudWatch metrics and use math expressions to create new time series based on these metrics. You can visualize the resulting time series in the CloudWatch console and add them to dashboards. For an example using AWS Lambda metrics, you could divide the
@Milan Dasek prometheus-to-cloudwatch will scrape any Prometheus endpoint you point it to
it’s the job of the endpoint to expose the required metrics
in the example we use https://github.com/helm/charts/tree/master/stable/kube-state-metrics, take a look if it provides the metrics you want to see
Add-on agent to generate and expose cluster-level metrics. - kubernetes/kube-state-metrics
https://github.com/cloudposse/prometheus-to-cloudwatch by itself does not create nor expose any metrics
Hi all, I’ve been researching load testing tools. Do you know of any open source tools able to extract important metrics for good analysis and reporting?
A collection of best practices, workflows, scripts and scenarios that Cloud Posse uses for load and performance testing of websites and applications (in particular those deployed on Kubernetes clus…
This is our strategy
My intention is to study the performance of AWS CloudFront and its behavior across regions.
Sounds like that is better done using RUM
Awesome @Erik Osterman (Cloud Posse)!
Real user monitoring (RUM) is a passive monitoring technology that records all user interaction with a website or client interacting with a server or cloud-based application. Monitoring actual user interaction with a website or an application is important to operators to determine if users are being served quickly and without errors and, if not, which part of a business process is failing. Software as a service (SaaS) and application service providers (ASP) use RUM to monitor and manage service quality delivered to their clients. Real user monitoring data is used to determine the actual service-level quality delivered to end-users and to detect errors or slowdowns on web sites. The data may also be used to determine if changes that are promulgated to sites have the intended effect or cause errors. Organizations also use RUM to test website or application changes prior to deployment by monitoring for errors or slowdowns in the pre-deployment phase. They may also use it to test changes within the production environment, or to anticipate behavioural changes in a website or application. For example, a website may add an area where users could congregate before moving forward in a group (for example, test-takers that log into a website individually over a period of twenty minutes and that then simultaneously begin taking a test), this is called rendezvous in test environments. Changes to websites such as these can be tested with RUM. As technology shifts more and more to hybrid environments like cloud, fat clients, widgets, and apps, it becomes more and more important to monitor from within the client itself. Real user monitoring is typically “passive monitoring”, i.e., the RUM device collects web traffic without having any effect on the operation of the site. In some limited cases, it also uses JavaScript injected into a page or native code within applications to provide feedback from the browser or client. 
This is also referred to as Real-time Application Monitoring that focuses on the End-User Experience (EUE) and is a key component in the application performance management technology space.Passive monitoring can be very helpful in troubleshooting performance problems once they have occurred. Passive monitoring differs from synthetic monitoring with automated web browsers in that it relies on actual inbound and outbound web traffic to take measurements.
Yeah man. Do you know of any open source alternatives?
The main tools out there are proprietary.
I haven’t used one that is open source (Pingdom, Datadog and new relic offer it as a service )
That said this might get you on your way: https://github.hubspot.com/bucky/
Bucky is a Javascript library to measure the performance of your web app directly from your users’ browsers. It is free and open source and was developed by HubSpot developers Adam Schwartz (@adamfschwartz) and Zack Bloom (@zackbloom).
yeah.. great !
Now this is not for load testing per se, but what it will tell you is what your users are actually experiencing
Load testing a CDN is kind of pointless
(Unless you have access to a bot net!)
lol.. It is not the point ..
A CDN is a resource like any other. I understand that the CMP wants to guarantee its own SLA.
I want to study the flow when these resources are maxing out. You got it?
Maybe not. But ok !
lol
Thanks for your help!
Report back what you find!
Sure !
2019-03-20
like GitBooks but free
@Felipe Ribeiro https://www.sitespeed.io/ is a great tool for testing web page performance, including measuring the requests from CDNs, if that’s what you’re after
Sitespeed.io is an open source tool that helps you analyse and optimise your website speed and performance, based on performance best practices.
Google has gained a Guinness World Record with this effort, which took 2,795 node days of computation time and 17 PB of disk IO. As an interesting aside, Corey Quinn calculates it would have cost about $226K on GCP versus $180K on AWS
They were trying to predict which new service they should kill that is used by a lot of users XD
I love the fact that they calculated 31,415,926,535,897 decimal places.
man i don’t even know where else to talk about this. I’m pushing heroku logs to graylog, which goes into elasticsearch, using graylog’s geo mapping plugin. But the _geolocation field is a string: it’s a geo_point, but of string type. So in Graylog I can create a map graph.
well we’re using grafana, and the worldmap plugin in grafana needs geo_point to be of geo_point type in elasticsearch. argggh this is so annoying lol
2019-03-21
you may already know this, but logstash might help you. You’ll introduce some complexity, but if it’s only for that field you might want to check it out.
Some new foundation:
https://hub.packtpub.com/google-to-be-the-founding-member-of-cdf-continuous-delivery-foundation/
https://github.com/tektoncd/pipeline
Opinions?
New initiative provides neutral home for Jenkins, Jenkins X, Spinnaker, Tekton projects and the next generation of continuous delivery collaboration…
The Continuous Delivery Foundation (CDF) serves as the vendor-neutral home of many of the fastest-growing projects for continuous delivery.
On Tuesday, Google announced that it is one of the founding members of the newly-formed Continuous Delivery Foundation (CDF). As a part of its membership, Google will be contributing to two projects namely Spinnaker and Tekton.
A K8s-native Pipeline resource. Contribute to tektoncd/pipeline development by creating an account on GitHub.
Who knew it was that expensive to operate the infrastructure of a popular ride hailing app? :-)
probably not optimized because of too many resources to manage
2019-03-22
A summary of how I would define a distinguished engineer or technical fellow.
In computer networking, HTTP 451 Unavailable For Legal Reasons is an error status code of the HTTP protocol to be displayed when the user requests a resource which cannot be served for legal reasons, such as a web page censored by a government. The number 451 is a reference to Ray Bradbury’s 1953 dystopian novel Fahrenheit 451, in which books are outlawed. 451 could be described as a more explanatory variant of 403 Forbidden. This status code is standardized in RFC 7725. Examples of situations where an HTTP 451 error code could be displayed include web pages deemed a danger to national security, or web pages deemed to violate copyright, privacy, blasphemy laws, or any other law or court order. The RFC is specific that a 451 response does not indicate whether the resource exists but requests for it have been blocked, if the resource has been removed for legal reasons and no longer exists, or even if the resource has never existed, but any discussion of its topic has been legally forbidden (see superinjunction). Some sites have previously returned HTTP 404 (Not Found) or similar if they are not legally permitted to disclose that the resource has been removed. Such a tactic is used in the United Kingdom by some internet service providers utilising the Internet Watch Foundation blacklist, returning a 404 message or another error message instead of showing a message indicating the site is blocked.The status code was formally proposed in 2013 by Tim Bray, following earlier informal proposals by Chris Applegate in 2008 and Terence Eden in 2012. It was approved by the IESG on December 18, 2015. It was published as RFC 7725 in February 2016. 
HTTP 451 was mentioned by the BBC’s From Our Own Correspondent program, as an indication of the effects of sanctions on Sudan and the inability to access Airbnb, iOS’s App Store, or other Western web services.After introduction of the GDPR in European Economic Area (EEA) many websites located outside EEA started to serve HTTP 451 instead of trying to comply with this new privacy law.
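Fun fact: the code is even baked into Python’s standard library these days:

```python
from http import HTTPStatus

# 451 ships in the stdlib alongside the other RFC-registered status codes
status = HTTPStatus.UNAVAILABLE_FOR_LEGAL_REASONS
print(status.value, status.phrase)  # 451 Unavailable For Legal Reasons
```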
451 is new since GDPR, right?
I remember it popping up then, as it was easier to block the EU than comply with GDPR
Nah, it’s been in use a while; the RFC took the usual years to filter into place
some ISPs use it as part of their expected role in blocking “child” nasties; which of course showed they could mass-block IPs, and that was quickly abused by copyright holders to block torrent sites
Filter and sort by GitHub stars, funding, commits, contributors, hq location, and tweets. Updated: 2019-03-22 0132Z
Similar https://openapm.io/landscape
awesome visualization of the cloud landscape
@Igor Rodionov
whats the state of the art in terms of “dotfile managers”
Yet Another Dotfiles Manager. Contribute to TheLocehiliosan/yadm development by creating an account on GitHub.
anything better than yadm
?
kind’a wish there was a popular one in go - single installable binary
I believe antigen or antibody are single go bin
https://getantibody.github.io/ looks like a winner to me
He was fired after four weeks, ripped off the credentials of former colleague “Speedy”, and will be mulling it all over for two years in jail.
In the last few years, I’ve had the pleasure to work with a lot of talented Software Engineers.
Yaaaas
meh - while I agree with his positives of makefiles, make is probably one of the least portable build tools out there since every implementation requires its own specific foo. if that weren’t the case, we wouldn’t have built autoconf/imake/etc nor have manually written a zillion shell scripts called ./configure
that generate makefiles…
Yes good point. But combine make and autoconf and we get ansible :-)
Just because a ton of tools exist doesn’t mean you have to use them all.
but we’re gonna die trying!
I just mean Make is plenty portable if you don’t conflate it with all the alternatives.
make is plenty portable until you do anything with it, yeah. but tiny little differences add up significantly. much like anything bsd vs gnu.
Basically every single one of my own projects has a Makefile which does the basic automation tasks needed for those projects. I just meant to communicate it is probably not worth FUD’ing people away from Make for the reasons given is all. It’s extremely useful and I agree with the author that it would be helpful if my coworkers and peers picked up basic Makefile skills.
Yea, I’ve reached the conclusion that make is pretty nice for defining a simple interface for interacting with a project, but it shouldn’t be treated like a full-fledged programming language
Example Terraform service catalog of “root module” invocations for provisioning reference architectures - cloudposse/terraform-root-modules
this is an example of what I mean by “interface”
it shows how to interact with a project, but doesn’t do anything we couldn’t easily run by hand
Agree 100% that is my use for Makefiles, its just simple entrypoints for project interaction
which at the end in most cases uses other tools, it just “simplifies” the simple use cases
eg make run
Which might docker-compose -f docker-compose.yml -f docker-compose.dev.yml up
and print the dynamic ports
that sort of thing
Make is my solution to “shell scripts in wiki docs” hell.
From: Run these 18 commands
To: run make setup
how would that be different than run ./setup.sh
or ./setup.py
or ./setup.rb
or… ? it’s just moving the code into the repo either way…..
The problem is the shell scripts don’t have consistent ways in which they operate. Some read envs, others don’t. Some parse args others don’t. Some use positional arguments others don’t.
Make limits how you can call a target and you can only use envs to pass information.
That makes it a standardized interface
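A minimal sketch of what that interface looks like in practice (file names and targets here are just examples, not anyone’s actual project):

```make
# one consistent set of verbs per project; envs are the only inputs
ENV ?= dev
COMPOSE = docker-compose -f docker-compose.yml -f docker-compose.$(ENV).yml

setup:
	$(COMPOSE) pull

run: setup
	$(COMPOSE) up

.PHONY: setup run
```

Same `make setup` / `make run` interface everywhere, regardless of what the project uses under the hood.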
2019-03-25
@Milan Dasek late to the party but look at aggregation rules for precalculating and exposing a metric
An open-source monitoring system with a dimensional data model, flexible query language, efficient time series database and modern alerting approach.
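e.g. a Prometheus recording rule along these lines (using the metric names from the question above; the rule name is made up and you may want to filter kube_pod_status_phase by phase):

```yaml
# rules file: precompute pod occupancy so only one metric
# needs to be exported to CloudWatch and alarmed on
groups:
  - name: node-capacity
    rules:
      - record: cluster:pod_occupancy:percent
        expr: sum(kube_pod_status_phase) / sum(kube_node_status_allocatable_pods) * 100
```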
2019-03-26
My first real job was in advertising. I worked as a copywriter for an agency called Benton & Bowles in New York City. An artist or entrepreneur’s first job inevitably bends the twig. It s…
When you, the student writer, understand that nobody wants to read your shit, you develop empathy.
Lol
2019-03-27
Anyone here familiar with Hashicorp Consul? I can’t help but feel I’m doing something pretty wrong. I’ve got an autoscale group with 3 instances in it, each running the consul:1.4.4 docker container, but they won’t bootstrap properly. once all three are up, if i restart the docker containers, they start to work. it’s like the first instance starts up and doesn’t find the other two, and the -retry-join option with aws discovery doesn’t keep looking for new instances with the tag. Do I need to manually wait for at least 3 instances to be up before I start consul up on each of these instances?
Hi Alex, what does docker logs say for the consul containers?
Also which Os are you using and how do you bootstrap them ?
It’s a long shot but maybe the bootstrap script runs before the network is up ?
So, the instances all show some combination of this:
2019/03/27 15:33:22 [INFO] agent: Discovered LAN servers: 172.20.44.160 172.20.46.61
2019/03/27 15:33:22 [INFO] agent: (LAN) joining: [172.20.44.160 172.20.46.61]
2019/03/27 15:33:29 [ERR] agent: failed to sync remote state: No cluster leader
2019/03/27 15:33:30 [WARN] raft: no known peers, aborting election
2019/03/27 15:33:32 [INFO] agent: (LAN) joined: 1 Err: <nil>
2019/03/27 15:33:32 [INFO] agent: Join LAN completed. Synced with 1 initial agents
And then it will fail with just a lot of
2019/03/27 15:35:07 [ERR] agent: Coordinate update error: No cluster leader
2019/03/27 15:35:25 [ERR] agent: failed to sync remote state: No cluster leader
but when i restart the first instance that launched, it started working right away
docker run -d --net=host --name=consul -v /consul:/consul consul:1.4.4 consul agent -server -ui -bind=172.20.36.141 -client=0.0.0.0 -retry-join 'provider=aws tag_key=aws:cloudformation:stack-name tag_value=dev-consul' -bootstrap-expect=3 -data-dir=/consul
hmm why do you have bind set to a instance ip ?
that doesn’t seem right
It complains if i take the bind out, I’m pretty sure. Let me double check. Yeah,
==> Multiple private IPv4 addresses found. Please configure one with 'bind' and/or 'advertise'.
Environment Variables: Use the CONSUL_CLIENT_INTERFACE and CONSUL_BIND_INTERFACE environment variables. In the following example eth0 is the network interface of the container.
$ docker run \
-d \
-e CONSUL_CLIENT_INTERFACE='eth0' \
-e CONSUL_BIND_INTERFACE='eth0' \
consul agent -server -bootstrap-expect=3
taken from the docs
Yeah, saw that, but…
docker logs -f consul
==> Found address '172.20.36.141' for interface 'eth0', setting bind option...
==> Found address '172.20.36.141' for interface 'eth0', setting client option...
==> Multiple private IPv4 addresses found. Please configure one with 'bind' and/or 'advertise'.
ok
then try this
curl http://169.254.169.254/latest/meta-data/local-ipv4
or something similar to get the current IP of the instance
Yeah, that’s how I’m getting the address to setup the BIND-IP
argh sorry
No worries. If I could avoid having to do that metadata step I wouldn’t have been sad
But, I’m not sure that’s my problem anyways
yeah doesnt seem like it
what about the node-ids of the consul servers ? I just found an issue that it might be a problem
-disable-host-node-id
and also I assume that the /consul dir on the host OS is empty right?
Yeah, I guess my question is… does -retry-join periodically re-check for new instances with the provided tags?
-retry-interval - Time to wait between join attempts. Defaults to 30s.
-retry-max - The maximum number of -join attempts to be made before exiting with return code 1. By default, this is set to 0 which is interpreted as infinite retries.
so it’s infinite
I was just checking my old working docker command for consul and everything seems to be the same
you just don’t have the -datacenter argument
but it defaults to dc1
Yeah, I’m okay with dc1 for now
/consul/ on the host only contains an empty /consul/config file, because it complained otherwise
Curious why that retry didn’t find new instances
Good to know it should work that way. Downside, testing cycle for a “new” cluster is like… as long as it takes autoscaling to spin up 3 new instances after I can the existing ones. I didn’t see that it was working that way, but, I’ll poke around a bit more and find out.
hmmm I don’t think consul needs to persist its data
you should definitely try running it without that
without -data-dir
? It was giving me grief, let me try though. Yeah.
$ docker run -d --net=host --name=consul -v /consul:/consul consul:1.4.4 consul agent -server -ui -bind=$(curl -s http://169.254.169.254/latest/meta-data/local-ipv4) -client=0.0.0.0 -retry-join 'provider=aws tag_key=aws:cloudformation:stack-name tag_value=dev-consul' -bootstrap-expect=3
86ba1b7d7a420374d971969daade4835fa5f554b3f0ec2ad4819f6cec39dde6a
[ec2-user@ip-172-20-36-141 ~]$ docker logs -f consul
==> data_dir cannot be empty
argh yeah
It’s fine, this is some fine tuning for me. Really, I’d love to see a working production-ready cluster built with autoscale groups in cloudformation, but all the stuff I find is really dated or incomplete
I feel like this should be a solved problem
well I had the same but with terraform
and on a different os
What I have now was basically converted from various terraform modules, even hashicorps
but you are sure that the dir mounted with -v /consul:/consul on the host is empty?
when the host first comes up, yes
ok so that’s how you test, terminate all and wait
good
It will complain that /consul/config doesn’t exist, so in my userdata I have
# Consul data directory
mkdir -p /consul
chmod 777 /consul
touch /consul/config
and I assume the ports are open between the instances ?
so I just tried the command locally
and I get the same error but because I don’t have other consuls to connect to
docker run --net=host --name=consul4 consul:1.4.4 consul agent -server -ui -bind=192.168.10.181 -client=0.0.0.0 -retry-join 'provider=aws tag_key=aws:cloudformation:stack-name tag_value=dev-consul' -bootstrap-expect=3 --data-dir=/consul
bootstrap_expect > 0: expecting 3 servers
==> Starting Consul agent...
==> Consul agent running!
Version: 'v1.4.4'
Node ID: 'ba7904d7-c5ef-2547-01d6-15988833964c'
Node name: 'awesome-pc'
Datacenter: 'dc1' (Segment: '<all>')
Server: true (Bootstrap: false)
Client Addr: [0.0.0.0] (HTTP: 8500, HTTPS: -1, gRPC: -1, DNS: 8600)
Cluster Addr: 192.168.10.181 (LAN: 8301, WAN: 8302)
Encrypt: Gossip: false, TLS-Outgoing: false, TLS-Incoming: false
==> Log data will now stream in as it occurs:
2019/03/27 19:42:09 [INFO] raft: Initial configuration (index=0): []
2019/03/27 19:42:09 [INFO] raft: Node at 192.168.10.181:8300 [Follower] entering Follower state (Leader: "")
2019/03/27 19:42:09 [INFO] serf: EventMemberJoin: awesome-pc.dc1 192.168.10.181
2019/03/27 19:42:09 [INFO] serf: EventMemberJoin: awesome-pc 192.168.10.181
2019/03/27 19:42:09 [INFO] consul: Handled member-join event for server "awesome-pc.dc1" in area "wan"
2019/03/27 19:42:09 [INFO] discover-aws: Address type is not supported. Valid values are {private_v4,public_v4,public_v6}. Falling back to 'private_v4'
2019/03/27 19:42:09 [INFO] discover-aws: Region not provided. Looking up region in metadata...
2019/03/27 19:42:09 [INFO] consul: Adding LAN server awesome-pc (Addr: tcp/192.168.10.181:8300) (DC: dc1)
2019/03/27 19:42:09 [INFO] agent: Started DNS server 0.0.0.0:8600 (udp)
2019/03/27 19:42:09 [INFO] agent: Started DNS server 0.0.0.0:8600 (tcp)
2019/03/27 19:42:09 [INFO] agent: Started HTTP server on [::]:8500 (tcp)
2019/03/27 19:42:09 [INFO] agent: started state syncer
2019/03/27 19:42:09 [INFO] agent: Retry join LAN is supported for: aliyun aws azure digitalocean gce k8s os packet scaleway softlayer triton vsphere
2019/03/27 19:42:09 [INFO] agent: Joining LAN cluster...
2019/03/27 19:42:16 [ERR] agent: failed to sync remote state: No cluster leader
2019/03/27 19:42:19 [WARN] raft: no known peers, aborting election
2019/03/27 19:42:21 [ERR] agent: Join LAN: discover-aws: GetInstanceIdentityDocument failed: EC2MetadataRequestError: failed to get EC2 instance identity document
caused by: RequestError: send request failed
caused by: Get http://169.254.169.254/latest/dynamic/instance-identity/document: dial tcp 169.254.169.254:80: connect: no route to host
2019/03/27 19:42:21 [WARN] agent: Join LAN failed: No servers to join, retrying in 30s
Yeah, exactly. When the first instance is launched by the ASG, it only sees one instance (itself) from the -retry-join option I have
but check the last message
The second one sees two, the third one then sees all three. I restart the first one, it sees all three, a leader is elected
Oh yeah, I don’t remember seeing that message. Let me can my instances and see what happens, only takes like 5 minutes
or just
use docker-compose and try to cluster 3 dockers with the same command
let’s switch to a thread
we’ve already polluted random enough
good call
well… this is kind of a race condition problem, docker-compose is too fast
there shouldn’t be a race condition
afair
with the retry
I agree, from what the docs say
docker-compose can do dns
instead of aws you can specify the names of the consuls
and retry-join
Yeah, I’m just bouncing the whole cluster, see if I can recreate and watch for that message talking about the retry wait
cool
So, I had the same issue again… but it seems to have resolved itself, yet I don’t see any of the retry steps you saw
2019/03/27 19:48:12 [INFO] agent: started state syncer
2019/03/27 19:48:12 [INFO] agent: Retry join LAN is supported for: aliyun aws azure digitalocean gce k8s os packet scaleway softlayer triton vsphere
2019/03/27 19:48:12 [INFO] agent: Joining LAN cluster...
2019/03/27 19:48:12 [INFO] discover-aws: Address type is not supported. Valid values are {private_v4,public_v4,public_v6}. Falling back to 'private_v4'
2019/03/27 19:48:12 [INFO] discover-aws: Region not provided. Looking up region in metadata...
2019/03/27 19:48:12 [INFO] discover-aws: Region is us-east-1
2019/03/27 19:48:12 [INFO] discover-aws: Filter instances with aws:cloudformation:stack-name=dev-consul
2019/03/27 19:48:12 [INFO] discover-aws: Instance i-0f44119c6b9e9a862 has private ip 172.20.36.82
2019/03/27 19:48:12 [INFO] agent: Discovered LAN servers: 172.20.36.82
2019/03/27 19:48:12 [INFO] agent: (LAN) joining: [172.20.36.82]
2019/03/27 19:48:12 [INFO] agent: (LAN) joined: 1 Err: <nil>
2019/03/27 19:48:12 [INFO] agent: Join LAN completed. Synced with 1 initial agents
2019/03/27 19:48:19 [ERR] agent: failed to sync remote state: No cluster leader
2019/03/27 19:48:20 [WARN] raft: no known peers, aborting election
So it came up, only saw one instance in aws
but eventually the other two members joined, it elected a leader, and everybody is happy
¯\_(ツ)_/¯
Why did it not do this last night with no changes from then to now? lol
¯\_(ツ)_/¯
I remember once I had problems with stale configs so you should keep an eye out on thouse mounted folders
and instead of persisting the state
do a consul backup to s3
Well, I need this state to persist eventually
A repository that creates a docker image that auto discovers consul nodes and backs the configuration to s3. - parabolic/consul_backup
not my best work but you get the idea
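A bare-bones sketch of that backup idea, using Consul’s built-in snapshot command (the bucket name is made up, and you’d want this on a cron or sidecar):

```shell
#!/bin/sh
# take a point-in-time snapshot of the cluster (KV store, ACLs, etc.)
consul snapshot save "backup-$(date +%Y%m%d%H%M%S).snap"

# ship it off-host so losing all three instances isn't fatal
aws s3 cp backup-*.snap s3://my-consul-backups/
```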
alright
well I am glad I could be of some help
Yeah, because right now… if I have data in the k:v store, and all three instances die, i lose it
My plan was to automate backups at some interval to s3 and just deal with it, because it’s pretty unlikely to lose 3 instances all at once, but i’m still not super comfortable with that idea
yes that is kinda dangerous
@Erik Osterman (Cloud Posse) @loren @keen I might be getting carried away myself: https://gist.github.com/dustinlacewell/59dc2319812c3b6f3083b71aed046d05
That last one is pretty dope though. It opens an SSH tunnel through an EC2 instance to RDS, then starts a Docker container of the service that uses the DB to run migrations, while pointing the Docker container at localhost (because there is a tunnel) and then it kills the SSH tunnel.
Pretty handy
Overall, getting pretty DRY
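For the curious, the rough shape of that rule (hosts, ports and the image name are placeholders, not the actual gist):

```make
migrate-rds:
	# open a background tunnel: local 5432 -> RDS via the bastion
	ssh -f -N -L 5432:$(RDS_ENDPOINT):5432 ec2-user@$(BASTION_HOST)
	# run migrations against localhost, which the tunnel points at RDS
	docker run --rm --net=host $(APP_IMAGE) ./migrate --db-host=localhost
	# tear the tunnel back down
	pkill -f "ssh -f -N -L 5432:$(RDS_ENDPOINT)"
```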
it’s clear enough to me
LGTM too
I didn’t know you could do something like that
@Igor Rodionov
One thing I just realized is that all of those are evaluated even when that rule is not being executed. So I have edited the gist to show how to move those into the rule itself, so that they are only executed when that rule is.
what version of make do you have?
caution will rodgers!
4.2.1
eval is really funky in make
It basically just means “treat this as makefile syntax rather than a shell execution”
interesting, i use that same syntax to set per-target variables, also with make 4.2.1…
so if you wanna define a make variable within a rule you gotta use the $(eval ) function
What are you trying to show?
did you know eval executes even if you never call that target?
a Makefile is first and foremost a template
Only if it is outside of a rule
In the updated gist
None of the $(eval ) calls are evaluated until you specifically select the migrate-rds rule
That’s why I moved them from “per-rule” variables to lines inside the rule.
Because before it was trying to open up an SSH tunnel even for unrelated commands.
Now this doesn’t happen.
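A tiny example of the difference (target and variable names are made up):

```make
# a globally-assigned variable runs its $(shell ...) at parse time,
# on every invocation of make, whichever target you ask for:
#   NOW := $(shell date)

# inside a recipe, $(eval ...) only fires when this rule actually runs:
tunnel:
	$(eval PORT := 5433)
	@echo "tunnel on port $(PORT)"
```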
yeah, when i do something like this:
terraform/install: TERRAFORM_VERSION ?= $(shell $(CURL) https://checkpoint-api.hashicorp.com/v1/check/terraform | jq -r -M '.current_version' | sed 's/^v//')
then TERRAFORM_VERSION is not set when i run a different make target
No, but the right-hand side of ?= is evaluated
so you’re curling
curious
you should move that inside terraform/install
as I have, with $(eval TERRAFORM_VERSION ?= …)
change it to run touch
and no file exists after?
terraform/install: TERRAFORM_VERSION ?= $(shell touch fooooooobar)
@loren use $(call) or $(eval) on the right side
so maybe no curl?
I don’t think I use call
anywhere
Me either until a couple hours ago
Functions too!
Pretty dope I can now do make secret SECRET=db_password
or make output MODULE=aurora RESOURCE=endpoint
to get secrets or Terraform outputs respectively
have you checked out #variant?
it’s make on steroids
it allows for the definition of more user-friendly cli tools the way we use make
(or even call make from variant)
I’ll definitely look into it
2019-03-28
This post is contributed by Wesley Pettit, Software Engineer at AWS. As more companies adopt containers, developers need easy, powerful ways to test their containerized applications locally, before they deploy to AWS. Today, the containers team is releasing the first tool dedicated to this: Amazon ECS Local Container Endpoints. This is part of an ongoing open […]
CircleCi is as well
I checked out Variant but it seems quite limited compared to make, even in the pretty basic ways I use make. I guess if you don’t care about DRYness at all Variant is fine. It does produce a helpful usage string. @Erik Osterman (Cloud Posse) were you saying that you use Variant as a frontend to make?
variant is truly make on steroids
it’s not as well documented b/c that’s a lot of work
also, keep in mind you can use YAML anchors to keep things DRY
and yes, you can call out to make from variant
you can call out to anything from variant. the key is variant gives you a modern cli interface so that you can combine a mishmash of tools and present a common interface for all of them
Wrap up your bash scripts into a modern CLI today. Graduate to a full-blown golang app tomorrow. - mumoshu/variant
looking at all of his integration tests, you can see how to use it
(@mumoshu is in #variant as well)
explanation of YAML anchors
YAML anchors are pure love
also keep in mind that variant bakes in documentation for each command, while in make there’s none of that
the *exit_on_errors is an example of a YAML anchor
that’s very DRY
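for anyone unfamiliar, an anchor lets you define a mapping once and merge it into many places (the names here are made up, not from our configs):

```yaml
# define once, reuse everywhere
defaults: &task_defaults
  timeout: 300
  on_error: exit

tasks:
  deploy:
    <<: *task_defaults   # merge the anchored mapping in
    script: deploy.sh
  destroy:
    <<: *task_defaults
    script: destroy.sh
```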
you can take the output from one task and use it as input to another task
you can do conditionals:
you can test your tasks, so it’s clear how they logically relate to each other
You can get parameter validation
no native way to do that in make
Here’s our example: https://github.com/cloudposse/geodesic/blob/master/rootfs/usr/local/bin/kopsctl
Geodesic is a cloud automation shell. It's the fastest way to get up and running with a rock solid, production grade cloud platform built on top of strictly Open Source tools. ★ this repo! h…
Yeah, if I try to introduce this people will throw a fit about the lack of documentation, but thank you for pointing out that it does have some refactorability.
I did it by first going through all the pains solo, then with a task force of 5 core members and 4 part time supporters. We are now at the point where we are on-boarding all devs using mostly our own docs that we hope to (in most cases) contribute back
@Jan are you using variant?
Variant? I dont think so?
just back from 2 weeks vacation so im still working out who the fek I am never mind whats all going on
Maybe I’ll just throw it on top just to call make at first and get people hooked on the self-documenting nature
i love this meme
Omg thank you
I’ve never seen a tech one of those
Amazing meme
How do I DL that
2019-03-29
2019-03-30
Some interesting projects by Samsung: https://github.com/samsung-cnct
Samsung SDS Cloud Native Computing Team. Samsung SDS Cloud Native Computing Team has 30 repositories available. Follow their code on GitHub.