#random (2020-07)
Non-work banter and water cooler conversation
A place for non-work-related flimflam, faffing, hodge-podge or jibber-jabber you’d prefer to keep out of more focused work-related channels.
Archive: https://archive.sweetops.com/random/
2020-07-04
https://github.com/augmentable-dev/gitqlite Query git repo contents with SQL
Query git repositories with SQL. Uses SQLite virtual tables and go-git - augmentable-dev/gitqlite
2020-07-07
Sometimes I wonder where the random names for AWS API calls come from.
Please appreciate the maniac at AWS who was there to say:
- “NO!!!!!!!! WE CANNOT NAME THE API CALL DELETE-IMAGE!!!! WE DO IT IN BATCHES SO IT HAS TO BE BATCH-DELETE-IMAGE!!!!!!”
- “But that’s the only way to delete an image….”
- “WHAT DO YOU EVEN KNOW ABOUT ENGINEERING?!”
you want other jokes? Date formats are never the same. And depending on which endpoint you call, you have VpcId, VPCID, or VpcID
you have my total support, I run into those annoying, ugly patterns all the time
yeah, but different conventions come from a lack of coordination (i once saw it too at a not-that-bad team actually); peculiar names are most likely from egomaniacs
you’ve got my support for dealing with random capitalisation in response JSONs though
and even better, with sdk-go I’ve several times gotten responses with fields missing from the struct entirely. Not even a nil pointer, just nothing.
I was forced to defer + recover
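A minimal sketch of that defer + recover workaround (the Response and Instance types here are hypothetical stand-ins, not real aws-sdk-go structs):
```go
package main

import "fmt"

type Instance struct {
	VpcId *string
}

type Response struct {
	Instance *Instance
}

// vpcID digs into the response; if a nil pointer blows up anywhere along
// the chain, recover turns the panic into an empty value instead of
// crashing the whole program.
func vpcID(r *Response) (id string) {
	defer func() {
		if err := recover(); err != nil {
			id = "" // field was missing; fall back to an empty value
		}
	}()
	return *r.Instance.VpcId
}

func main() {
	// A response with the nested field missing entirely.
	fmt.Printf("%q\n", vpcID(&Response{}))
}
```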
fortunately i only write synchronous code these days
My favorite is never knowing whether an object uses name or id. And it’s never just name, it’s dbInstanceName or something else.
oh yes, names in create calls becoming ids here and there
also the fact that the aws cli writes to stdout but doesn’t take advantage of stdout being a stream, and still forces you to paginate… but that’s a more common problem
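For illustration, a rough sketch of the manual token-chasing loop this forces on you (listPage is a hypothetical stand-in for any token-paginated AWS-style call, not a real SDK API; each page could be streamed to stdout as it arrives):
```go
package main

import "fmt"

// listPage returns one page of results plus a token for the next page;
// an empty token means there are no more pages. Purely illustrative.
func listPage(token string) (items []string, nextToken string) {
	pages := map[string][]string{
		"":   {"a", "b"},
		"p2": {"c", "d"},
	}
	next := map[string]string{"": "p2", "p2": ""}
	return pages[token], next[token]
}

func main() {
	token := ""
	for {
		items, next := listPage(token)
		for _, item := range items {
			fmt.Println(item) // emit each result as soon as we have it
		}
		if next == "" {
			break
		}
		token = next
	}
}
```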
2020-07-08
Anyone else downsizing on their cars? We barely have a need for 1 car so I think I’m going to sell mine and share my wife’s. I don’t see myself going into work for over a year so I’ll save on payments, insurance, and maintenance.
Yup. I’m going to return mine to the automaker when the lease ends. I work and live in Barcelona and I’m barely using it apart from some weekend trips (before corona, anyway).
I did some calculations and it’s cheaper to rent a car once or twice a month than to keep paying the lease. So, no-brainer.
2020-07-09
Issue: agent: none doesn’t work with post actions. Jenkins: by design.
To generalise:
Issue: A non-issue in any other mature CI is an issue on Jenkins. Jenkins: by design.
GitHub has introduced READMEs for profiles, which people are using to display profile view counters
That’s a tracking pixel. Gross! :D
2020-07-10
We can’t send email more than 500 miles (2002) - http://web.mit.edu/jemorris/humor/500-miles
2020-07-16
EU’s top court invalidates data-sharing pact with US | News | Al Jazeera https://www.aljazeera.com/news/2020/07/eu-top-court-invalidates-data-sharing-pact-200716091848578.html
The ruling could require EU regulators to vet any new transfers due to concerns that the US can snoop on people’s data.
2020-07-19
Friggin love Gary V: https://www.instagram.com/tv/CC02SWaHC8a/?igshid=1x9kb8tbfe2ir
2020-07-20
Any mysql admins with tables containing more than 1M rows here? We have a table that will grow by 100k rows per day (already at 5M rows) and I wonder what I should start considering.
what table engine are you using?
I am planning on storing scans from a mass spectrometer in a MySQL database and would like to know whether storing and analyzing this amount of data is remotely feasible. I know performance varies …
Things I gleaned from a quick read:
- Being able to normalize the data is huge
- The disk read time starts playing a major role once your DB is monolithic. That’s when you start getting into big-data territory with distributed data stores and projects like Hadoop
- Spending time to carefully design the schema will be very valuable later on
@James Huffman all tables are using innodb
it’s fine because selects are fast and we have a batch of 100k writes happening within a few hours from an import task and ~17k writes happening throughout the day from user usage.
the table is rather simple: 10 columns, no foreign keys, and an auto-increment PK
so the selects are also fine
especially since we don’t do joins
Do you need all the data in the table to be quickly accessible? Or is the query & fetch time not so critical?
anything below a few seconds to retrieve the data is fine
awesome, in that case I think it should be pretty straightforward as long as you keep the data indexed
We have some tables with more than 10M rows where the data needs to be quickly accessible, so our main constraint is the innodb buffer pool: retrieving data with a join across two tables of 3M+ and 10M+ rows takes about 40 seconds just fetching from storage.
yeah with the current plan to ingest 100k new rows to this table per day we will hit 10M rows in less than 2 months from today
things to consider (see the sketch after this list):
- is your data well indexed?
- do you have enough RAM to keep a lot of the indexes in memory?
- is your data time-sensitive and do you actually need to hold all of that data in the same table/DB instance at all times? or can you archive some of it after it reaches a certain age?
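A quick sketch of checking the first two points with Go and go-sql-driver/mysql (the DSN, the scans table, and the created_at column are hypothetical placeholders; swap in your own):
```go
package main

import (
	"database/sql"
	"fmt"
	"log"

	_ "github.com/go-sql-driver/mysql"
)

func main() {
	db, err := sql.Open("mysql", "user:pass@tcp(db-host:3306)/mydb")
	if err != nil {
		log.Fatal(err)
	}
	defer db.Close()

	// How much RAM InnoDB may use to cache data and indexes; if the hot
	// indexes don't fit here, reads fall through to disk.
	var name, value string
	if err := db.QueryRow(
		"SHOW VARIABLES LIKE 'innodb_buffer_pool_size'").Scan(&name, &value); err != nil {
		log.Fatal(err)
	}
	fmt.Printf("%s = %s bytes\n", name, value)

	// EXPLAIN reveals whether a representative query uses an index or
	// falls back to a full table scan (type=ALL is the red flag).
	rows, err := db.Query(
		"EXPLAIN SELECT id FROM scans WHERE created_at > NOW() - INTERVAL 1 DAY")
	if err != nil {
		log.Fatal(err)
	}
	defer rows.Close()

	cols, err := rows.Columns()
	if err != nil {
		log.Fatal(err)
	}
	vals := make([]any, len(cols))
	for i := range vals {
		vals[i] = new(sql.RawBytes)
	}
	for rows.Next() {
		if err := rows.Scan(vals...); err != nil {
			log.Fatal(err)
		}
		for i, c := range cols {
			fmt.Printf("%s=%s ", c, *vals[i].(*sql.RawBytes))
		}
		fmt.Println()
	}
}
```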
@James Huffman sadly this data can’t be cleaned out because of its purpose. I need to take a closer look at the memory usage; maybe the innodb buffers are too low as well.
I mean it’s currently running on db.t3.small and the avg app time spent in the db is around 40ms (including connection time), rising to ~250ms for user operations when the import job is running
@vFondevilla that’s one of the ideas for the future, but right now I want to know where my potential bottlenecks are before I start thinking about changes to the database architecture
I managed a mysql instance with 5×10^9 rows, with 200-500k inserts/s and peaks of 800 Mbit/s of writes. Everything was pushed to its limits: we needed to use TokuDB as the engine and the data was stored on a Fusion-io flash drive. That was insane
2020-07-21
Anyone here generated their own docsets for Dash?
2020-07-23
Does the archive.sweetops.com site have better search than native slack? or is the idea just to have google indexing?
Think it was more that free Slack limits history to 10,000 messages. Or did, until recently; for the time being this Slack workspace has free unlimited history…
Our team is temporarily upgraded for another month or so.
Then it will go back to free plan and 10k messages.
The slack archive contains history from the beginning of time (literally) even when our plan reverts to the free tier.
2020-07-27
https://medium.com/@adefemi171/containerization-part-2-from-lxd-to-kubernetes-6d595035fbc9
I got a small piece up for you lovers of containerization
In my last post (which was part 1), where I introduced containerization and the types of containers I know, I said I would be writing on…
2020-07-29
Shameless plug: Here’s my pops jamming and singing on guitar. Proud of him!! https://youtu.be/8HlNv39cluE
2020-07-31
I’d like to put out a poll (completely anonymous, even I won’t know who answered what) on Employee Net Promoter Score. Reason for doing so here is to weigh my company’s score against a more broad score across many companies. As an encouragement to vote/bribe, here’s a cookie
cookie
lol this was the first gif that popped up with /giphy, but it’s not really what I was shooting for… kinda grungy
cookie
On a scale of 1-10, how likely are you to recommend working at your company to a friend?
We can call it the “DevOps Industry NPS”