All models are wrong, some models are useful.
Okay, I keep wishing for what the Programming Language Shootout (now aka the Benchmarks Game) used to be: a pile of order-of-magnitude benchmarks of various programming languages and implementations without too much bullshit in them. Now there are things in there that use identity as a hash function, things that use bleeding-edge algorithms or hand-written SIMD, or other bullshit like that; basically, they're massively overfitted.
So the goal is to try to make a set of idiomatic, casual benchmark programs that showcase basic operations, used in basic ways, with basic libraries. We want bog-standard, pretty reasonable code that doesn't do anything especially fancy, so we can see what kind of performance to expect from the first pass of a program.
Actually, "casual benchmarks" is a pretty good name.
For example we might have:
We will not have:
The point of this is to have a vague idea of the performance you can expect from normal code.
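To make "bog-standard, pretty reasonable code" a bit more concrete, here's a rough sketch of the flavor intended, written in Rust purely as an illustration (not necessarily a program that ends up in the suite): a naive word-frequency counter that uses the standard library hash map with its default hasher and no tuning at all.

```rust
// Hypothetical "casual" benchmark: count word frequencies from stdin using the
// standard library HashMap, default hasher, no tricks.
use std::collections::HashMap;
use std::io::{self, Read};

fn main() {
    let mut input = String::new();
    io::stdin().read_to_string(&mut input).expect("read failed");

    let mut counts: HashMap<&str, u64> = HashMap::new();
    for word in input.split_whitespace() {
        *counts.entry(word).or_insert(0) += 1;
    }

    // Print the ten most common words, so the work can't be optimized away.
    let mut pairs: Vec<_> = counts.into_iter().collect();
    pairs.sort_by(|a, b| b.1.cmp(&a.1));
    for (word, n) in pairs.iter().take(10) {
        println!("{word}\t{n}");
    }
}
```

The whole point is that this is the code you'd write on a first pass, not the code you'd write after profiling.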
See also: https://programming-language-benchmarks.vercel.app/ Probably best to just cherry-pick programs from that and make it generate actual graphs, tbh. The Benchmarks Game repo is also here: https://salsa.debian.org/benchmarksgame-team/benchmarksgame
Both are MIT licensed, so this should be too.
Hmmm.
Build flags: --release, without trying to fiddle with LTO at all or twist other knobs; for C this would be -O2, or maybe -O3, but again no knobs beyond that. Suggestions welcome, but also each new toolchain is gonna add a certain amount of work, mostly in the setup and installation of all these things on a particular system, so I reserve the right to say "sorry but I don't care that much".
Maybe divide them into "number-crunching", "data-munging" and "IO"? I want a representative sample of things beyond just heckin' KNN.
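For contrast with the word-count sketch above (which would land in "data-munging"), a hypothetical "number-crunching" entry might look like this: a completely naive triple-loop matrix multiply over plain nested Vecs, no blocking, no SIMD, no unsafe. Again, just an illustration of the flavor, not a committed benchmark.

```rust
// Hypothetical "number-crunching" entry: naive triple-loop matrix multiply.
// Plain nested Vecs, no blocking, no SIMD, no unsafe.
fn matmul(a: &[Vec<f64>], b: &[Vec<f64>]) -> Vec<Vec<f64>> {
    let n = a.len();
    let k = b.len();
    let m = b[0].len();
    let mut c = vec![vec![0.0; m]; n];
    for i in 0..n {
        for j in 0..m {
            let mut sum = 0.0;
            for x in 0..k {
                sum += a[i][x] * b[x][j];
            }
            c[i][j] = sum;
        }
    }
    c
}

fn main() {
    let n = 256;
    // Cheap, deterministic fill so every language's version computes the same thing.
    let a: Vec<Vec<f64>> = (0..n)
        .map(|i| (0..n).map(|j| ((i * j) % 7) as f64).collect())
        .collect();
    let b = a.clone();
    let c = matmul(&a, &b);
    // Print a checksum so the compiler can't optimize the work away.
    let checksum: f64 = c.iter().flatten().sum();
    println!("{checksum}");
}
```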