i should set up a database to hold all the results of my weird little experiments, but the real problem there is keeping the source code synced with each row so it's all reproducible.
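one cheap way to do the row-to-source syncing might be to just stash a tar.gz of the source tree as a blob next to each result row. a minimal sketch, assuming sqlite and a made-up `runs` table (the schema and function names are my own invention):

```python
import io
import sqlite3
import tarfile
import time


def snapshot_source(paths):
    """tar-gz the given source files in memory, bugs and hacks included."""
    buf = io.BytesIO()
    with tarfile.open(fileobj=buf, mode="w:gz") as tar:
        for p in paths:
            tar.add(p)
    return buf.getvalue()


def record_result(db, result, source_paths):
    """insert one experiment result alongside a snapshot of the code that made it."""
    db.execute(
        "CREATE TABLE IF NOT EXISTS runs "
        "(id INTEGER PRIMARY KEY, ts REAL, result TEXT, source BLOB)"
    )
    db.execute(
        "INSERT INTO runs (ts, result, source) VALUES (?, ?, ?)",
        (time.time(), result, snapshot_source(source_paths)),
    )
    db.commit()
```

pulling the snapshot back out is just a `SELECT` plus `tarfile.open` on the blob, so every row carries exactly the code that produced it.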
so that is quite the problem, since reproducibility also depends somewhat on the libraries i'm using and all that. plus it's actually important to preserve whatever bugs and hacks were in the code when it ran, so something like git/svn probably wouldn't make sense
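the library side is at least cheap to record: a name==version fingerprint of everything installed, stashed next to each result row. a sketch using the stdlib (the function name is made up):

```python
import importlib.metadata


def library_fingerprint():
    """name==version for every installed distribution, suitable for
    stashing alongside a result row so the environment is on record."""
    return sorted(
        f"{dist.metadata['Name']}=={dist.version}"
        for dist in importlib.metadata.distributions()
    )
```

it won't catch C libraries or the interpreter itself, but it pins down the python-level deps for free.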
and i'd probably want to retrospectively tag results based on those traits for filtering when i'm pulling the data out, and ugh, everything is terrible
i did write a script once that tar-gz'd itself and its output on each execution, but that was obnoxious, and it's definitely not a database
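the self-archiving trick could look roughly like this (a reconstruction, not the original script; names are mine):

```python
import os
import sys
import tarfile
import time


def archive_run(output_path, script_path=None, tag="run"):
    """bundle the running script plus its output into a timestamped tar.gz."""
    script_path = script_path or os.path.abspath(sys.argv[0])
    name = f"{tag}-{time.strftime('%Y%m%d-%H%M%S')}.tar.gz"
    with tarfile.open(name, "w:gz") as tar:
        tar.add(script_path, arcname="script.py")  # the code, warts and all
        tar.add(output_path, arcname=os.path.basename(output_path))
    return name
```

calling `archive_run("results.csv")` at the end of an experiment leaves a self-contained bundle behind, which is exactly the obnoxious pile-of-tarballs situation described: reproducible, but unqueryable.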
here's an idea: you know those makefile alternatives that monitor which files a build step opens and treat those as its dependencies? i bet you could instead archive every one of those dependencies into the database for reproducibility. the big files (system libraries and such) wouldn't change very often, depending on the OS. ofc this doesn't cover the kernel or its syscalls, and pulling the files back out to actually use them could be troublesome (would they be enough to populate a chroot?), but otherwise it seems like a simple brute-force solution
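one way to get that file-monitoring without a build tool, assuming linux and strace installed: run the experiment under strace, parse out the successful open calls, and tar up everything it touched. a rough sketch (the regex is best-effort, and non-regular files like /dev and /proc entries get skipped):

```python
import os
import re
import subprocess
import tarfile
import tempfile

# matches open()/openat() lines in strace output and captures path + return value
OPEN_RE = re.compile(r'open(?:at)?\(.*?"([^"]+)".*\)\s+=\s+(-?\d+)')


def parse_opens(trace_lines):
    """pull the successfully-opened paths out of strace output lines."""
    opened = set()
    for line in trace_lines:
        m = OPEN_RE.search(line)
        if m and int(m.group(2)) >= 0:  # fd >= 0 means the open succeeded
            opened.add(m.group(1))
    return opened


def trace_and_archive(cmd, out="deps.tar.gz"):
    """run cmd under strace, then archive every regular file it opened."""
    with tempfile.NamedTemporaryFile(suffix=".trace") as log:
        subprocess.run(
            ["strace", "-f", "-e", "trace=open,openat", "-o", log.name] + cmd,
            check=True,
        )
        with open(log.name) as fh:
            deps = parse_opens(fh)
    with tarfile.open(out, "w:gz") as tar:
        for path in sorted(deps):
            if os.path.isfile(path):  # skip /dev, /proc, sockets, directories
                tar.add(path)
    return out
```

whether the resulting archive is actually enough to chroot into is the open question; it captures what was read on this run, not everything the program could ever need.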