Random insight of the night: every couple years, someone stands up and bemoans the fact that programming is still primarily done through the medium of text. And surely with all the power of modern graphical systems there must be a better way. But consider:
* the most powerful tool we have as humans for handling abstract concepts is language
* our brains have several hundred millennia of optimizations for processing language
* we have about five millennia of experimenting with ways to represent language outside our heads, using media (paper, parchment, clay, cave walls) that don't prejudice any particular form of representation, at least in two dimensions
* the most wildly successful and enduring scheme we have stuck with over all that time is linear strings of symbols. Which is text.
So it is no great surprise that text is well adapted to our latest adventure in encoding and manipulating abstract concepts.
@rafial Both accurate and also misses the fact that Excel is REGULARLY misused for scientific calculations and near-programming level things since its GUI is so intuitive for doing math on things.
Like, GUI programming is HERE, we just don't want to admit it due to how embarrassing it is.
@rafial Now what we need to do is make a cheap, easy to use version of it that is designed for what scientists are using it for it. Column labels, semantic labels, faster calculations, better dealing with mid-sized data (tens of thousands of data point range), etc
@Canageek I'm wondering, given your professional leanings, if you can comment on the use of "notebook" style programming systems such as Jupyter and of course Mathematica. Do you have experience with those? And if so, how do they address those needs?
Thanks @urusan, I found the article interesting, and it touched on the issue of how to balance the coherence of a centrally designed tool with the need for something open, inspectable, non-gatekept, and universally accessible.
PDF started its life tied to what was once a very expensive, proprietary tool set. The outside implementations that @Canageek refers to were crucial in it becoming a universally accepted format.
I think the core idea of the computational notebook is a strong one. The question for me remains whether we can arrive at a point where a notebook created 5, 10, 20 or more years ago can still be read and executed without resorting to software archeology. Even old PDFs sometimes break when viewed through new apps.
@Canageek @rafial You aren't processing those ShelX files on any sort of hardware (or software binaries) that existed in the late 1960s. At best, you're running the original code in an emulation of the original hardware, but you are probably running it on modern software designed to run on modern hardware.
Software archeology is inevitable and even desirable
What we want is an open platform maintained by software archeology experts that lets users not sweat the details
@urusan @rafial No, they've kept updating the software since then so it can use the same input files and data files. I'm reprocessing the data using the newest version of the software using the same list of reflections that was measured using optical data from wayyyy back.
The code has been through two major rewrites in that time, so I don't know how much of the original Fortran is the same, but it doesn't matter? I'm doing the calculations on the same raw data as was measured in the 60s.
There is rarely a POINT to doing so rather than growing a new crystal, but I know someone who has done it (he used Crystals rather than ShelX, but he could do that since the modern input file converter works on old data just fine)
The thing that's hard to keep running decades later is the code, and code is becoming more and more relevant in many areas of science.
Keeping old code alive so it can produce consistent results for future researchers is a specialized job
Ignoring the issue isn't going to stop researchers from using and publishing code, so it's best to have norms
@urusan @Canageek One other thing to keep in mind is that data formats are in some ways only relevant if there is code that consumes them. Even with a standard, at the end of the day a valid PDF document is, by de-facto definition, one that can be rendered by extant software. Similar with ShelX scripts. To keep the data alive, one must also keep the code alive.
The semantics of, say, addition aren't going to change. Once you define c = a + b to mean adding a and b and assigning the result to c, you no longer need a reference implementation and you can treat this code like a well-defined data format.
Of course, I'm leaving out a lot of detail here, like what do you do on overflow?
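To make that overflow caveat concrete, here's a minimal Python sketch (the `add_i64` helper is my own invention) showing that the "same" addition can give different answers depending on the integer semantics you picked:

```python
# Python's built-in integers are arbitrary precision, so a + b never wraps.
a, b = 2**63 - 1, 1
print(a + b)  # 9223372036854775808

# Fixed-width integers (as in C or Fortran) wrap on overflow.
# add_i64 is a hypothetical helper simulating 64-bit two's-complement addition.
def add_i64(x, y):
    s = (x + y) & 0xFFFFFFFFFFFFFFFF  # keep only the low 64 bits
    return s - 2**64 if s >= 2**63 else s

print(add_i64(2**63 - 1, 1))  # -9223372036854775808
```

Two implementations can agree on "c = a + b" and still disagree on the result, which is exactly why a spec (or a reference implementation) has to pin these details down.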
@clacke @Canageek @mdhughes @rafial Having a reference implementation just lets you defer to the reference implementation as your spec, and if it's on a well known platform then it can be reasonably emulated on different hardware.
When you think about the reference implementation as a quasi-spec, then it becomes clear that most mainstream languages already have a reference implementation, and thus one of these quasi-specs already.
Just because we can theoretically re-implement Python 2.5.1 as it would run on a 64-bit x86 on your future 128-bit RISC-V processor doesn't mean that you would want to
You just want to see the results, and you don't want them to differ, say because of the 64-bit vs 128-bit difference
A standard platform facilitates this
Plus, C and Fortran used to be high level back in the day. All of these languages are portable across computer architectures
Python has been changing rapidly because it has been transforming into a better form for the long haul, and Julia's changes over the last few releases have been much less disruptive. They'll settle down
@urusan @clacke @mdhughes @rafial I've been tempted to stop teaching myself Python and learn something more stable like Lua instead, but everyone else is using Python, and it gets more painful to use every year.
I used to just download an exe of PyMOL and run an installer, and now I need to use some garbage called pip, and heaven help you if you use the wrong set of install instructions or run pip instead of pip3 or vice versa.
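For what it's worth, the pip-vs-pip3 ambiguity can be sidestepped by invoking pip through the interpreter you actually mean, ideally inside a virtual environment. A minimal sketch (the environment path and the `pymol-open-source` package name are assumptions on my part):

```shell
# Create an isolated environment tied to a specific interpreter.
python3 -m venv ~/.venvs/pymol

# Activate it (bash/zsh syntax).
source ~/.venvs/pymol/bin/activate

# "python -m pip" always runs the pip belonging to that interpreter,
# so pip-vs-pip3 confusion can't arise.
python -m pip install --upgrade pip
python -m pip install pymol-open-source  # package name is an assumption
```

It doesn't make pip any less of a moving target, but it does keep one project's mess out of another's.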
Then there is the crystallography software that hasn't updated its install instructions since 1999, where you have to manually add a bunch of stuff to the PATH and manually tell it where your web browser, POV-Ray, IrfanView, and text editor executables are. But I'm confident it will still work next year.