Random insight of the night: every couple years, someone stands up and bemoans the fact that programming is still primarily done through the medium of text. And surely with all the power of modern graphical systems there must be a better way. But consider:
* the most powerful tool we have as humans for handling abstract concepts is language
* our brains have several hundred millennia of optimizations for processing language
* we have about 5 millennia of experimenting with ways to represent language outside our heads, using media (paper, parchment, clay, cave walls) that don't prejudice any particular form of representation, at least in two dimensions
* the most wildly successful and enduring scheme we have stuck with over all that time is linear strings of symbols. Which is text.
So it is no great surprise that text is well adapted to our latest adventure in encoding and manipulating abstract concepts.
@rafial All accurate, but it misses the fact that Excel is REGULARLY misused for scientific calculations and near-programming-level things, since its GUI is so intuitive for doing math on things.
Like, GUI programming is HERE, we just don't want to admit it due to how embarrassing it is.
@Canageek very good point. Excel is actually the most widely used programming environment by far.
@rafial Now what we need to do is make a cheap, easy-to-use version of it that is designed for what scientists are actually using it for: column labels, semantic labels, faster calculations, better handling of mid-sized data (in the tens-of-thousands-of-data-points range), etc.
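Something like this pandas sketch (purely illustrative; the column names and the metadata are made up) captures the kind of workflow being described here: labelled columns, a bit of semantic metadata, and whole-column math over tens of thousands of rows, the sort of thing that currently gets done in Excel.

```python
# Illustrative sketch only (assumed pandas workflow, hypothetical column names):
# labelled columns plus simple metadata, with vectorized math over ~50k rows.
import numpy as np
import pandas as pd

n = 50_000  # "mid-sized" data: tens of thousands of points
df = pd.DataFrame({
    "two_theta_deg": np.linspace(5, 90, n),
    "intensity_counts": np.random.poisson(200, n),
})
# Semantic labels: record units alongside the data instead of in a header cell
df.attrs["units"] = {"two_theta_deg": "degrees", "intensity_counts": "counts"}

# A whole-column calculation expressed by label, not by cell ranges
df["intensity_norm"] = df["intensity_counts"] / df["intensity_counts"].max()
print(df.describe())
```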
@Canageek I'm wondering, given your professional leanings, whether you can comment on the use of "notebook" style programming systems such as Jupyter and of course Mathematica. Do you have experience with those? And if so, how do they address those needs?
Thanks @urusan, I found the article interesting, and it touched on the issue of how to balance the coherence of a centrally designed tool with the need for something open, inspectable, non-gatekept, and universally accessible.
PDF started its life tied to what was once a very expensive, proprietary tool set. The outside implementations that @Canageek refers to were crucial in it becoming a universally accepted format.
I think the core idea of the computational notebook is a strong one. The question for me remains whether we can arrive at a point where a notebook created 5, 10, 20 or more years ago can still be read and executed without resorting to software archeology. Even old PDFs sometimes break when viewed through new apps.
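For context, the closest thing notebook users do today is record the environment alongside the notebook itself. Below is a minimal sketch of such a provenance cell, assuming a Python/Jupyter setup and an illustrative package list; it doesn't guarantee the notebook will still execute in 20 years, it only documents what it ran on.

```python
# Minimal provenance cell (assumed Python/Jupyter setup): record what the
# notebook actually ran on, so a future reader at least knows the target.
import sys
import platform
from importlib import metadata

print("Python:", sys.version)
print("Platform:", platform.platform())
for pkg in ("numpy", "pandas", "matplotlib"):  # illustrative package list
    try:
        print(pkg, metadata.version(pkg))
    except metadata.PackageNotFoundError:
        print(pkg, "not installed")
```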
@Canageek @rafial You aren't processing those ShelX files on any sort of hardware (or software binaries) that existed in the late 1960s. At best, you're running the original code in an emulation of the original hardware, but you are probably running it on modern software designed to run on modern hardware.
Software archeology is inevitable and even desirable
What we want is an open platform maintained by software archeology experts that lets users not sweat the details
@urusan @rafial No, they've kept updating the software since then so it can use the same input files and data files. I'm reprocessing the data with the newest version of the software, using the same list of reflections that was measured from optical data wayyyy back.
The code has been through two major rewrites in that time, so I don't know how much of the original Fortran is the same, but it doesn't matter? I'm doing the calculations on the same raw data as was measured in the 60s.
There is rarely a POINT to doing so rather than growing a new crystal, but I know someone who has done it (he used Crystals rather than Shelx, but he could do that as the modern input file converter works on old data just fine).
@Canageek @rafial We're talking about 2 different things here. Of course data from over half a century ago is still useful.
The thing that's hard to keep running decades later is the code, and code is becoming more and more relevant in many areas of science.
Keeping old code alive so it can produce consistent results for future researchers is a specialized job
Ignoring the issue isn't going to stop researchers from using and publishing code, so it's best to have norms
@urusan @Canageek One other thing to keep in mind is that data formats are in some ways only relevant if there is code that consumes them. Even with a standard, at the end of the day a valid PDF document is, by de facto definition, one that can be rendered by extant software. Similar with ShelX scripts. To keep the data alive, one must also keep the code alive.
@mdhughes @rafial @urusan I mean, that is why Shelx's first major version came out in 1965 and the most recent one in 2013 (the last minor revision was in 2018).
I mean, modern versions of Fortran aren't any harder to write than C, which is still one of the most used programming languages on the planet. I don't see why everyone makes fun of it.
> SHELX is developed by George M. Sheldrick since the late 1960s. Important releases are SHELX76 and SHELX97. It is still developed but releases are usually after ten years of testing.
This is amazing.
@clacke @mdhughes @urusan @rafial Yeah, the big worry is that George Sheldrick is getting very, very old, and people wonder whether anyone will take over maintaining and improving the software when he dies. Luckily its largest competitor does have two people working on it, the original author and a younger professor, so it has a clear succession path.
@Canageek @rafial @urusan I'm kind of not making fun of Fortran, though the last time I saw any in production it was still F-77, because F-90 changed something they relied on and was too slow; I last worked on some F-77 for the same reason ~30 years ago.
I am indeed making fun of COBOL, but it'll outlive us by thousands of years as well.
Stable languages are good… but also fossilize practices that we've improved on slightly in the many decades since.