Possible Ways to Extend Browsers
We already "extend" browsers with things like external viewers, but the integration with the browser isn't very good. Ideally those viewers would render in place inside the document and work together with the browser, tightly integrated with it and its other parts...
So, one solution is what's been touted under the name "component software". The basic idea is that, rather than building one single monolithic application that does everything from day one, we should be building a framework or architecture that can dynamically have functionality added or removed on the fly.
I can't tell how much of the problem is technical and how much is political.
Like, component-based applications are in a sense profoundly anti-corporate-capitalist, in that nobody can reasonably expect to control the 'look and feel' of a user's actual experience or cockblock their competition on the user's machine.
The unix shell does composability between totally independently developed things well, on the other hand.
@a_breakin_glass @eliotberriot @dredmorbius @natecull
The Alto didn't have these limits, and it's older than all of them. So I don't think they're the product of a lack of technical sophistication. I think in environments like Windows and Mac OS it was determined that app publishers would like total control, and despite ill-fated / nuked / half-hearted efforts to make sane interoperability possible, nobody does, because capitalism.
@enkiv2 @natecull @dredmorbius @eliotberriot the fact that, again, people get monoliths kind of works against this. if your app isn't designed for interop, and perhaps even if it is, the exposed function is little better than an API. if your app is a monolithic paint program, say, it's probably not designed for OTHER image editing programs to call it for services, even if the capability to do that is in the operating system.
@enkiv2 @eliotberriot @dredmorbius @natecull it's sort of self-perpetuating. people get monoliths, whether due to intentional choice or accidents of architecture, they come to expect monoliths, hence monoliths become normalised, thus reducing the incentive to implement extensibility, rinse and repeat
@enkiv2 @eliotberriot @dredmorbius @natecull honestly, maybe the whole "monolith design" thing is due to the limits of PCs before preemptive multitasking became commonplace; interop and non-monoliths in general don't look like a good idea when you've got to wait for another process to actually release control, and it could hang or otherwise screw up during that time.
What are the limits?
How did / do shell tools fail in them?
What if there were, I don't know, some lightweight application-wrapper that could be tossed around shell tools?
Or does that fail to address concerns of the GUI space?
(I've never designed GUI tools, and always found the space vaguely mystifying.)
@dredmorbius @a_breakin_glass @eliotberriot @natecull
GUI toolkits aren't really designed around composability. combining bits of one widget with bits of another means rewriting both, usually. widgets can't overlap, except special frame-type ones. each window is its own little domain. applications don't by default have the ability to embed each other's components. a non-technical end user certainly can't mash up two apps.
Can you think of any exceptions?
I've been watching what Bret Victor's been doing of late. Interesting stuff, though I think it may be more limited than Bret (or his fans) would like to believe.
@natecull would be one of the latter ;-)
Just for speculation, would a GUI widget be more composable if it were seen as a Unix-like pipe/filter mapping a stream of input events (with initial/accumulating state) to output events?
(Doing so would require a framework where an 'event' or a 'message' is itself an object; which I think Smalltalk can do but not many other OOP systems)
Or can GUI widgets fundamentally not be implemented as a stream/filter model at all?
I guess one difference is that a GUI widget would be a filter over at least TWO I/O streams, bound together.
Eg, a file or database view:
* input 1: updates or refresh events from database
* input 2: keyboard or mouse user control events
* output 1: updates or view selections to database
* output 2: graphical redraws to canvas, or events to lower-level component widgets
And state still needs to be allocated/deallocated for caching, etc.
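That four-stream shape can be sketched as a pure-ish step function from (state, tagged event) to (new state, tagged outputs). This is only an illustrative toy under the assumptions above; the `ListViewState` name, the event tags, and the widget itself are all invented for the example.

```python
from dataclasses import dataclass, field

# A hypothetical widget-as-filter: one step function consuming a tagged
# event from either input stream and emitting tagged outputs on either
# output stream. Tags stand in for the four streams listed above.

@dataclass
class ListViewState:
    items: list = field(default_factory=list)  # cached rows (allocated state)
    cursor: int = 0                            # current selection

def list_view_step(state, event):
    """Consume one tagged input event; return (state, tagged outputs)."""
    tag, payload = event
    out = []
    if tag == "db_update":                     # input 1: refresh from database
        state.items = payload
        out.append(("redraw", list(state.items)))        # output 2: repaint
    elif tag == "key":                         # input 2: user control events
        if payload == "down" and state.cursor < len(state.items) - 1:
            state.cursor += 1
        elif payload == "up" and state.cursor > 0:
            state.cursor -= 1
        elif payload == "enter":
            out.append(("db_select", state.items[state.cursor]))  # output 1
        out.append(("redraw", list(state.items)))
    return state, out

# Usage: fold the widget over a mixed stream of events, like a filter.
s = ListViewState()
events = [("db_update", ["inbox", "drafts", "sent"]),
          ("key", "down"), ("key", "enter")]
emitted = []
for ev in events:
    s, outs = list_view_step(s, ev)
    emitted.extend(outs)
```

Because the step function only sees tagged events and emits tagged events, in principle it could sit on either end of a pipe.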
This is where I'd *like* to think that something like functional reactive programming can simplify the usual GUI-widget object-and-callback mess. But I'm not sure the dust has settled from the various FRP paradigms figuring out what works, yet.
A widget is any kind of object in a GUI system that has graphical state.
Part of the complexity of GUIs is that you have to 'hook up' your widgets to receive events from others. A bit like lots of wires in a circuit board. Very brittle, hard to repurpose.
@natecull @eliotberriot @a_breakin_glass @enkiv2 That was pretty much what I was thinking. A Unix filter is entirely source-agnostic. Feed it via /dev/stdin, read from /dev/stdout, and you're good.
That's opposed to various system or library calls which have specific structures and datatypes that must be met.
Is there a way to bridge that divide? To make widgets that act as pipes or filters?
Check the repo for my recent edits, just pushed now.
I'd like docfs to work from any shell (including remote, no GUI, etc.). But for the full flavour, you'd want a terminal that can incorporate widgets, basically. How much of a good or bad idea /that/ is I'm not entirely sure.
But, see the Tektronix terminal.
What if the terminal could sprout /at least a sufficiently capable GUI viewer/?
A terminal that could speak a subset of semantic HTML, present a document, format the general structure, offer navigation, multiple fonts, at least static graphics. Document-viewing-in-terminal.
Or something like that.
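As a rough sketch of that document-viewing-in-terminal idea, here's a toy renderer for a tiny subset of semantic HTML, built on Python's standard `html.parser`. The particular tag subset and the lynx-style inline link targets are assumptions for illustration, not any standard.

```python
from html.parser import HTMLParser

# Walk a small subset of semantic HTML and emit plain text that keeps
# the general structure: headings, paragraphs, list items, link targets.

class TermRenderer(HTMLParser):
    def __init__(self):
        super().__init__()
        self.lines = []   # rendered output lines
        self.buf = []     # text collected for the current block
        self.href = None  # target of the link being read, if any

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            self.href = dict(attrs).get("href")

    def handle_endtag(self, tag):
        if tag == "a" and self.href:
            # keep the link target visible inline, lynx-style
            self.buf.append(f" <{self.href}>")
            self.href = None
        elif tag in ("h1", "h2"):
            text = "".join(self.buf).strip()
            self.lines += [text.upper(), "=" * len(text)]
            self.buf = []
        elif tag == "p":
            self.lines.append("".join(self.buf).strip())
            self.buf = []
        elif tag == "li":
            self.lines.append("  * " + "".join(self.buf).strip())
            self.buf = []

    def handle_data(self, data):
        self.buf.append(data)

r = TermRenderer()
r.feed("<h1>Docs</h1><p>See <a href='http://example.com'>the site</a>.</p>"
       "<ul><li>one</li></ul>")
rendered = "\n".join(r.lines)
```

The point isn't fidelity to the author's layout; it's "good enough for the reader, here and now," which is all a terminal viewer needs.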
@enkiv2 @a_breakin_glass @eliotberriot @natecull One question is whether or not that's a shell with browser capabilities or a browser with shell capabilities, and I suppose it depends on which end of the gun you're looking at. The question may be moot, or not.
In this talk of widgets and knowing partners / callers / etc., what if you /don't/ need all that, and can just stick stdin, stdout, and stderr on things (or maybe "datain" or something like that, reserving stdin for the command stream)?
@enkiv2 @a_breakin_glass @eliotberriot @natecull And there's enough /standard/ markup for the terminal to look at some document and say "OK, so this is how I've got to deal with this crud", and render appropriately.
Not render as in "this was what the author intended", but as in "this is good enough for the reader, here and now".
And then to figure out what the fuck to do with headers, footers, asides, links, embeds, A/V, citations, footnotes, tables, comments, trees, lists, etc.
Recognising that the "terminal" might be a TTS output / voice input. Or a touchscreen. a 4 cm display, or 200 cm display. Colour. eInk. Touch. Flat. Monochrome.
Anyhow: how do you build a CLI-GUI integrated concept around all of that.
I think Tcl/Tk widgets take a somewhat similar approach?
I don't really know, I've never used Tk, but I think they send text strings around.
And the whole Smalltalk-era MVC thing suggests we should cleanly separate model/view/control, but in practice we almost never do.
I feel like 'pipes of events' SHOULD be a good paradigm, but maybe needs a lot of 'tee' components... eg, a publish/subscribe protocol.
and, eg, even Unix pipes have a lot of components that don't strictly fit the pipe model. So there's some fundamental work that hasn't been done if we need pipes to be a first class model for all-purpose data communication.
Eg: 'cat filename' puts a file's lines ON the pipe, but doesn't use a pipe to GET that data. It's not clear that there's a good pipeful mechanism for 'get a chunk of data', but a lot of GUI stuff is selecting data.
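A 'tee' for event pipes can be as simple as a publish/subscribe hub: one stream fans out to any number of downstream filters. A minimal sketch, with all names invented for illustration:

```python
from collections import defaultdict

# A toy publish/subscribe hub: the fan-out that plain Unix pipes lack.
# Several subscribers can listen on one topic, so a single event
# stream can feed multiple downstream components at once.

class Hub:
    def __init__(self):
        self.subs = defaultdict(list)  # topic -> list of handlers

    def subscribe(self, topic, handler):
        self.subs[topic].append(handler)

    def publish(self, topic, event):
        for handler in self.subs[topic]:
            handler(event)

hub = Hub()
seen_by_view, seen_by_logger = [], []
hub.subscribe("clicks", seen_by_view.append)    # a widget listens...
hub.subscribe("clicks", seen_by_logger.append)  # ...and so does a logger
hub.publish("clicks", {"x": 10, "y": 20})
```

This still doesn't solve the 'get a chunk of data' direction, but it shows how little machinery the tee half needs.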
So far the interface that's best managed to combine CLI and GUI interfaces is the 'file directory'.
I can take a directory, and I know it always has roughly similar semantics:
* it has a text path
* it's a list of items
* items have names, properties and contents
* A given window (text or GUI) maps to a path and shows the contents
* I can 'navigate' around either mode
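Those shared semantics are exactly what the standard filesystem APIs expose; a quick sketch with Python's `pathlib` (the temp directory and file are just for the demo):

```python
import pathlib
import tempfile

# The directory semantics above, made concrete:
root = pathlib.Path(tempfile.mkdtemp())   # it has a text path
(root / "notes.txt").write_text("hello")

for item in root.iterdir():               # it's a list of items
    name = item.name                      # items have names...
    size = item.stat().st_size            # ...properties...
    body = item.read_text()               # ...and contents
```

Any window, text or GUI, that can bind to a path and run this loop gets the same navigation model for free.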
Can we use 'objects' instead of 'directories' and generalise this?
We would need, perhaps, a way of creating a 'virtual directory' where some 'files' were links to other places in the tree.
Those links (symlinks in a filesystem, or object references in an objectsystem) might be functions, perhaps.
Just this one concept might get us universal pure-functional computation...?
Like what if 'clicking a button' literally transported you to another 'place in the object tree'?
Could that cover all interaction?
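Here's a toy of that generalisation: an 'object tree' of nested dicts where some entries are callables, so 'clicking a button' is just navigating through a link-function and landing wherever it evaluates to. Everything here (the tree contents, the path syntax) is an invented illustration.

```python
# A hypothetical object tree. Entries are plain values, sub-trees, or
# callables; a callable entry is a 'link' that is evaluated in passing,
# transporting you to another place in the tree.

tree = {
    "inbox": {
        "1.msg": "hello",
        "reply": lambda: {"draft": "re: hello"},  # a 'button' as a link-function
    }
}

def navigate(tree, path):
    """Walk a slash-separated path, evaluating callable links as we go."""
    node = tree
    for part in path.strip("/").split("/"):
        node = node[part]
        if callable(node):
            node = node()  # 'clicking' transports us to the result
    return node

draft = navigate(tree, "/inbox/reply/draft")
```

Whether that covers /all/ interaction is the open question upthread, but navigation plus evaluation already gets surprisingly far.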
but even if we couldn't get to full interaction (and maybe we can though)
wouldn't it be, say, AWESOME if, eg your mail client appeared as 'just a directory' and you could then
In PowerShell, *you can already do this with some datatypes*. You can explore the Registry and WMI Database and Active Directory as if they were a tree of file-like 'places'
What if windows - and sub-window widgets - were directories?
It's not a virtual-filesystem based mail access, entirely, though it's got elements of that. Jerry Peek is a major fan.
Console email clients such as mutt or pine also get pretty close. A problem is that there's so much shitty (content-wise) email kicked around. Straight text is massively easier to deal with.
Restructuring crap to text might help, also just saying "fuck you, no" to ugly shit.
Another is to have an application for talking to directories and files. Midnight Commander (based on Norton Commander IIRC) is an example. It _also_ has a bunch of virtual filesystems -- various archive and packaging formats, FTP, SSH, and more.
@natecull @eliotberriot @a_breakin_glass @enkiv2 This is where rather than fucking around with symlinks, we just kind of separate /data storage/ from /data access/. The "filesystem" is a route to viewing file-shaped shit. As with /proc or /sys.
Where, say, in /docfs, there actually /are/ serialised representations of text (or audio, images, video, etc.) those /do/ have a specific storage spot. (I've been putting my "books" in the "/stacks" conceptually). But you don't go there generally.
* Control in (user interface, "stdin")
* Doc / data in (dataflow)
* Status out (system response, "stdout")
* Display out (visual)
* Doc / data out (dataflow)
* Errors out (system response, "stderr")
So that's three new pipelines?
What about talking to whatever the feeder / consumer of information is as well. In-band or out-of-band? Two more pipes?
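As a sketch of that six-stream process shape, here's a component modelled with one named queue per stream instead of the single stdin/stdout/stderr triple. Purely illustrative; the stream names follow the list above, and `uppercase_filter` is an invented stand-in component.

```python
from queue import Queue

# One queue per stream from the list above, instead of three fds.
class Streams:
    def __init__(self):
        self.control_in = Queue()   # user interface ("stdin")
        self.data_in = Queue()      # doc / data in (dataflow)
        self.status_out = Queue()   # system response ("stdout")
        self.display_out = Queue()  # visual output
        self.data_out = Queue()     # doc / data out (dataflow)
        self.errors_out = Queue()   # system response ("stderr")

def uppercase_filter(s):
    """A trivial component: pass a document through, report on status."""
    doc = s.data_in.get()
    s.data_out.put(doc.upper())
    s.status_out.put("ok")

s = Streams()
s.data_in.put("hello")
uppercase_filter(s)
```

Talking to the feeder / consumer out-of-band would just mean a couple more named queues on the same object, which is part of what makes the shape appealing.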
@natecull @dredmorbius @a_breakin_glass @eliotberriot
in the context of Tk, 'command' means arbitrary Tcl code. (any widget can bind any code to the "button1-up" event -- i.e., click and release -- but button has a default binding that shows it as pressed and released and then calls a function the programmer has provided)
tk is pretty conventional as a gui toolkit, mostly notable for being lightweight.