We need to have the courage to imagine what it looks like to invest in new technology stacks.
HTTP is extremely complex. Most of that complexity goes unused most of the time, and the part that is used could be better defined and encapsulated.
HTTP/2 was, of course, an attempt to do this, and it succeeds in some ways, but it was fundamentally designed to make things easier for existing browser vendors.
Imagine a world where a large number of devices run on RISC-V variants, with microkernels implementing lightweight hypertext protocols and bespoke protocols built on lightweight frameworks that can be customized to be fit for purpose.
Even just the energy savings of such a setup would justify almost any amount of effort to get there.
Why can't we, with a _little_ extra hardware, have a working web browser on a ZX80?
Well, the answer is that we can, but the web is such a disaster that without far more RAM than the CPU can even address, it's not useful.
Of course there's nothing wrong with specialization. CPU design is hard, and people train their whole lives to understand and optimize that process. But, at the same time, why on earth do our CPUs have thousand-page manuals?
I seriously doubt more than one or two people in the world have a good idea of the complete semantics of vector operations on x86_64, for instance. I certainly don't.
But if we can't have confidence in the semantics of our hardware, how on earth can we expect to build software that does what we want - let alone what our _customers_ want?
@tindall Much as I appreciate a simple, elegant design, I can't help but think that in the end, elegance and correctness just don't matter much to users. For example, relatively few of us write either assembly or compiler backends. It hardly matters day-to-day whether the output is x86 or RISC-V, unless you're an unusual person who tries to do or at least understand everything yourself.
It might matter for security bugs, but by then it's far too late; several generations of hardware have already been designed, sold, and abandoned.
It seems like the place to make a difference is at the top of the stack, where careful UI design could help users achieve their goals better.
@skybrian maybe - but users (and, to be clear, we're all users of the ISA, the chip, and the OS, at the very least) do care about price, performance, and battery life, and downstack decisions can have a huge impact on those.
@tindall tbh i think it's only because intel have to justify cramming a billion transistors onto a chip *somehow*, and the limits of making the existing ISA faster were hit a long time ago :-/
@tindall I got myself a Spectrum Next and I've been toying with code to build a web browser on it.
And you are right - the main problem is memory management. Most web pages will have to be parsed in chunks, since all of the HTML can never exist in memory at the same time. That's a bit of a head scratcher. 😅
@wauz @tindall That's really what I'm aiming for. But on a Z80 with 16K memory blocks addressable at any given time, the problem is that even just reading a "normal" web page into memory to parse out the text is tricky. I'll have to load chunks of the HTML, strip out the text with minimal formatting information, and then figure out how to display that, Lynx-style, on the screen.
I have some parts of that in Z80 code, but a lot of it still needs to get done.
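(Not the Z80 code itself, but the chunked approach can be sketched in Python: a tiny state machine that strips tags from fixed-size buffers without ever holding the whole page in memory. This is a minimal sketch assuming a simple tag-stripping model — no entities, scripts, or nesting rules — just to show that parser state can be carried across chunk boundaries the way a banked-memory parser would have to.)

```python
# Sketch of chunk-at-a-time text extraction. Mirrors the constraint of
# reading a page through a small fixed-size window, e.g. one 16K bank.

def extract_text(chunks):
    """Strip tags from an iterable of string chunks, yielding text pieces.

    The one-bit parser state (inside a tag or not) is carried across
    chunk boundaries, so a tag split between two chunks is still handled.
    """
    in_tag = False
    for chunk in chunks:
        out = []
        for ch in chunk:
            if in_tag:
                if ch == '>':
                    in_tag = False
            elif ch == '<':
                in_tag = True
            else:
                out.append(ch)
        yield ''.join(out)

# Usage: feed the page in small pieces, as if paging it through one bank.
page = "<html><body><h1>Hi</h1><p>Hello, world</p></body></html>"
pieces = [page[i:i + 16] for i in range(0, len(page), 16)]
print(''.join(extract_text(pieces)))  # → HiHello, world
```

The point of the sketch is that the per-chunk state is tiny — one flag here, maybe a small buffer for entities and formatting in a real version — which is what makes the approach feasible on a machine that can't hold the document.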
@tindall because the hardware you'd put into a ZX80 to make that practical would be orders of magnitude more powerful than the ZX80 itself, reducing it to a really bad terminal at best.
They were terrible machines in almost every way except cost. I get where you're coming from, but the example is a spectacular stretch.
@stevelord no, no, I'm with you - I'm not saying we should have that, just that we need to stare deeply into the why. Is it because our protocols are complex? Partly. Is it because our software is bloated? Partly. What else is there? And why?
Such examination can be very instructive.
@tindall it's also planned obsolescence and designing to announce new features even when they work against the user. On top of that, environmental changes are shrinking life expectancies, due to material changes and ever-smaller manufacturing sizes. To what end, though? So we can run Electron apps 5% less slowly?
Somewhere along the way hardware and software forgot the best jazz is as much about the notes you don't play as those you do.
@icedquinn oh, absolutely! He was a slimy businessman through and through - he charged just a tiny bit of margin and immediately sold 50,000 units. Then he sold 1.5 million ZX81s and another 5 million Spectrums. It's a great way to make money.
@icedquinn in some ways, that's what makes me so angry about this whole situation. If the people running the show weren't so addicted to control and amassing piles of money they can't do anything with, they could still make enough to be very wealthy and leave plenty of room for real openness and innovation.
I've been working on and off on building a VGA card for my RC2014, and I will probably crash and burn because I'm not a hardware engineer, but it's been a ton of fun. And the interface on both sides is ridiculously simple and easy to implement. Contrast PCIe and HDMI...
@tindall @craigmaloney The pursuit of performance at the cost of simplicity has created this huge ivory tower divide between the low end of the stack and everyone else. And climbing that tower of knowledge is possible and incredibly rewarding, but systems should be simpler and resources need to be more available. A lot of this stuff isn't even as complex as it seems.
Oh, and locking essential standards behind paywalls and crap needs to stop too.
@tindall In fact, profound complexity creates a huge moat around software that aims to welcome open-source participation. Firefox OS suffered from this, as does Firefox itself, along with projects like ubports. Most enthusiasts--even those with a lot of SW experience--are hard-pressed to climb the mountains of complexity (usually barely documented) to understand enough about the system to contribute meaningful changes.
Unbounded complexity might be the defining challenge in SW.
@tindall I don't think HTTP/2 really fixed anything. It made a lot of things worse actually.
You can read the HTTP/1.1 RFC and create a conforming implementation from scratch in an extremely short time. It won't support every feature, but it will support enough features to be immediately practical and useful. And if it malfunctions, you can debug it with a copy of netcat, no specialized tools required.
HTTP/2 is dramatically more complex and much more opaque.