someone seriously needs to make something like spacemacs but instead it's emacs and acme
preferably some conventions and packages built around acme, instead of modules to make emacs more acme-like
@grainloom Given the work people have put into getting elisp to work in Guile, it might be possible to make a bunch of Emacs stuff work in Acme with appropriate wrappers.
@freakazoid tbh i'd rather have 9p-based editor modules that could work independently of acme, instead of inheriting emacs's "everything runs in the same single-threaded interpreter" thingy
@freakazoid yknow, loose coupling instead of all the hackery emacs modules can do
once that's working, yeah, sure, "emacsfs" might be a good idea
@grainloom Yeah, it's just a potential way to bootstrap.
Though there are some benefits to that promiscuous mixing, too. Like the ability for Helm to hook into anything. Even if something isn't designed to hook in, you can just redefine specific functions. For a loosely coupled design it's likely you'd only be able to hook in at intended hook points, which are never adequate.
@freakazoid idk, I'm trying to avoid the "editor is the os" paradigm
why not use something like the plumber for looking things up?
@grainloom Maybe. Part of my issue is the fact that processes on modern CPUs are very coarse-grained compared to coroutines, Erlang processes, or Lisp functions. I'm not a huge fan of Emacs as an OS either, but the actual OSes people use, and even Plan 9, are not up to the task IMO. I think we need something more like Oberon.
@grainloom I'm only referring to their granularity and the ability to have millions of them on a system. Can't do that with MMU-separated processes or even OS-level threads.
@grainloom As for use case, I'm thinking of all the little functions people stick in their .emacs file. AIUI these all become rc scripts with Acme. And then if you want any state, it needs to go into a file. At every step there's a certain amount of overhead: forking and exec'ing rc, filesystem block size, IPC over some pipe-like thing using plain text, which requires some kind of meaning imposed on it.
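That overhead is easy to see even from Python (a stand-in here; the same shape applies to forking rc from an editor): the same trivial job done as an in-process call versus a fresh process per call:

```python
import subprocess
import time

def greet():
    # the "in-process function" case: effectively free
    return "hello"

N = 50

t0 = time.perf_counter()
for _ in range(N):
    greet()
in_process = time.perf_counter() - t0

t0 = time.perf_counter()
for _ in range(N):
    # the "external script" case: fork + exec for every single call
    subprocess.run(["true"], check=True)
spawned = time.perf_counter() - t0

print(spawned > in_process)  # True: process spawning dominates by orders of magnitude
```

Not a rigorous benchmark, just the direction of the gap.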
To me the move toward JSON is just validation that s-expressions were the right answer all along.
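For what it's worth, the correspondence is mechanical; here's a naive, illustration-only converter (the s-expression surface syntax is made up) from JSON-shaped data to an s-expression:

```python
import json

def to_sexp(obj):
    """Naive JSON-value-to-s-expression printer (illustrative only)."""
    if isinstance(obj, dict):
        return "(" + " ".join(f"({k} {to_sexp(v)})" for k, v in obj.items()) + ")"
    if isinstance(obj, list):
        return "(" + " ".join(to_sexp(v) for v in obj) + ")"
    return json.dumps(obj)  # strings and numbers render the same in both

doc = {"cmd": "open", "args": ["/etc/hosts", 42]}
print(json.dumps(doc))  # {"cmd": "open", "args": ["/etc/hosts", 42]}
print(to_sexp(doc))     # ((cmd "open") (args ("/etc/hosts" 42)))
```

Same tree, different parentheses.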
@freakazoid hmmm, i see.
but what iiiif... we keep the processes and have two ways of customizing things:
- fs trickery like in Plan 9 (replacing things in /bin, mounting file servers)
- source level modifications and custom executables
@freakazoid so instead of scripting, you can have:
- well defined in-process APIs
- source patching instead of dynamic monkey patching
it's kinda like scripting, except you do it at compile time
this way the medium-large-ish subsystems can't mess with each other in memory but within those subsystems you can have all kinds of shenanigans
@freakazoid or you can even script the subsystems
but the convention would be to keep things modular
so e.g. the autocompletion module can't mess with the ssh integration's internals. if you want unrelated things to interact, you do IPC.
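A toy version of that boundary, using a plain socket pair in Python (not real 9p, names hypothetical, just the shape of it): the two "modules" can only exchange messages, never reach into each other's state.

```python
import socket

# two "modules" that can only exchange messages, never touch each other's memory
autocomplete, ssh_integration = socket.socketpair()

autocomplete.sendall(b"complete: host db")
request = ssh_integration.recv(1024).decode()
print(request)  # the ssh side sees only the request text

ssh_integration.sendall(b"db1 db2 db-backup")
reply = autocomplete.recv(1024).decode()
print(reply)    # the completion side sees only the reply

autocomplete.close()
ssh_integration.close()
```

The interface is the whole contract; monkey patching across it isn't even expressible.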
@grainloom IMO the right way to do this is with the object-capability model. In general it's very hard to tell in advance what things should go together and what shouldn't except at a very coarse level. And then you're putting things together that still shouldn't be able to mess with one another's internals. For example autocompletion code shouldn't be able to mess with syntax highlighting internals either, but you'd probably put them in the same process.
@grainloom Yes! And now you understand why I stick with Emacs for the time being. I see Acme as "purer", but also as an intermediate state of things that I consider at least as broken as Emacs, and getting there would take a bunch of effort that isn't necessary while I still have Emacs.
@grainloom I'd settle for a capability-based VM (in the JVM sense). We need to demonstrate fine-grained isolation of code in software before it'll make sense to try doing it in hardware.
@freakazoid the concept is pretty fuzzy but that's kind of the gist of it:
instead of giving everything in the system access to everything else, you give them an upper limit to how far they can reach.
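A minimal sketch of that idea in Python (all names hypothetical): a module gets handed an object that can read exactly one file, rather than ambient authority over the whole filesystem.

```python
import os
import tempfile

class ReadCap:
    """Capability granting read access to exactly one file (sketch)."""
    def __init__(self, path):
        self._path = path

    def read(self):
        with open(self._path) as f:
            return f.read()

def word_count(cap):
    # this "module" can reach no further than the capability it was handed
    return len(cap.read().split())

with tempfile.NamedTemporaryFile("w", suffix=".txt", delete=False) as f:
    f.write("loose coupling beats monkey patching")
    path = f.name

count = word_count(ReadCap(path))
print(count)  # 5
os.unlink(path)
```

Python can't actually enforce this (nothing stops code from calling `open` itself), so this only shows the pattern; real ocap systems make the capability the *only* way to reach the resource.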
@epicmorphism @freakazoid AFAIK in any language where the runtime/libraries do the bounds checking for you, you can make your stacks as small as you want, because you don't rely on the MMU to enforce memory safety (which also means the unsafe parts of the language can still mess with arbitrary memory locations). With an MMU, you can only grow the stack in multiples of the page size.
@epicmorphism @grainloom Goroutine stacks are small because Go uses runtime stack checking and starts with a very small stack by default. Early versions grew the stack by segmenting it; since Go 1.3 it actually grows it by copying it to a larger contiguous block.
Activation records are the same as stack frames in a stack-based language. And yes, it can be a bit less efficient to allocate them on the heap from a performance standpoint depending on your allocator, but the result is that continuations and coroutines are virtually costless in Scheme.
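Scheme isn't handy here, but Python generators make the same point about heap-allocated activations (a sketch, not a benchmark):

```python
import sys

def counter():
    """A trivial coroutine: hands out successive integers on demand."""
    n = 0
    while True:
        yield n
        n += 1

c = counter()
print(next(c), next(c))         # 0 1
# a suspended generator is one small heap object, not a page-aligned stack:
print(sys.getsizeof(c) < 4096)  # True on CPython
```

Each suspended generator carries only its frame, which is why you can have huge numbers of them where OS threads would exhaust memory.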
@grainloom @epicmorphism I tried to see if it would even be possible to develop an OS on a currently available architecture that would support millions of processes, but between the 4k page size, the need for a kernel and process stack (kernel stack can probably be avoided), and the size of the pagetable itself, which is bare minimum 3 pages, it's something like 32k minimum per process. By way of comparison an Erlang process or a Python coroutine are each about 1k.
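A back-of-the-envelope version of that arithmetic, assuming 4 KiB pages and one page each for the user and kernel stacks (code and data pages would push the total toward the ~32k figure):

```python
PAGE = 4096  # 4 KiB, the usual minimum page size

pagetable    = 3 * PAGE  # bare-minimum pagetable, as above
user_stack   = PAGE      # can't be smaller than one page under an MMU
kernel_stack = PAGE      # "can probably be avoided", but counted here

floor = pagetable + user_stack + kernel_stack
print(floor)          # 20480 bytes: ~20 KiB before any code or data pages
print(floor // 1024)  # 20 -- versus roughly 1 KiB for an Erlang process
```

So even the most generous accounting leaves MMU-separated processes an order of magnitude heavier than runtime-level ones.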