someone seriously needs to make something like spacemacs but instead it's emacs and acme
preferably some conventions and packages built around acme instead of modules to make emacs more acme-like


also acme's windowing needs work but so does emacs's

....maybe i should finally try sam???

@grainloom Given the work people have put into getting elisp to work in Guile, it might be possible to make a bunch of Emacs stuff work in Acme with appropriate wrappers.

@freakazoid tbh i'd rather have 9p based editor modules that could work independent of acme, instead of inheriting emacs's "everything runs in the same single threaded interpreter" thingy

@freakazoid yknow, loose coupling instead of all the hackery emacs modules can do
once that's working, yeah, sure, "emacsfs" might be a good idea

@grainloom Yeah, it's just a potential way to bootstrap.

Though there are some benefits to that promiscuous mixing, too. Like the ability for Helm to hook into anything. Even if something isn't designed to hook in, you can just redefine specific functions. For a loosely coupled design it's likely you'd only be able to hook in at intended hook points, which are never adequate.

@freakazoid idk, I'm trying to avoid the "editor is the os" paradigm
why not use something like the plumber for looking things up?

@grainloom Maybe. Part of my issue is the fact that processes on modern CPUs are very coarse-grained compared to coroutines, Erlang processes, or Lisp functions. I'm not a huge fan of Emacs as an OS either, but the actual OSes people use, and even Plan 9, are not up to the task IMO. I think we need something more like Oberon.

@grainloom I'm only referring to their granularity and the ability to have millions of them on a system. Can't do that with MMU-separated processes or even OS-level threads.
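the "millions of them on a system" claim is easy to sanity-check in Go, where each goroutine starts with a ~2 KiB stack (so even 100k of them cost only a few hundred MiB, far below what OS threads would need). a minimal sketch:

```go
package main

import (
	"fmt"
	"sync"
	"sync/atomic"
)

// spawn launches n goroutines and waits for all of them to finish.
// Each goroutine starts with a tiny runtime-managed stack, which is
// what makes this cheap compared to OS-level threads.
func spawn(n int) int64 {
	var count int64
	var wg sync.WaitGroup
	wg.Add(n)
	for i := 0; i < n; i++ {
		go func() {
			defer wg.Done()
			atomic.AddInt64(&count, 1)
		}()
	}
	wg.Wait()
	return count
}

func main() {
	fmt.Println(spawn(100_000)) // prints 100000
}
```

the same program with 100,000 MMU-separated processes or pthread-backed threads would exhaust memory or hit OS limits long before finishing.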

@grainloom As for use case, I'm thinking of all the little functions people stick in their .emacs file. AIUI these all become rc scripts with Acme. And then if you want any state, it needs to go into a file. At every step there's a certain amount of overhead: fork and exec rc, filesystem blocksize, IPC over some pipe-like thing using plain text which requires some kind of meaning imposed on it.

To me the move toward JSON is just validation that s-expressions were the right answer all along.

@freakazoid hmmm, i see.
but what iiiif... we keep the processes and have two ways of customizing things:
- fs trickery like in Plan 9 (replacing things in /bin, mounting file servers)
- source level modifications and custom executables

@freakazoid so instead of scripting, you can have:
- well defined in-process APIs
- source patching instead of dynamic monkey patching

it's kinda like scripting, except you do it at compile time

this way the medium-large-ish subsystems can't mess with each other in memory but within those subsystems you can have all kinds of shenanigans

@freakazoid or you can even script the subsystems

but the convention would be to keep things modular

so e.g. the autocompletion module can't mess with the ssh integration's internals. if you want wildly unrelated things to interact, you do IPC.

@grainloom IMO the right way to do this is with the object-capability model. In general it's very hard to tell in advance what things should go together and what shouldn't except at a very coarse level. And then you're putting things together that still shouldn't be able to mess with one another's internals. For example autocompletion code shouldn't be able to mess with syntax highlighting internals either, but you'd probably put them in the same process.

@freakazoid wouldn't that require OS / hardware / VM / language support though?

@grainloom Yes! And now you understand why I stick with Emacs for the time being. I see Acme as being "purer" but also a representation of an intermediate state of things I consider to be at least as broken as Emacs but requiring a bunch of effort to get there that's not necessary because for now I have Emacs.

@freakazoid then again, who knows when we'll have capability based CPUs

@grainloom @freakazoid we have CPUs that can provide VM separation

and have you seen how well that works?
it takes around 1000 lines of code to launch a vm on an SGX-enabled system

@grainloom I'd settle for a capability-based VM (in the JVM sense). We need to demonstrate fine-grained isolation of code in software before it'll make sense to try doing it in hardware.


@freakazoid the concept is pretty fuzzy but that's kind of the gist of it:
instead of giving everything in the system access to everything else, you give them an upper limit to how far they can reach.

@freakazoid @grainloom so what prevents OS-level threads from having the same capacity? is it the switching between kernel/user modes during rescheduling?

@epicmorphism @grainloom The size of the stack is the main issue with threads. Even if you reduce it from glibc's 2 meg default it typically still needs to be a multiple of the page size. And of course threads give you no isolation at all; you still need to get that from the language or VM.

@epicmorphism @grainloom Depends on how they're implemented. Go uses a very small stack per goroutine and has runtime stack checks IIRC. Many Scheme implementations allocate activation records from the heap, so they don't have a "stack" in the traditional sense at all.

@freakazoid @grainloom what makes it so small in Go? do they implement them with coroutines?

> Many Scheme implementations allocate activation records from the heap

Are activation records the same things as frames? If so, isn't it less efficient than allocating on the stack?

I have only a cursory understanding of some of these topics, do you know where I can learn more about it?

@epicmorphism @freakazoid AFAIK in any language where the runtime/libraries can do the bounds checking for you, you can make your stack as small as you want, because you don't rely on the MMU to enforce memory safety (which also means the unsafe parts of the language can still mess with arbitrary memory locations). With an MMU, you can only grow the stack in multiples of the page size.

@epicmorphism @freakazoid Go also makes it possible to reallocate a stack because it manages all pointers so if the realloc moves the stack it can just rewrite all the pointers in memory to point to the new location.
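you can watch this happen from a program: deep recursion forces the runtime to grow the goroutine stack by copying it to larger allocations, and a pointer into an old frame keeps working because the runtime rewrites it during the move. a minimal sketch:

```go
package main

import "fmt"

// recurse builds n stack frames, forcing the runtime to grow the
// goroutine stack (by copying it) several times along the way.
// first points into main's frame, which also lives on this stack;
// it stays valid across moves because the runtime rewrites pointers.
func recurse(n int, first *int) int {
	if n == 0 {
		return *first
	}
	return recurse(n-1, first)
}

func main() {
	x := 42
	fmt.Println(recurse(200_000, &x)) // prints 42
}
```

with a fixed C-style stack, either the 200k frames would overflow it or the pointer would dangle after a realloc; here both just work.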

@epicmorphism @freakazoid And this is kind of speculation-y but I _think_ if you also have a compacting garbage collector you can "defragment" memory while the program is still running and thus utilize it much better than if you were using malloc/realloc/free.

@epicmorphism @grainloom Goroutine stacks are small because Go uses runtime stack checking and starts with a very small stack by default. Modern Go grows the stack by copying it to a larger allocation; older versions grew it by adding segments instead of moving it.

Activation records are the same as stack frames in a stack-based language. And yes, it can be a bit less efficient to allocate them on the heap from a performance standpoint depending on your allocator, but the result is that continuations and coroutines are virtually costless in Scheme.

@freakazoid @epicmorphism why not choose processes then? I know they have a lower memory overhead in Plan 9 but idk by how much, and seL4 and other microkernel devs claim they have fast IPC.

@grainloom @epicmorphism I tried to see if it would even be possible to develop an OS on a currently available architecture that would support millions of processes, but between the 4k page size, the need for a kernel and process stack (the kernel stack can probably be avoided), and the size of the page table itself, which is at bare minimum 3 pages, it's something like 32k minimum per process. By way of comparison, an Erlang process or a Python coroutine is about 1k each.
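the ~32k floor above can be reconstructed from the numbers in the post; this is back-of-envelope arithmetic, not measured data, and the kernel-stack size is an assumption (16 KiB, as on x86-64 Linux):

```go
package main

import "fmt"

// perProcessBytes tallies the rough per-process memory floor for an
// MMU-separated process on x86-64, using the estimates from the post.
func perProcessBytes() int {
	const page = 4096       // x86-64 base page size
	pagetable := 3 * page   // minimal page-table tree: ~one page per level in use
	userStack := 1 * page   // MMU-backed stacks grow only in page-sized steps
	kernelStack := 4 * page // 16 KiB, as on Linux; possibly avoidable
	return pagetable + userStack + kernelStack
}

func main() {
	fmt.Println(perProcessBytes()) // 32768, i.e. the ~32k per-process floor
}
```

at that floor, a million processes need ~32 GB before they run a single instruction, versus ~1 GB for a million Erlang-style 1k processes.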
