
abstract nonsense 

wait wtf I always thought currying, in like, the CS sense, and the tensor-hom adjunction for modules were basically the same thing. so I was going around thinking "oh yeah that means R-modules, R-algebras, etc. are all cartesian closed, cartesian closed just means there's an internal hom"

but that's not true, since the tensor product of modules is *not* the categorical product. it's even less true for algebras, where the tensor product is the *coproduct* of all things. this is fucking with me, because these cases are *the* example of "internal hom" in my head

in either of these cases, does the product actually have a right adjoint? I don't think so... In the case of vector spaces, definitely not, since the dimension would be fucky...
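For contrast, here's the version that does hold on the CS side: currying in Haskell, where `(,)` really is the categorical product. (A sketch, just the standard `curry`/`uncurry` renamed to avoid clashing with the Prelude.)

```haskell
-- In Hask (informally treated as cartesian closed), currying is the
-- adjunction Hom(a × b, c) ≅ Hom(a, b -> c), with (,) the real product.
-- For R-modules the same *shape* holds, Hom(a ⊗ b, c) ≅ Hom(a, Hom(b, c)),
-- but ⊗ is not the product, which is what makes modules monoidal closed
-- rather than cartesian closed.
curry' :: ((a, b) -> c) -> a -> b -> c
curry' f x y = f (x, y)

uncurry' :: (a -> b -> c) -> (a, b) -> c
uncurry' f (x, y) = f x y
```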

thinking of this because there's literally a haskell module called `Control.Monad.Trans.Identity`
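(For anyone who hasn't run into it: `IdentityT` from the `transformers` package really is just a newtype around the underlying monad, so lifting is a no-op. A minimal sketch:)

```haskell
import Control.Monad.Trans.Class (lift)
import Control.Monad.Trans.Identity (IdentityT, runIdentityT)

-- IdentityT m a is just m a in a newtype; runIdentityT unwraps it,
-- and lift is the wrapping.
example :: IO Int
example = runIdentityT (lift (return 42))
```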

defining MonadTrans instance so I can lift computations from my old gender to my new one

type of guy who pronounces SQL as "squeal"

typedef int16_t jort; // java short

I think this could also lead to a solution to the "template Haskell breaks cross-compilation" problem.

I really need to properly learn types and PL things at some point.

Lately I've been thinking about how in Haskell, Arrows and Monads can represent computations *more powerful* than `->`, but I think there could be better support for *less powerful* computations. Like computations where you are restricted to use constant time or memory, or where you can only use "compile-time-safe" operations (no FFI), or where the computation "runs" outside of Haskell (like in a shader, a SQL query, or on a microcontroller), and the Haskell program basically acts as a compiler.
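As a toy version of that last idea (all names invented for illustration, nothing from a real library): a tiny first-order expression type that Haskell never runs at all, but compiles to a string for some external backend.

```haskell
{-# LANGUAGE GADTs #-}

-- A deliberately *less powerful* computation than ->: no lambdas, no
-- recursion, just arithmetic, so it could target a shader or SQL
-- backend. `compile` is the "Haskell program acts as a compiler" part.
data Expr a where
  Lit :: Int -> Expr Int
  Add :: Expr Int -> Expr Int -> Expr Int
  Mul :: Expr Int -> Expr Int -> Expr Int

compile :: Expr a -> String
compile (Lit n)   = show n
compile (Add x y) = "(" ++ compile x ++ " + " ++ compile y ++ ")"
compile (Mul x y) = "(" ++ compile x ++ " * " ++ compile y ++ ")"

-- compile (Add (Lit 1) (Mul (Lit 2) (Lit 3))) == "(1 + (2 * 3))"
```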

I forget where but I remember some book on logic saying "the only difference between a finitist and a formalist is their attitude" and I think that describes how I feel about infinite stuff. Like, sure, only finite stuff can "really exist". But you can write axioms, deduction rules, theorems and proofs about infinite things using finitely many symbols, and I think asking which of those finite strings form valid proofs is interesting. (This also applies to ultra-finite stuff. Like sure, nothing bigger than 2^100 "exists". But any idea from any type of mathematics that anyone will ever have can be expressed in fewer symbols)

luke-warm software take 

Writing a library that takes the "wrap everything" approach is very painful, since you inevitably have to expose lower-layer options to the user anyway. Trying to use such a library is also painful, since inevitably you want to turn some knob that isn't exposed to you, and you have to hack around it.

luke-warm software take 

People often build software abstractions as boxes nested within boxes, where each layer completely encapsulates the one beneath it. Unless the lower layer is *truly* something the user will never care about, or something they can't afford to care about (e.g. to maintain portability), this is a mistake. It's better to build abstractions as if they have "two ports": one to plug into the layer beneath it and one for above it.

The obvious example is anything network-related. An HTTP client shouldn't wrap a transport implementation, it should just let the user provide one. Now if the user has specific needs with respect to TLS or socket options, they don't have to depend on the HTTP library exposing them. If you build a REST API on top of that, let the user provide the HTTP client. Now you don't need to worry about connection pools or proxies, but a user with specific needs can.

tl;dr dependency injection saves everyone a lot of work in the long run, and people should do it more
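A minimal sketch of the "two ports" idea in Haskell (hypothetical types; a real library would use ByteString, a request record, etc. -- the point is only the shape of the dependency):

```haskell
-- The transport port: the client accepts one instead of constructing one.
type Transport = String -> IO String  -- raw request -> raw response

-- The HTTP layer only formats and parses; the user plugs in whatever
-- transport (TLS settings, socket options, proxies) they need.
httpGet :: Transport -> String -> IO String
httpGet send path = send ("GET " ++ path ++ " HTTP/1.1\r\n\r\n")

-- A fake transport for tests: exactly what injection buys you.
fakeTransport :: Transport
fakeTransport req = return ("echo: " ++ req)
```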

(To be fair, I don't know much about it beyond where it shows up in pl, so it was basically like "imagine if in your proofs there were limits on how many times you could use each hypothesis")
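(The standard way that intuition gets formalized, for anyone curious: linear logic drops the structural rules that let you discard or duplicate a hypothesis.)

```latex
% Ordinary logic admits weakening (discard a hypothesis) and
% contraction (duplicate one):
\frac{\Gamma \vdash B}{\Gamma, A \vdash B}\;(\text{weakening})
\qquad
\frac{\Gamma, A, A \vdash B}{\Gamma, A \vdash B}\;(\text{contraction})
% Linear logic removes both, so a proof of the linear implication
% A \multimap B must use the hypothesis A exactly once.
```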

My friend (a mathematician) always says that mathematicians see logicians the way normal people see mathematicians. I tried to explain what I knew about linear logic to him and he was like "that sounds painful."

Every fp feature of rust just feels so clunky.

Ugh I just found a situation in rust where replacing `|x| f(x)` with `f` breaks type-checking.

ok, at the very least compiling my own 5.13 kernel fixed the modesetting issues.

still getting the same mce every time I boot, and still getting terrible raid performance. time to set up a write-back cache to hopefully fix that, and then figure out if the mce is actually causing any problems. if it's still bad then I guess rethink using raid 5 (this is storing data that can be obtained on the internet if I lose it, I just think it would be a pain to lose everything if a single drive died, and I don't want to have to think about multiple logical volumes, so raid5 seemed like a good compromise)

this is what I get for using hardware released in the last year.

ahhh apparently my motherboard does not support csm without a discrete graphics card, so I can't run memtest86+.

I guess I'll just bite the bullet and run the "freeware" PassMark (tm) memtest86, since it seems to be the only one that supports uefi.

unrelated, but I'm also getting a random mce, my ethernet card wasn't supported on buster so I had to use bullseye, and for some reason I need to disable modesetting just to get to a console, so today is just not a good day for computers.

this isn't based on any benchmarks, just like, using apt right after a clean install of debian is taking a loooong time

wow either I have things configured wrong, or software raid 5 through md is just super slow. I'll look into using an ssd as a write-back cache via mdadm, maybe that will help?
