
> Because the id of a favourite is not public, an HTTP Link header can be parsed for next and previous pages.

What the fuck

Seriously. What the FUCK

@ultranova all of the mastodon APIs that return big lists of things include pagination info in the Link header

it's odd that you have to use it in that case, but it is at least consistent
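For reference, the pagination header the API sends back looks roughly like this (instance URL and ids made up):

Link: <https://mastodon.example/api/v1/favourites?max_id=123456>; rel="next", <https://mastodon.example/api/v1/favourites?min_id=654321>; rel="prev"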


@shadowfacts I'm confused about why they couldn't just wrap the result in more JSON

Like, all of the APIs I've worked with before included pagination data in the form of:

{ "page": 2, "content": [ ... ] }

or

{ "prev": "<id>", "next": "<id>", "length": 22, "content": [ ... ] }

because anything else was considered *extremely* poor API design?


@shadowfacts There are a bunch of different ways to do it, but the point is that if you're using JSON already, there's no reason not to package that data in with it. Including actionable data in an HTTP header that you have to *parse* the contents out of, while also using JSON to transmit data, is terrible.


@shadowfacts Another aspect of it is the fact that a Link header is unstandardized.

Yes, the JSON data version also is not standardized, but it's like 20x easier to access because you're *already* accessing JSON data. Using a Link header means the client (i.e. me) and the server both have to write a parser for it, and hope there are no unusual quirks across the hundreds of different mastodon servers and clients out there, each with their own parser.
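For a sense of scale, a naive parser for Mastodon's output is something like this (illustrative Python, not any particular client's actual code):

def parse_link_header(value):
    # naive parse of: <url1>; rel="next", <url2>; rel="prev"
    # good enough for Mastodon's output; a fully general parser has to handle more
    links = {}
    for part in value.split(","):
        url_part, _, rel_part = part.partition(";")
        if "=" not in rel_part:
            continue
        url = url_part.strip().strip("<>")
        rel = rel_part.split("=", 1)[1].strip().strip('"')
        links[rel] = url
    return links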

@ultranova I agree that it's poor API design, but I don't think it's that bad. It's a simple enough format, and you write one helper function that handles making requests and parses it there

@shadowfacts Well, that's an icky way to do it -- easier to have a function to parse it and separate "getting" from "parsing+processing"

@shadowfacts Currently we have a function like, get_bookmarks which:

- GETs from the API
- Parses out the Link headers and dumps them into a dict with 'content' and 'links' entries (roughly as sketched below)
- The caller can choose to discard that information if they want, which is really how it should have been done in the first place
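A rough sketch of that shape (simplified; names and the instance URL are placeholders, not our actual code):

import requests

def get_bookmarks(base_url, token, url=None):
    # GET from the API, or from a 'next'/'prev' URL returned by an earlier call
    r = requests.get(url or f"{base_url}/api/v1/bookmarks",
                     headers={"Authorization": f"Bearer {token}"})
    r.raise_for_status()
    # r.links is requests' parsed Link header, e.g. {'next': {'url': ..., 'rel': 'next'}}
    return {"content": r.json(), "links": r.links}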

@ultranova that works, but what I do is have a single function that takes a request object and gives back a tuple of the decoded result and pagination data.

that way the functions for individual api requests don't need to care about it and just delegate to the one central request function
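A sketch of that shape, in Python for comparison (hypothetical names):

def api_request(session, method, url, **kwargs):
    # one central place that performs the request, decodes the JSON,
    # and extracts pagination; callers that don't care ignore the second element
    r = session.request(method, url, **kwargs)
    r.raise_for_status()
    pagination = {rel: link["url"] for rel, link in r.links.items()}
    return r.json(), pagination

def get_favourites(session, base_url):
    # individual API functions just delegate and never think about pagination
    return api_request(session, "GET", f"{base_url}/api/v1/favourites")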

@shadowfacts I've learned the hard way that DRY works, up to a point. You can prematurely abstract and end up coding yourself into loops. I don't see the point of semantically separating the point at which the request is made, but I do see the point of separating parsing and verification from request processing.

The actual request is just one line; optimizing it further would make it weirder in cases where we need to rely on different data. Also, Mastodon.py already does this, and that's one of the many reasons its code is just really, really poor.

@shadowfacts Because after realising "Oh, we could put pagination parsing and request dispatch in the same function and make the API function call that", you realise that the thing that makes up most of your code space after that is the information construction -- making dicts from arguments.

So the next step is how to do it automagically -- after all, why draw the line here, we want optimal code size!? Well, Mastodon.py decided to (ab)use locals() to magically construct the arguments.
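The pattern being described looks roughly like this (a paraphrase of the idea, not Mastodon.py's actual code; names are illustrative):

def generate_params(local_vars, exclude=("session", "base_url")):
    # the locals() trick: every argument of the calling function becomes a request
    # parameter, minus anything excluded and anything left as None
    return {k: v for k, v in local_vars.items() if k not in exclude and v is not None}

def status_post(session, base_url, status, in_reply_to_id=None, media_ids=None,
                sensitive=None, spoiler_text=None, visibility=None):
    params = generate_params(locals())  # magically picks up all the arguments above
    return session.post(f"{base_url}/api/v1/statuses", data=params).json()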

I'm sure this worked *at first*, but then they had to deal with some unexpected problems. And because of the layers of abstraction involved, the workarounds for those problems make up most of the actual code space. When you look up how a request is handled, you can't actually see how it is handled: the layers of abstraction make you jump to 50 different places, each one meaning more mental space taken up and more tab space in the browser.

@shadowfacts Because of all of this, the code is blisteringly obtuse to read and understand, and I've spotted multiple bugs in the Mastodon.py code that I just do not have the energy to chase down, verify, and file.

And what is really, really funny is that the number of lines taken up in each request function to preprocess the arguments to mangle them into locals(), and then post-process the unneeded locals(), is larger than it would take to just write a dict, and involves like 4 levels of indirection that you have to jump around to understand on a first read.

@shadowfacts A good chunk of the code in some places is actually unnecessary, too, because requests omits fields that are None from a request. So like 90% of the work it is doing in the body of most request functions not only relies on Python Magic(tm), but is also utterly unnecessary.
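For illustration (this relies on requests dropping None-valued entries when it encodes params and form data; URL is made up):

import requests

# only 'limit' ends up in the query string; the None-valued keys are simply skipped
url = requests.Request("GET", "https://mastodon.example/api/v1/favourites",
                       params={"limit": 20, "max_id": None, "min_id": None}).prepare().url
print(url)  # https://mastodon.example/api/v1/favourites?limit=20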

@shadowfacts I guess what I'm monologuing about is that there needs to be a line drawn around abstraction before it gets out of control, and that line is very personal but needs to strike a balance between simplicity and also comprehension-to-new-eyes. And the best place IMO to draw that line is where the tasks become separate tasks?

Sorry for the rambling, it's almost 4am and my brain is all over the place lol

@ultranova that's fair. I guess it depends on the context. in Swift, actually making the request and decoding the JSON response is a good deal more than 1 line, so adding a little bit extra for pagination doesn't change much

@shadowfacts Oh yeah like, json decoding with Python is:

r = requests.get(blah)
response = r.json()
link_header = r.links  # .links is a property on the Response, already parsed into a dict

The link header parsing actually takes up two extra functions of about 4 - 6 lines each I think
