has reached the end-of-life and is now read-only. Please see the EOL announcement for details

> Because the id of a favourite is not public, an HTTP Link header can be parsed for next and previous pages.

What the fuck

Seriously. What the FUCK

@ultranova all of the mastodon APIs that return big lists of things include pagination info in the Link header

it's odd that you have to use it in that case, but it is at least consistent
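For anyone who hasn't seen one: the header these APIs send looks roughly like the sketch below (the URLs are illustrative, not from a real response), and a minimal parser is only a few lines:

```python
# Illustrative sketch of a pagination Link header as Mastodon-style APIs
# send it; the URLs here are made up, not copied from a real response.
link_header = (
    '<https://example.social/api/v1/favourites?max_id=100>; rel="next", '
    '<https://example.social/api/v1/favourites?min_id=200>; rel="prev"'
)

def parse_link_header(value):
    """Parse a Link header value into a {rel: url} dict (minimal sketch;
    ignores edge cases like extra parameters or commas inside URLs)."""
    links = {}
    for part in value.split(","):
        url_part, _, params = part.partition(";")
        url = url_part.strip().lstrip("<").rstrip(">")
        rel = params.strip().removeprefix('rel="').rstrip('"')
        links[rel] = url
    return links

links = parse_link_header(link_header)
print(links["next"])  # https://example.social/api/v1/favourites?max_id=100
```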



@shadowfacts I'm confused about why they couldn't just wrap the result in more JSON

Like, all of the APIs I've worked with before included pagination data in the form of:

{ "page": 2, "content": [ ... ] }

or:

{ "prev": "<id>", "next": "<id>", "length": 22, "content": [ ... ] }

because anything else was considered *extremely* poor API design?
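Consuming that envelope style is trivial precisely because it's ordinary JSON access, no extra parsing layer; a minimal sketch using field names like the examples above:

```python
import json

# Hedged sketch: consuming the JSON-envelope pagination style shown above.
# The field names mirror the examples in the post; any real API may differ.
raw = '{"prev": "98", "next": "123", "length": 2, "content": ["a", "b"]}'
payload = json.loads(raw)

items = payload["content"]     # the actual list of things
next_cursor = payload["next"]  # pagination is just another key
print(next_cursor)  # 123
```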

@shadowfacts There are a bunch of different ways to do it but the point is that if you're using JSON already, there's no reason to not package that data in with it. Including actionable data in a HTTP header that you have to *parse* out the contents of, while also using JSON to transmit data, is terrible.

@shadowfacts Another aspect of it is that, in practice, Link header handling is unstandardized: the header format has a spec (RFC 8288), but most JSON clients don't parse it for you out of the box.

Yes, the JSON version isn't standardized either, but it's like 20x easier to access because you're *already* working with JSON data. Using a Link header means the client (i.e. me) and the server now both have to write a parser for it, and hope there are no unusual quirks across the hundreds of different Mastodon servers and clients out there, each with their own parser.

@ultranova I agree that it's poor API design, but I don't think it's that bad. It's a simple enough format, and you write one helper function that handles making requests and parses it there

@shadowfacts Well, that's an icky way to do it -- easier to have a function to parse it and separate "getting" from "parsing+processing"

@shadowfacts Currently we have a function like get_bookmarks which:

- GETs from the API
- parses out the Link headers and dumps them into a dict with 'content' and 'links' entries

The caller can choose to discard the links if they want, which is really how it should have been done in the first place

@ultranova that works, but what I do is have a single function that takes a request object and gives back a tuple of the decoded result and pagination data.

that way the functions for individual api requests don't need to care about it and just delegate to the one central request function
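A minimal sketch of that pattern, with a stub standing in for a real response object (the names here are illustrative, not from any actual client):

```python
def decode_response(r):
    """Central helper: takes a response object, returns (decoded_json, links).
    Endpoint wrappers delegate here and can ignore pagination entirely."""
    return r.json(), getattr(r, "links", {})

def get_favourites(r):
    # A hypothetical endpoint wrapper that only wants the content.
    content, _links = decode_response(r)
    return content

# Stub response so the sketch is self-contained; a real one would come from
# whatever HTTP library you use (e.g. requests, where `.links` is the
# already-parsed Link header, keyed by rel).
class FakeResponse:
    links = {"next": {"url": "https://example.social/api/v1/favourites?max_id=100"}}
    def json(self):
        return [{"id": "1"}, {"id": "2"}]

content, links = decode_response(FakeResponse())
print(links["next"]["url"])  # https://example.social/api/v1/favourites?max_id=100
```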

@shadowfacts I've learned the hard way that DRY works, up to a point. You can prematurely abstract and end up coding yourself into loops. I don't see the point of semantically separating the point at which the request is made, but I do see a point in separating parsing and verification from request processing.

The actual request is just one line; optimizing it further would make it weirder in cases where we need to rely on different data. Also already does this, and that's one of the many reasons that its code is just really, really poor.

@ultranova that's fair. I guess it depends on the context. in Swift, actually making the request and decoding the JSON response is a good deal more than 1 line, so adding a little bit extra for pagination doesn't change much

@shadowfacts Oh yeah like, json decoding with Python is:

r = requests.get(blah)
response = r.json()
link_header = r.links  # .links is a property, not a method

The Link header parsing actually takes up two extra functions of about 4-6 lines each, I think
