@shadowfacts There are a bunch of different ways to do it, but the point is that if you're using JSON already, there's no reason not to package that data in with it. Putting actionable data in an HTTP header whose contents you then have to *parse* out, while also using JSON to transmit data, is terrible.
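To illustrate, a made-up response shape (not any real API) with the pagination packaged into the body:

data = {
    "content": [...],  # the actual items
    "links": {"next": "https://example.social/api/v1/bookmarks?max_id=123",
              "prev": "https://example.social/api/v1/bookmarks?since_id=456"},
}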
@shadowfacts Well, that's an icky way to do it -- easier to have a function to parse it and separate "getting" from "parsing+processing"
@shadowfacts Currently we have a function like get_bookmarks, which:
- GETs from the API
- Parses out the Link headers and dumps them into a dict with 'content' and 'links' entries. The caller can discard that information if they want, which is really how it should have been done in the first place -- roughly like the sketch below.
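Simplified sketch (the real thing has error handling, and the endpoint here is just the bookmarks one as an example):

import requests

def get_bookmarks(api_base, params=None):
    # GET from the API...
    r = requests.get(api_base + "/api/v1/bookmarks", params=params)
    r.raise_for_status()
    # ...and hand back body + pagination; r.links is requests' parsed
    # Link header, and the caller is free to ignore it
    return {"content": r.json(), "links": r.links}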
@shadowfacts I've learned the hard way that DRY works, up to a point. You can prematurely abstract and end up coding yourself into loops. I don't see the point of semantically separating out the point at which the request is made, but I do see the point of separating parsing and verification from request processing.
The actual request is just one line; abstracting it further would just make it weirder in the cases where we need to rely on different data. Also, Mastodon.py already does this, and that's one of the many reasons its code is just really, really poor.
@shadowfacts Because after realising "oh, we could put pagination parsing and request dispatch in the same function and make the API function call that", you realise that the thing making up most of your code space after that is the information construction -- making dicts from arguments.
So the next step is doing that automagically -- after all, why draw the line here? We want optimal code size! Well, Mastodon.py decided to (ab)use locals() to magically construct the arguments.
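The pattern is roughly this (my reconstruction with a made-up endpoint and names, not their literal code):

import requests

API_BASE = "https://example.social"

def get_timeline(max_id=None, since_id=None, limit=None):
    args = locals()  # every parameter name -> value, "for free"
    params = {k: v for k, v in args.items() if v is not None}
    return requests.get(API_BASE + "/api/v1/timelines/home", params=params).json()

The catch: the moment you need any other local before building params (a computed default, a converted date, whatever), it leaks into the request too, so you end up maintaining lists of names to strip back out.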
I'm sure this worked *at first*, but then they had to deal with some unexpected problems, and because of the layers of abstraction involved, the workarounds for those problems now make up most of the actual code space. When you look up how a request is handled, you can't see how it's handled: the layers of abstraction send you jumping to 50 different places, each one eating more mental space and more tab space in the browser.
@shadowfacts Because of all of this, the code is blisteringly obtuse to read and understand, and I've spotted multiple bugs in the Mastodon.py code that I just do not have the energy to chase down, verify, and file.
And what is really, really funny is that the lines taken up in each request function to preprocess the arguments so they mangle into locals(), and then post-process the unneeded locals back out, add up to more than it takes to just write a dict -- and they involve like 4 levels of indirection that you have to jump around to follow on a first read.
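For comparison, the boring version of the same hypothetical function from above:

def get_timeline(max_id=None, since_id=None, limit=None):
    # the dict the magic was trying to avoid writing
    params = {"max_id": max_id, "since_id": since_id, "limit": limit}
    return requests.get(API_BASE + "/api/v1/timelines/home", params=params).json()

Three lines, greppable, nothing leaks in by accident. And you don't even need to filter out the Nones -- next post.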
@shadowfacts A good chunk of the code in some places is actually unnecessary, too, because requests omits params whose value is None. So like 90% of the work it is doing in the body of most request functions not only relies on Python Magic(tm), but is also utterly unnecessary.
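You can check that without even touching the network:

from requests import Request

req = Request("GET", "https://example.social/api/v1/bookmarks",
              params={"max_id": None, "limit": 20}).prepare()
print(req.url)  # https://example.social/api/v1/bookmarks?limit=20 -- max_id is gone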
@shadowfacts I guess what I'm monologuing about is that there needs to be a line drawn around abstraction before it gets out of control. Where that line sits is very personal, but it needs to strike a balance between simplicity and comprehension-to-new-eyes. And the best place IMO to draw it is where the tasks become separate tasks?
Sorry for the rambling, it's almost 4am and my brain is all over the place lol
@shadowfacts Oh yeah like, json decoding with Python is:
import requests

r = requests.get(blah)
response = r.json()
link_header = r.links  # a property (dict of parsed links), not a method
The link header parsing actually takes up two extra functions of about 4 - 6 lines each I think
@shadowfacts Another aspect of it is that the pagination scheme carried in the Link header is unstandardized (the header format itself is RFC 8288, but what Mastodon puts in it isn't).
Yes, the JSON version also isn't standardized, but it's like 20x easier to access because you're *already* consuming JSON data. Using a Link header means the client (i.e. me) and the server both have to write a parser for it, and hope there are no unusual quirks across the hundreds of different Mastodon servers and clients out there, each with their own parser.
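Concretely (naive sketch; the JSON shape is made up, r is the response object from earlier):

# body version: one lookup on data you've already parsed
next_url = r.json().get("links", {}).get("next")

# header version: every client hand-rolls some variant of this
def parse_link_header(value):
    links = {}
    for part in value.split(","):
        target, _, params = part.partition(";")
        url = target.strip().strip("<>")
        for param in params.split(";"):
            key, _, val = param.strip().partition("=")
            if key == "rel":
                links[val.strip('"')] = url
    return links

next_url = parse_link_header(r.headers["Link"]).get("next")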