hot take: when you move from a linear timeline of viewer-selected content, or from keyword-to-popularity matching, to machine learning, you change from a platform or discovery service into a publisher, with all the responsibilities around publication decisions, & retraction, that entails

“ah but we couldn’t make publication decisions in every nation on earth in absence of our reliance on the algorithm without hiring hundreds of editors!”

[stares at the camera] is that so?

“surely our AI can’t be considered an editor; it is unable to consume broader context and thus make decisions about whether it’s reasonable to publish an item like a human would?”

hmm you know what yeah you’re absolutely right!

so, why is it your editor?

@ticky *mumbling* you could just, hire hundreds of editors, which is what most everyone who has actually been a publisher does,

@er1n what a novel concept! that’s sure to “disrupt” the publishing industry!

@ticky I can see the issue with scaling though. apparently 500 hours of video were uploaded per MINUTE in 2018. So to review every video would require 30,000 people to be watching videos at all times to keep up, and assuming three eight-hour shifts, that is 90,000 people. Now, sure, you can probably watch them at 1.5x speed and such, but people also need bathroom breaks and such. (the math is sketched below)

I WOULD like to see the numbers on what % of videos have more than, say, 100 views.
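For concreteness, here is a quick sanity check of that staffing arithmetic. The 500 hours/minute figure, the three-shift schedule, and the 1.5x playback adjustment all come from the post above; everything else is just multiplication.

```python
# Back-of-envelope staffing math for real-time human review of uploads.
# Assumptions (from the post): 500 hours of video uploaded per minute,
# review at real time, three eight-hour shifts covering each day.

UPLOAD_HOURS_PER_MINUTE = 500

# 500 hours of footage arrive every minute, so keeping up in real time
# needs 500 * 60 = 30,000 people watching at any given moment.
concurrent_reviewers = UPLOAD_HOURS_PER_MINUTE * 60

SHIFTS_PER_DAY = 3  # three eight-hour shifts
total_staff = concurrent_reviewers * SHIFTS_PER_DAY

# Watching at 1.5x speed shrinks the pool, before breaks and overhead.
staff_at_1_5x = total_staff / 1.5

print(f"concurrent reviewers: {concurrent_reviewers:,}")  # 30,000
print(f"staff across shifts:  {total_staff:,}")           # 90,000
print(f"at 1.5x playback:     {staff_at_1_5x:,.0f}")      # 60,000
```

Even the most generous adjustment (1.5x playback, no breaks) only brings the head count down to 60,000, which is why the thread turns to filtering next.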

@ticky I mean, apparently a fairly significant % of youtube videos have 0 views, so you could filter those out. Sure, it might be something terrible, but if literally no one ever watches it, well, the total harm is pretty limited.

I would love to see an actual discussion of how hard it would be to actually DO large-scale human moderation.

@Canageek I think part of the issue is that it almost certainly *is* impossible to scale human moderation to every internet user having carte blanche to upload, and especially without that job doing active harm to low-paid people. Federation is a plausible aid to that (more tight-knit groups with internal moderation) but obviously not one that solves for every problem, and definitely not the one that platform capitalism enjoys.

@ticky Right, so I'd like to see some rules about content moderation on open platforms and whatnot. The two camps seem to be 'It can't be done by humans, have to trust the algorithm' and 'humans are the only way' and it frustrates me.

Yes, we are going to need algorithms to help filter videos to moderators. Algorithm doesn't always mean machine learning. ("Any video longer than 30 seconds with more than 10K views, or more than 1K views/hour, gets reviewed by a human" is technically an algorithm.)
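A minimal sketch of that rule-based triage, to make the point concrete: no machine learning, just fixed thresholds routing videos to a human queue. The 30-second, 10K-view, and 1K-views/hour cutoffs come from the post; the `Video` fields are hypothetical names for illustration, and the reading that the 30-second condition gates both view thresholds is an assumption.

```python
# Rule-based triage, as described above: fixed thresholds, no ML.
# Field names here are hypothetical, chosen for the example.

from dataclasses import dataclass

@dataclass
class Video:
    duration_seconds: float
    total_views: int
    views_per_hour: float

def needs_human_review(v: Video) -> bool:
    """Flag any video longer than 30 seconds that has more than
    10K views, or is gaining more than 1K views/hour."""
    if v.duration_seconds <= 30:
        return False
    return v.total_views > 10_000 or v.views_per_hour > 1_000

# e.g. a 5-minute video with only 4K total views but spiking at
# 2,500 views/hour still gets queued for a human:
assert needs_human_review(Video(300, 4_000, 2_500))
```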

@ticky But we also need rules and likely some transparency about how youtube has mixed its moderation with business promotion interests, for example. I would like to see rules firewalling those two divisions, like newspapers (at least by reputation) had between sales and editorial back in the day.

@Canageek yeah and I think that’s also a case where we need legislation which can consider these platforms publishers. Self-governance is not reasonable for them so long as these platforms are advertising supported with profit driven by clicks.

@ticky That is a very good point, and when that legislation was written the idea of promoting works wasn't considered.

Having a two tier system where anything promoted by the algorithm or humans counts as published while stuff that is just in a gallery counts as safe harbour would be one method. Would be an issue for new people just getting started out though.

@Canageek yeah I think that two-tier would make sense. I believe that it would also be worthwhile to require that algorithmic recommendation engines can be turned off (permanently, ffs Twitter) at user preference.

@Canageek I mean I think part of it is we need less of the flow to centralised platforms. I don’t have the answers to all of it, but I strongly believe part of the issue is that what we’ve built (read: centralised platforms anyone can post to) is unsustainable if we want to make it safe.

@Canageek @ticky
"Problem scaling" means "no surplus rent"

You have slum lords because it's more profitable to rent a space for $2/sf monthly without any upkeep than it is to charge $5/sf and pay $4/sf in maintenance. These silos curated with deep learning and automated feedback are slums

We probably still need deep learning to protect human moderators from being traumatized repeatedly, but only in small communities with accountability and without profit motive

@Canageek @ticky
I also had the rest of the thread appear after posting this 😌

I'm not bike shedding about moderation and scalability, though. I did this for 10 years, 3 of which were for an android app with 50k downloads

@yaaps @ticky I mean 50k isn't crazy though. A town with 50k people is tiny; we are talking about tens of millions.

@Canageek @ticky
Yeah, 50k isn't a big number. It's a relevant number

Even factoring in horrible retention, how do instance admins feel about an instance with 50k registrations and 1 admin?

I had 700 monthly active users, ~10% conversion, and an average donation of $15. I also spent $85/month on hosting and about 120 hours/month doing moderation. (rough math below)

Fascists love role-playing Germany even if the game is not set in WW2. Want to buy that app and scale it up 10x? 😂
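Here is the rough monthly math those figures imply. The user, conversion, donation, hosting, and hours numbers all come from the post above; the implied hourly rate is just an assumption about what the surplus would buy if all of it went to paying for the moderation time.

```python
# Monthly economics implied by the post: 700 MAU, ~10% donating,
# $15 average donation, $85 hosting, 120 hours of moderation.

monthly_active_users = 700
conversion_rate = 0.10
avg_donation = 15.00
hosting_cost = 85.00
moderation_hours = 120

revenue = monthly_active_users * conversion_rate * avg_donation  # $1,050
surplus = revenue - hosting_cost                                 # $965
implied_hourly = surplus / moderation_hours                      # ~$8.04/hr

print(f"revenue:       ${revenue:,.2f}")
print(f"after hosting: ${surplus:,.2f}")
print(f"implied wage:  ${implied_hourly:.2f}/hour of moderation")
```

So even at a healthy 10% conversion, the whole surplus works out to roughly $8/hour of moderation work, which is the "no surplus rent" point from upthread in miniature.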
