#moderation

dredmorbius@joindiaspora.com

Addressing concerns with Blocklists

The use of blocklists by various groups and factions on Diaspora and similar distributed / decentralised social platforms is increasing. I'm an advocate of the practice.

There are also concerns raised. I have several myself.

I'd like this post to be a forum for discussing concerns, and only concerns, about blocklists, whether individual, informally shared, or at instance/pod or higher levels.

The purpose of this post is to identify problems, and propose solutions.

Problems would be issues of unintended consequences or abuses of blocklisting.

OFF TOPIC are any of the following:

  • Defences of blocklists. (This post presumes their legitimacy; a defence is not necessary.)
  • Individual appeals concerning profiles to be added/removed from lists.
  • Any particularly in-the-weeds discussions of viewpoints benefitted or harmed by blocklists. (General class/case mentions may be relevant; long or repeated mentions, discussions, complaints, etc., are not.)
  • Disruption, spamming, etc.

This discussion will be moderated with those aims in mind.

Many of those affected by this topic cannot comment here. This is a legitimate problem.

As a partial solution, I'll propose that re-shares of or references to this post by blocked profiles can be linked back here in the discussion. Please use the fully-qualified long link, including the hostname. There are a few profiles which seem to manage to straddle sufficient communities to be able to do this. I'll try to visit those links and re-share any substantive issues and solutions proposed.

This itself is one of the key problems with blocklists: Once blocked, a profile's access to even an appeals channel is severely limited.

Reshares are obviously encouraged.

#blocklists #BlocklistProblem #TheBlocklist #Ignorelist #Censorship #Moderation #FreeSpeech #ResponsibleSpeech #podmin

dredmorbius@joindiaspora.com

The Nazi-at-the-Bar problem

I was at a shitty crustpunk bar once getting an after-work beer. One of those shitholes where the bartenders clearly hate you. So the bartender and I were ignoring one another when someone sits next to me and he immediately says, "no. get out."

And the dude next to me says, "hey i'm not doing anything, i'm a paying customer." and the bartender reaches under the counter for a bat or something and says, "out. now." and the dude leaves, kind of yelling. And he was dressed in a punk uniform, I noticed

Anyway, I asked what that was about and the bartender was like, "you didn't see his vest but it was all nazi shit. Iron crosses and stuff. You get to recognize them."

And i was like, ohok and he continues.

"you have to nip it in the bud immediately. These guys come in and it's always a nice, polite one. And you serve them because you don't want to cause a scene. And then they become a regular and after awhile they bring a friend. And that dude is cool too.

And then THEY bring friends and the friends bring friends and they stop being cool and then you realize, oh shit, this is a Nazi bar now. And it's too late because they're entrenched and if you try to kick them out, they cause a PROBLEM. So you have to shut them down.

And i was like, 'oh damn.' and he said "yeah, you have to ignore their reasonable arguments because their end goal is to be terrible, awful people."

And then he went back to ignoring me. But I haven't forgotten that at all.

Emphasis added.

https://old.reddit.com/r/TalesFromYourServer/comments/hsiisw/kicking_a_nazi_out_as_soon_as_they_walk_in/

#Moderation #BlocklistMeta #TheBarProblem

dredmorbius@joindiaspora.com

Twitter’s decentralized future

The platform’s vision of a sweeping open standard could also be the far-right’s internet endgame

... Bluesky is aiming to build a “durable” web standard that will ultimately ensure that platforms like Twitter have less centralized responsibility in deciding which users and communities have a voice on the internet. While this could protect speech from marginalized groups, it may also upend modern moderation techniques and efforts to prevent online radicalization. ...

...

A widely adopted, decentralized protocol is an opportunity for social networks to “pass the buck” on moderation responsibilities to a broader network, one person involved with the early stages of bluesky suggests, allowing individual applications on the protocol to decide which accounts and networks its users are blocked from accessing.

Social platforms like Parler or Gab could theoretically rebuild their networks on bluesky, benefitting from its stability and the network effects of an open protocol. Researchers involved are also clear that such a system would also provide a meaningful measure against government censorship and protect the speech of marginalized groups across the globe.

https://techcrunch.com/2021/01/15/twitters-vision-of-decentralization-could-also-be-the-far-rights-internet-endgame/

#twitter #bluesky #decentralisation #moderation #censorship

dredmorbius@joindiaspora.com

Youtube "age-gating" more videos, youtube-dl affected

Addressing content concerns (hate, gore, sexual content, etc.), Youtube is age-restricting, or "age-gating", a larger set of videos. This also includes closing the loophole of viewing such videos from third-party sites, or, apparently, via download tools such as youtube-dl.

The policy change is discussed at The Verge, "YouTube is about to age-restrict way more videos":

YouTube is rolling out more artificial intelligence-powered technology to catch more videos that may require age restrictions, meaning more viewers will be asked to sign into their accounts to verify their age before watching.

Similar to how YouTube used machine learning techniques to try to better catch violent extremism and more of the platform’s most severe content beginning in 2017, and later to find videos that included hateful conduct, the same approach will be used in this case to automatically flag videos YouTube deems not age-appropriate. As a result, YouTube is expecting to see far more videos pop up with age-gated restrictions.

The company is preparing for there to be some mistakes in labeling, as is the case with any rollout of AI moderation tech. And as part of the changes, people watching YouTube videos embedded on third-party sites will be redirected to YouTube to sign in and verify their age. ...

https://www.theverge.com/2020/9/22/21449717/youtube-age-restriction-machine-learning-rollout-kids-content-monetization-creators

Youtube-dl is a command-line utility supporting media download and streaming from multiple websites, including its namesake, and it provides functionality used by numerous other tools (e.g., mpv). Several recent issues have been opened which apparently result from the age-gate change.

Issue search: https://github.com/ytdl-org/youtube-dl/issues?q=is%3Aissue+is%3Aopen+age-gate+youtube
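For those poking at this from the scripting side, youtube-dl can also be driven as a Python library rather than from the command line. A minimal sketch only; the video URL and cookie-file path are placeholders, not taken from the linked issues, and whether cookies get past the gate for any given video is not guaranteed. An age-gated video will typically fail with a sign-in error unless cookies from an age-verified account are supplied:

```python
# Minimal sketch: driving youtube-dl as a Python library instead of the CLI.
# The URL and cookies.txt path are placeholders; an age-gated video may fail
# with a "sign in to confirm your age" error unless cookies from a signed-in,
# age-verified account are supplied.
import youtube_dl

ydl_opts = {
    "cookiefile": "cookies.txt",  # exported browser cookies (placeholder path)
    "quiet": True,
}

url = "https://www.youtube.com/watch?v=EXAMPLE_ID"  # hypothetical age-gated video

with youtube_dl.YoutubeDL(ydl_opts) as ydl:
    try:
        info = ydl.extract_info(url, download=False)  # metadata only
        print(info.get("title"), "age_limit:", info.get("age_limit"))
    except youtube_dl.utils.DownloadError as err:
        print("Download failed (possibly age-gated):", err)
```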

#youtube #youtube-dl #AgeGating #mpv #video #moderation #google

dredmorbius@joindiaspora.com

Thoughts on conversation-generative online discussion platforms and features

I've become increasingly aware of how the conversation medium and its participants shape the "quality" of conversation.

Conversation scales poorly.

It's also fragile and very easily destroyed, discouraged, or dissuaded.

The biggest issue I find on Reddit itself is that there's no notion of "thread (or post) as conversation". And absolutely no support for same. Reddit is where interesting conversations go to die.

An item is posted. It's at top-of-page for ... a few minutes or hours, possibly days ... then vanishes. And no amount of activity within a thread will boost it, generally. Even those who'd participated in the discussion have no signal of any activity. The best that can happen is that members might subscribe to replies for two days. This is madness.

Put another way, Reddit's post-weighting algorithm is all but entirely determined by posting time, not activity recency. This avoids "necroposting", for both good and bad. For small niche discussion, all but entirely bad.
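To make that concrete: Reddit's classic "hot" ranking, from its formerly open-source codebase, weights the submission timestamp linearly and the vote score only logarithmically, so roughly half a day of age offsets a tenfold difference in score, and later comment activity doesn't figure in at all. A sketch (the live site's ranking may have since drifted from this):

```python
# Sketch of Reddit's classic "hot" ranking (from the formerly open-source reddit code).
# The score term is logarithmic, the submission-time term is linear, and comment
# activity after submission does not appear at all -- which is the point made above.
from datetime import datetime, timezone
from math import log10

def hot(ups: int, downs: int, submitted: datetime) -> float:
    score = ups - downs
    order = log10(max(abs(score), 1))
    sign = 1 if score > 0 else -1 if score < 0 else 0
    seconds = submitted.timestamp() - 1134028003  # offset from reddit's epoch (2005-12-08)
    return round(sign * order + seconds / 45000, 7)

# A post ~13 hours newer outranks one with ten times the score:
old = datetime(2021, 1, 1, 0, 0, tzinfo=timezone.utc)
new = datetime(2021, 1, 1, 13, 0, tzinfo=timezone.utc)
print(hot(1000, 0, old) < hot(100, 0, new))  # True
```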

Problem is that Reddit's scale spans about 6-8 orders of magnitude -- subreddits of < 10 members, to > 10,000,000. One-size-fits-all ... wears poorly. Most of the glaring problems are at large scale. The small subs get neglected. Clue flees.

The little-lamented Imzy had the problem of seeing Reddit's problems-at-scale, whilst utterly failing to grasp its own failures-at-inception --- no scale --- and failing to address those. Put another way, how you get to scale, by solving the problems of inception, teaches you nothing about how to survive at scale. The problems are entirely different.

As noted at HN, for all its copious faults, Google+ solved this particular problem well. Facebook may also (I don't use it). Microblogging platforms (Twitter, Mastodon, Fediverse) at least present individual posts within a thread well, though they seem to uniformly suck at actual threading (see: Threadreader). Diaspora ... kind of does this, but with an immensely clunky, slow interface for notifications and responses.

But yes, as McLuhan said, "the medium is the message". It has profound impacts and influences, most not immediately apparent -- they're emergent properties.

Independent of the medium: scale, expressive richness (e.g., markdown, multimedia), latency, arity, ephemerality / permanence, message size, moderation (leaf-node or trunk), culture, founding cohort, exogenous vs. endogenous motivators and incentives (or demotivators and disincentives), editability/revisability, search, organisation and management tools, protocols and standards, and much more all matter.

I've discussed some of this at the (rather neglected) discussion of social media types and characteristics at the Plexodus Wiki; see especially Platform Types and Features and Capabilities.


Adapted from a private Reddit discussion.

#media #conversations #generativity #MarshallMcLuhan #reddit #twitter #hackernews #mastodon #fediverse #diaspora #usenet #moderation #googleplus #gplus #plexodus #plexoduswiki

jollyorc@pluspora.com

I am trying to find reliable numbers on how much social media moderation work is needed if a network becomes at least sort-of-mainstream. If this article is to be believed, you can expect 10 to 20% of all submitted content to be in need of removal.

That seems insane at first glance, but I guess that spammers do deal in volume.

https://digitalsocialcontract.net/what-proportion-of-social-media-posts-get-moderated-and-why-db54bf8b2d4a?gi=cdd73670133d
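As a rough back-of-the-envelope on what that share implies for staffing, here's a sketch. Only the 10-20% removal share comes from the article; the daily post volume and per-item review time are invented for illustration:

```python
# Back-of-the-envelope moderator workload. Only the 10-20% removal share comes
# from the linked article; volume and per-item review time are invented here,
# and this counts only the items ultimately removed, not every report filed.
posts_per_day = 50_000          # hypothetical volume for a "sort-of-mainstream" network
removal_shares = (0.10, 0.20)   # share of submissions needing action, per the article
seconds_per_review = 30         # hypothetical average time to handle one item
work_hours_per_day = 8

for share in removal_shares:
    items = posts_per_day * share
    hours = items * seconds_per_review / 3600
    print(f"{share:.0%}: {items:,.0f} items/day, ~{hours:.0f} review-hours, "
          f"~{hours / work_hours_per_day:.1f} full-time moderators")
```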

#plexodus #blocking #moderation

bkoehn@diaspora.koehn.com

Attention all #newhere!

Thar be #trolls in #Diaspora!

Unlike other social networks like the soon-to-be-deceased #GooglePlus, Diaspora and other federated social networks don't have a central authority to perform #moderation. Mostly that means that there are a few lost souls whose behavior could be described as troll-like. You may find content you don't like, offensive and unwelcome comments on your posts, etc. This is an unpleasant side-effect of Diaspora's design: the platform doesn't need to bow to the rules and regulations of a single government, making it much harder to censor content. (Individual pods are subject to regulations of the government where they are hosted; but the network as a whole isn't.)

I've been trolled! What to do?

You have a few choices.

  • You can go to the user's profile page and Ignore them. Ignored users won't be able to follow you or see your posts, comment on your posts, or @mention you in their posts or comments, although they can still comment on posts of others you follow. Like an old flame, there's no way to avoid crossing paths again.

  • You can report a post or comment. This is actually a bit tricky to understand. Let's say your account is hosted on diaspora.x.com and the troll has an account hosted on diaspora.y.com. If you report the offending post or comment, the report goes to the administrator of diaspora.x.com (your own pod), who is given the option of removing the content from your pod. The content will still exist on other pods (remember the thing about no centralized moderation?).

  • You can reach out to the administrator of (in the above example) diaspora.y.com and make the case that the user should be banned. Realize that the troll may still open a new account somewhere else; several trolls have been known to do this.

This is totally unacceptable!

OK. But that's also the way the system works. You can feel free to make suggestions for improving the system, or better yet, write a fix, submit a pull request, and make it available to everyone. I've advocated for some kind of machine-learning algorithm to help manage unpleasant content, but I haven't sat down to build one yet.
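For the curious, even a very crude classifier along those lines fits in a few lines of scikit-learn. This is a sketch only: the training examples below are invented, and a real deployment would need labelled data at scale plus human review of anything it flags.

```python
# Minimal sketch of the kind of machine-learning flagging mentioned above: a
# bag-of-words classifier that scores new posts and surfaces likely-abusive ones
# for human review. The tiny training set is invented; real systems need far more.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

labelled_posts = [
    ("Welcome to the pod, glad to have you here!", 0),
    ("Here is a photo from my hike today.", 0),
    ("You are worthless, get off this network.", 1),
    ("Go back where you came from, nobody wants you here.", 1),
]
texts, labels = zip(*labelled_posts)

model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(texts, labels)

incoming = ["Lovely sunset on the coast tonight.", "Nobody wants you here, go away."]
for post, score in zip(incoming, model.predict_proba(incoming)[:, 1]):
    flag = "FLAG FOR REVIEW" if score > 0.5 else "ok"
    print(f"{score:.2f}  {flag}  {post}")
```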

Thanks for signing up, and I hope you can see past the trolls and make some new friends here!

(Please comment on corrections, opinions, etc.)

dredmorbius@joindiaspora.com

We need to talk about "We Need to Talk About TED"

The question: how can one distinguish quality information from entertainment?

There's a fundamental problem with democratic voting processes and voting systems (such as reddit's own post and moderation processes[2] -- which are, in their defense, better than most) in assessing who's qualified to make a judgement -- and then, of course, in determining who's qualified to assess who's qualified.

And there's the question of what to value: originality, relevance, insight, agreement, correctness, humor, entertainment? Slashdot's moderation system allowed for indicating which of these a particular comment fell under, though both the moderation and presentation failed to clearly differentiate and classify among them.

A long-simmering essay finally half-baked.

#TED #filters #noise #moderation #media #infotainment