Source: The Verge - All Posts
Article note: Hmm.
The "composed of loosely connected clusters of tightly connected individuals" thing is not surprising, basically all social systems look like that.
The "When you try to mass-deplatform a group, it leaves for somewhere it will be dominant" observation is not surprising, there are lots of visible cases.
The other thoughts are _weird_ though.
Non-uniformly banning (knocking off small groups and random problematic individuals, aiming to slowly break the network apart rather than trigger a mass migration) seems like a reasonable response to that information, but it is the opposite of the "consistent rules, applied consistently" approach that is required for a platform to be trustworthy for anyone.
Baiting fights between rival groups so they expend all their energy fighting each other is... sometimes appealing, and hilarious... but it is the opposite of the now-forgotten "don't feed the trolls" rule, which always seemed to keep things much less obnoxious than the present era of performative outrage.
How do you get rid of hate speech on social platforms? Until now, companies have generally tried two approaches. One is to ban individual users who are caught posting abuse; the other is to ban the large pages and groups where people who practice hate speech organize and promote their noxious views.
But what if this approach is counterproductive? That’s the argument in an intriguing new paper out today in Nature from Neil Johnson, a professor of physics at George Washington University, and researchers at GW and the University of Miami. The paper, “Hidden resilience and adaptive dynamics of the global online hate ecology,” explores how hate groups organize on Facebook and Russian social network VKontakte — and how they resurrect...