My (initial) response to “How to Solve the Racist Teens-on-Twitter Problem”
which was written by Fruzsina Eördögh on readwrite social
My answer to the article title “How to Solve the Racist Teens-on-Twitter Problem”:
End racism and you will stop the racist Tweets.
To me, this is exactly the direction an article titled “How to Solve the Racist Teens-on-Twitter Problem” should take, which is perhaps why I was so disappointed with Fruzsina Eördögh’s policy solutions in the article, which deal with Twitter as technology, but not with racism per se. I realize that my own expectations caused me to be disappointed, which made me think about what my reasons were for that disappointment in the first place.
Based on the article’s title, I thought the author might have ideas about ending racism: ideas for how we reach out to teenagers (and everyone else), perhaps using the Twitter interface itself as a place to make change.
This is not what the author meant. Instead, it turns out that Eördögh is suggesting that Twitter change its terms of service to be more like YouTube’s, and include racism under ‘offensive content.’ The author then suggests that Twitter also take on the responsibility of actively enforcing this policy 24/7 via user-flagged content. Unfortunately, this doesn’t consider the larger issues of systemic racism and discrimination within the built Internet.
“If Twitter were to adopt a flagging system like YouTube has with videos, it might be able to more-effectively communicate to teens that their hate speech is not acceptable on the platform.”
My first (policy) question: why is it the responsibility of Twitter or YouTube to communicate to teens that their hate speech is unacceptable on their platforms? Do racists currently think that they can post to Twitter but not to YouTube (hahaha)? Is this something that is known, and that will change if Twitter changes its policy?
My second and third questions deal more with racism itself: How is removing these teens’ Tweets going to educate them about racism? How is it going to change how racism is perpetuated through multiple systems, both online and off?
The author then discusses YouTube’s commenting system, which doesn’t seem to fall under these same guidelines:
“YouTube’s comment system – long stereotyped as being a cesspool of the Internet – does not offer options related to hate speech, but does allow users to downvote comments or report comments for spam. Once an offensive comment has received enough down-votes, it becomes hidden in the comments section: again, a form of community policing.”
The author appears to be saying that if Twitter just had the same policies as YouTube, there would be no racism on Twitter, or at least less of it, because it would work the same way it does on YouTube: the community takes care of racist comments by ‘down-voting’ to hide them from view, or, if it’s a video, by flagging it as racist and having the company’s employees remove the offensive content.
When I said this (perhaps harshly) on Twitter:
“Instead of working towards eradicating racism, @FruzsE decides hiding it from public view is the solution”
the author tweeted back at me:
“lol learn how to read. No one says YouTube is hiding racists.”
But I think that’s exactly what the system does. The very mechanism that lets people vote comments up and down, raising and lowering them in and out of public view, is set up to do exactly that. According to Eördögh’s own description, site users, by ‘down-voting’ racist comments, do hide racists deep down in the comments section, and therefore (mostly) from public view. Anyone who has spent time reading YouTube comments can safely say that racism is not gone simply because other users have ‘down-voted’ racist comments. The policy mechanism of users flagging a video and having a YouTube administrator decide whether or not to remove it is also a form of hiding what people are saying.
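To make concrete what I mean, here is a minimal sketch of that kind of threshold-based hiding (the threshold value and field names are hypothetical inventions of mine, not YouTube’s actual implementation):

from dataclasses import dataclass

HIDE_THRESHOLD = 5  # hypothetical: net down-votes before a comment collapses

@dataclass
class Comment:
    text: str
    up_votes: int = 0
    down_votes: int = 0

    @property
    def hidden(self) -> bool:
        # The comment is never deleted; it is only collapsed out of the
        # default view once enough users have down-voted it.
        return (self.down_votes - self.up_votes) >= HIDE_THRESHOLD

comment = Comment("some racist remark")
comment.down_votes = 7
print(comment.hidden)  # True: out of sight, but the racism is still there

Nothing in that logic engages with the content itself; it only decides who gets to see it.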
So, instead of working towards eradicating racism, the focus of Eördögh’s article is actually on how to use the terms of service and the technology itself to remove hate speech from our view, which is a different story than ‘solving’ for racists on Twitter.
Which brings me to my point:
Focusing on terms of service instead of the actual problem (in this case, teenagers who are racist) puts the onus on the people running the service to become arbiters of acceptability, while at the same time relying on the people being attacked (or their allies) to defend themselves. Standardized policies and emailed explanations about removed content are probably not the best way to educate people about racism; instead, they teach racists what is acceptable to say in public. These are very different things.
Requiring companies like Twitter or YouTube to police their users does nothing to change racism itself. It doesn’t even ‘Solve the Racist Teens-on-Twitter Problem,’ since the tweets would still have to be posted first, then flagged by another user, and only then removed by Twitter administrators.
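A minimal sketch of that pipeline (the state and method names here are hypothetical, for illustration only) makes the order of events plain:

from enum import Enum, auto

class Status(Enum):
    VISIBLE = auto()   # step 1: posted and publicly readable immediately
    FLAGGED = auto()   # step 2: reported by another user, but still visible
    REMOVED = auto()   # step 3: finally taken down by an administrator

class Tweet:
    def __init__(self, text: str):
        self.text = text
        self.status = Status.VISIBLE  # the tweet goes up before anything else

    def flag(self) -> None:
        # A reader, often the person being attacked, has to report it.
        if self.status is Status.VISIBLE:
            self.status = Status.FLAGGED

    def review(self, remove: bool) -> None:
        # An administrator decides, after the tweet has already been seen.
        if self.status is Status.FLAGGED and remove:
            self.status = Status.REMOVED

At every step the tweet has already been seen; removal comes last, if it comes at all.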
This solution is simply a technological bandage that doesn’t even begin to entertain the ways in which the proposed system itself perpetuates the silencing of discussion around race and racism. We can’t just focus on the currently most visible examples of individual racism and think that silencing them is the solution. Instead, we must look to the systems themselves (such as the built Internet, education, the prison-industrial complex, and housing) that create and allow for racism, to understand how they work together to institutionalize racism in invisible ways. Only by rendering racism visible, and by being willing to talk about it and to learn, no matter how difficult that is, will we be able to make lasting, actual change.
This is cross-posted at HASTAC.