Daily Bulletin

The Conversation

  • Written by The Conversation
Image: It isn't enough just to not feed the trolls – something has to quieten them down too. (Gil, CC BY)

Those suffering abuse, threats, or generally unpleasant behaviour on Twitter – of which there is much – may welcome Zero Trollerance, an initiative that aims to tackle trolls by bombarding them with tweets containing helpful life tips and advice on how to be less angry and aggressive.

A bot scans Twitter for accounts regularly spewing sexist, racist, or otherwise offensive tweets and floods them with tweets offering a “six-step” plan for shedding their anger and aggression, including links to self-help videos.
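The mechanic can be sketched very roughly as a two-step loop: flag accounts whose recent tweets repeatedly match an offensive word list, then queue the six-step plan at them. A minimal sketch follows – every name, the word list, and the scoring rule here are invented for illustration; Peng's actual bot and the Twitter API calls it would need are not shown.

```python
# Rough sketch of the Zero Trollerance mechanic. All names, the word
# list and the flagging rule are invented for illustration only.
SELF_HELP_TIPS = [
    "Step 1: Breathe. The timeline will still be there in a minute.",
    "Step 2: Ask yourself what you actually want this tweet to achieve.",
]

OFFENSIVE_TERMS = {"idiot", "moron"}  # stand-in word list

def looks_trollish(recent_tweets, threshold=0.5):
    """Flag an account if more than half of its recent tweets contain
    an offensive term -- a crude stand-in for the detection step."""
    if not recent_tweets:
        return False
    hits = sum(any(term in t.lower() for term in OFFENSIVE_TERMS)
               for t in recent_tweets)
    return hits / len(recent_tweets) > threshold

def reply_queue(account, recent_tweets):
    """Queue the six-step plan (abridged) at a flagged account."""
    if not looks_trollish(recent_tweets):
        return []
    return [f"@{account} {tip}" for tip in SELF_HELP_TIPS]
```

Even this toy version shows where the hard problems live: everything hinges on the word list and the threshold, which is exactly the difficulty the rest of the article explores.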

The project’s creators, Berlin-based collective Peng, declare that:

The gendered forms of harassment and violence on Twitter today point to a deeper problem in society that cannot be solved by technical solutions alone. Trolls need serious, practical help to overcome their sexism, deal with their anger issues and change their behaviour.

Zero Trollerance is tongue-in-cheek, but the problem of abuse online isn’t amusing. It’s no secret that Twitter is struggling – Twitter CEO Dick Costolo admitted as much when he said he was “frankly ashamed of how poorly we’ve dealt with this issue”. Indeed, while the site does perform some degree of moderation, generally the platform has embraced – perhaps a little too tightly – a prime directive-style approach of non-intervention. The hope is that worthy and interesting comments drown out any nasty, trollish voices lurking beneath. In reality it takes very few drops of poison before the rivers of writing turn red.

Twitter’s steps in the right direction

Just last week, Twitter announced a series of measures to prevent abuse and more easily ban serial offenders. This widens the threat policy to include indirect threats and promotion of violence. It also introduces account locking procedures and temporary bans against those attacking individuals at a particular time – for example an internet “pile-on” where many strangers harass particular users. Another measure is to automatically identify accounts and tweets thought to be abusive and prevent them from propagating through Twitter, limiting their reach and the harm they cause.

This follows hot on the heels of Twitter’s quality filter, which aims to:

… remove all Tweets from your notifications timeline that contain threats, offensive or abusive language, duplicate content, or are sent from suspicious accounts.

While these measures seem sound, intended to bend the arc of the Twitterverse towards boosting signal and filtering out noise, the reality is that what they achieve may be quite different from their aims.

Image: What’s happening? Chances are some women are being subjected to torrents of abuse. (ra2studio/shutterstock.com)

Threats

Most will have heard of Twitter rape threats, terrorism threats, murder threats, bomb threats, even revenge porn threats, and the various responses to them. However, of the many things I’ve discovered throughout my research into Twitter abuse, one is that identifying threats (or even trolls) is far more complex than it seems.

Just because someone may tweet a threat doesn’t automatically make it credible. They may lack the means or the inclination to carry it out – your friend tweeting that she’s “going to kill you” for breaking her laptop isn’t really going to.

On the other hand, language does not need to be explicit to be menacing. Imagine that an anonymous account tweets you every day with a description of the clothes your child is wearing: there is no overt threat in such tweets, but it is obviously sinister. Even the words “we need to talk” are enough to fill many of us with dread.

The issue here is that sometimes we say what we don’t mean, and sometimes we mean what we don’t say. Language is subtle and complex enough to imply things that even a child can understand, yet which remain invisible to the most powerful computers and software on earth.

Offensive or abusive language

Filtering for offence is also not simple. We could choose to screen out racist, homophobic, and misogynistic terms, but in doing so, we are already imposing our moral judgements on others. Just like art, comedy, and beauty, offensiveness is largely in the eye of the beholder.

Some will consider “bloody hell” unacceptable, whereas others will be content to run the full gamut from mild cursing to the most offensive words in the English language. Who decides which words fall above and below the line?

To complicate matters further, expletives are used much more widely and in far more diverse ways than just to convey insult. Filtering out the humble Anglo-Saxon “fuck” would certainly remove tweets where it is used to insult, but it would also remove those instances where it signals emphasis, humour, closeness, frustration, joy, and far more besides. In other words, profanity filters may sieve out some of the dregs, but measures like these can take some of the sparkle out of the conversational champagne too.
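The limitation is easy to demonstrate with a minimal sketch of such a keyword filter – the word list and the example tweets below are invented, and real filters are more sophisticated, but the failure mode is the same:

```python
import string

# A naive profanity filter: drop any tweet containing a banned word.
# Word list and example tweets are invented for illustration.
BANNED = {"fuck"}

def keyword_filter(tweets):
    """Keep only tweets with no banned word (case-insensitive,
    ignoring surrounding punctuation)."""
    return [t for t in tweets
            if not any(w.strip(string.punctuation) in BANNED
                       for w in t.lower().split())]

tweets = [
    "fuck you and everyone like you",           # insult: correctly removed
    "fuck yes, we won the cup!",                # joy: wrongly removed
    "I can't believe it... fuck, what a game",  # emphasis: wrongly removed
    "congratulations on the new job!",          # benign: kept
]

filtered = keyword_filter(tweets)
# Only the final tweet survives: the filter cannot tell insult
# from emphasis, humour or joy.
```

The filter removes the genuine insult, but it throws out the celebratory and emphatic uses with it – the champagne goes out with the dregs.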

Image: Not everyone’s experience of social media follows the same pattern. (antoniomas/shutterstock.com)

The many need protection, not just the few

The quality filter is currently available only to verified users (generally celebrities or those with many followers) using Apple devices – a step that clearly excludes the vast majority of Twitter users. And in any case, verified users are not blameless when it comes to abuse. Even the affable Stephen Fry once unwittingly triggered a dogpile from his legions of followers when he responded to a critic. In other cases, famous Twitter users appear to have deployed their followers as pitchfork-wielding mobs.

Somewhat ironically, the greatest protection from abuse is currently offered only to those Twitter users who typically already have considerable power through their many followers – and often the means to pursue legal action. This is not to say that they are not prime targets of abuse too, but simply that no user should be subjected to abuse on a scale they are not equipped to deal with. Users should not feel compelled to befriend, beseech or threaten their attackers.

One final aspect that the quality filter cannot address is that by its very design, the Twitterverse is an oddly blinkered echo-chamber. Unlike some other social network formats, it’s possible for ten thousand users to each reply to a tweet without ever being aware that anyone else has responded. The result is that any one of us could, for one unwitting moment, end up part of a pitchfork-wielding mob that is burying someone else alive under an avalanche of online wrath.

Claire Hardaker receives funding from the ESRC, grant ref: ES/L008874/1, title: "Twitter rape threats and the discourse of online misogyny". Views in the article are the author's and not those of the Research Councils.

Authors: The Conversation

Read more http://theconversation.com/twitter-expands-its-anti-abuse-arsenal-but-has-a-long-way-to-go-to-silence-the-trolls-40249
