Twitter reports fall in extreme content

Twitter says it saw a decrease in the number of accounts removed for content violations and fewer attempts to manipulate the platform by spammers and bots.


Twitter reported falls in terrorism and child abuse content in its latest transparency report. (AAP)

The number of Twitter accounts removed for violations related to terrorism and child sexual exploitation fell in the second half of 2018, according to the social media platform's latest transparency report.

Twitter said it also saw a decrease in the amount of attempted platform manipulation carried out by spam and bot accounts.

The report, released by the site every six months, said that 166,513 accounts were removed for terrorism content, down 19 per cent on the previous six months.

Social media companies have been repeatedly criticised for failing to act quickly enough to remove dangerous content and accounts from their platforms.

The UK government recently published a white paper on online harms, calling for a statutory duty of care for internet firms, to be enforced by a new independent regulator.

Twitter said 91 per cent of the accounts removed for terrorism content were found by its internal technology tools, adding that it was seeing a steady decrease in terrorist organisations using the platform.

"This is due to zero tolerance policy enforcement that has allowed us to take swift action on ban evaders and other identified forms of behaviour used by terrorist entities and their affiliates," Twitter's legal, policy and trust and safety head Vijaya Gadde said.

"In the majority of cases, we take action at the account set-up stage, before the account even tweets.

"We are encouraged by these metrics but will remain vigilant.

"Our goal is to stay one step ahead of emergent behaviours and new attempts to circumvent our robust approach."

The number of accounts suspended for violations related to child sexual exploitation was 456,989, down six per cent on the previous report.

The social media site said it had challenged more than 194 million accounts for "spammy behaviour and platform manipulation" in the second half of 2018. Around 75 per cent of those accounts were subsequently removed after failing the challenge process, which requires proving an account is being run legitimately.

The company said it saw roughly the same number of government requests for account information as the previous report, with the UK submitting the third largest number of requests: 881.

The UK also topped the list for emergency disclosure requests for the first time; these are submitted when the person linked to an account is believed to be in danger of death or serious injury.

Published 10 May 2019 at 9:56am
Source: AAP