Facebook, Google and Twitter are stepping up efforts to combat online propaganda and recruiting by Islamic militants.
But the internet companies are doing it quietly to avoid the perception they are helping the authorities police the web.
On Friday, Facebook said it took down a profile the company believed belonged to San Bernardino shooter Tashfeen Malik, who, along with her husband, is accused of killing 14 people in a mass shooting that the FBI is investigating as an "act of terrorism".
Just a day earlier, the French prime minister and European Commission officials met separately with Facebook, Google, Twitter and other companies to demand faster action on what the commission called "online terrorism incitement and hate speech".
The internet companies described their policies as straightforward: they ban certain types of content in accordance with their own terms of service, and require court orders to remove or block anything beyond that. Anyone can report, or flag, content for review and possible removal.
But the truth is far more subtle and complicated. According to former employees, Facebook, Google and Twitter all worry that if they are public about their true level of cooperation with Western law enforcement agencies, they will face endless demands for similar action from countries around the world.
They also fret about being perceived by consumers as being tools of the government. Worse, if the companies spell out exactly how their screening works, they run the risk that technologically savvy militants will learn more about how to beat their systems.
Meanwhile, some well-organised online activists have had success getting social media sites to remove content.
A French-speaking activist using the Twitter alias NageAnon said he helped get rid of thousands of YouTube videos by spreading links of clear cases of policy violations and enlisting other volunteers to report them.
"The more it gets reported, the more it will get reviewed quickly and treated as an urgent case," he said in a Twitter message to Reuters.
What law enforcement, politicians and some activists would really like is for internet companies to stop banned content from being shared in the first place.
There have been some formal policy changes. Twitter revised its abuse policy to ban indirect threats of violence, in addition to direct threats, and has dramatically improved its speed for handling abuse requests, a spokesman said.
"Across the board we respond to requests more quickly, and it's safe to say government requests are in that bunch," the spokesman said.
