
Facebook wants to be 'hostile environment for terrorists' as May calls for internet regulations

Key Points
  • Facebook said in a statement it wants to be "a hostile environment for terrorists."
  • The company's director of policy said Facebook works "aggressively to remove terrorist content from our platform."
  • Some critics say the company is not doing enough because it has an incentive to make content as shareable as possible.
Mark Zuckerberg, CEO of Facebook.
Source: Facebook

Facebook condemned Saturday's deadly London Bridge attacks while pledging to work "aggressively to remove terrorist content" from its platform, as British Prime Minister Theresa May raised the specter of imposing new regulations to restrict the dissemination of extremist content.

On Sunday, May said Britain must work with allied democratic governments to tighten internet regulation and deny terrorists a tool for planning attacks and spreading extremism.

"We cannot allow this ideology the safe space it needs to breed, yet that is precisely what the internet and the big companies that provide internet-based services provide," she said in a statement outside Downing Street.

May's comments, which added new fuel to the debate over how to balance free speech and security in an age of terrorism, were amplified by Facebook's own response to the London attack.

In a statement, Simon Milner, director of policy at Facebook, said the social network giant wants to "provide a service where people feel safe. That means we do not allow groups or people that engage in terrorist activity, or posts that express support for terrorism. We want Facebook to be a hostile environment for terrorists."

Facebook has faced criticism following a recent string of violent acts broadcast on its social network. Some accuse the company of failing to tackle terrorist recruitment and hate propaganda, as well as the spread of fake news.

Last month, Facebook CEO Mark Zuckerberg announced he would hire another 3,000 employees to scrub harmful content from the network.

"Using a combination of technology and human review, we work aggressively to remove terrorist content from our platform as soon as we become aware of it," Facebook's Milner said.

"We have long collaborated with policymakers civil society, and others in the tech industry, and we are committed to continuing this important work."

Britain's Prime Minister Theresa May
Getty Images

'Really frustrating'

However, social networks make it far easier to "like" content than to report it, because they have a financial incentive to make content as shareable as possible, Hany Farid, chair of the computer science department at Dartmouth, told the radio program On the Media last month.

Farid helped Microsoft develop a technology called PhotoDNA that helps combat child pornography on the internet. He told On the Media that Facebook rebuffed him when he offered the company a similar technology, called eGLYPH, which detects known terrorist content online.
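Both PhotoDNA and eGLYPH are generally described as "robust hashing" systems: they reduce an image or video to a compact fingerprint that survives resizing, re-encoding, and minor edits, then compare that fingerprint against a database of fingerprints of known abusive material. As a rough illustration only, the sketch below uses a simple average hash in Python; the actual PhotoDNA and eGLYPH algorithms are proprietary, and the function names, the matching threshold, and the KNOWN_HASHES value here are all placeholders.

    # Sketch of hash-based content matching, the general approach behind
    # systems like PhotoDNA and eGLYPH. A simple "average hash" stands in
    # for the proprietary algorithms those systems actually use.
    from PIL import Image  # pip install Pillow

    def average_hash(path: str, size: int = 8) -> int:
        """Downscale to a size-by-size grayscale thumbnail, then set one
        bit per pixel depending on whether it is brighter than the mean."""
        img = Image.open(path).convert("L").resize((size, size))
        pixels = list(img.getdata())
        mean = sum(pixels) / len(pixels)
        bits = 0
        for p in pixels:
            bits = (bits << 1) | (1 if p > mean else 0)
        return bits

    def hamming_distance(a: int, b: int) -> int:
        # Number of differing bits between two fingerprints.
        return bin(a ^ b).count("1")

    # Hypothetical blocklist of fingerprints of known extremist images.
    KNOWN_HASHES = {0x8F3C0000FF003C18}  # placeholder value

    def is_flagged(path: str, threshold: int = 5) -> bool:
        """Flag an upload whose fingerprint is within `threshold` bits of
        a known one, so slightly edited copies still match."""
        h = average_hash(path)
        return any(hamming_distance(h, k) <= threshold for k in KNOWN_HASHES)

Because matching uses bit distance rather than exact equality, a resized or lightly cropped copy of a known image can still be caught, which is the property such systems need to work at platform scale.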

"I have to say it's really frustrating because every time we see horrific things on Facebook or on YouTube or on Twitter we get the standard press release from the companies saying, 'We take online safety very seriously. There is no room on our networks for this type of material,'" said Farid, who also serves as a senior adviser to the nonprofit Counter Extremism Project.

"And yet the companies continue to drag their feet. They continue to ignore technology that could be used and doesn't affect their business model in any significant way."

Facebook did not immediately respond to a request for comment.