Tech

Facebook and Google can't keep extremists off their sites, and Congress will be grilling them next week

Key Points
  • Terrorists are still actively recruiting on Facebook and Google.
  • Extremist pages on their sites were online for days or weeks before we alerted the companies to them.
  • Executives from Facebook, YouTube and Twitter are scheduled to testify in front of the Senate next week.

Even as executives from Google and Facebook prepare to testify in front of the Senate on how they're combating extremist content, the internet giants are struggling to keep it off their sites.

Dozens of accounts on sites owned by those two companies have been used this week to promote violent attacks and recruit people to the cause of Islamic terrorism, a CNBC investigation has found.

All of the content that CNBC brought to the attention of Google and Facebook was removed within 24 hours of notification. Yet many of the posts and videos, which contained graphic images and threats of violence, had been online for days or weeks before we alerted the companies to them.

The persistence of this material on their pages underscores the enormous challenge these internet firms face in controlling content while remaining open platforms.

"Terrorists are using Google and Facebook technology to run what are essentially sophisticated social media marketing campaigns," said Eric Feinberg, co-founder of the Global Intellectual Property Enforcement Center, or GIPEC, which tracks extremist content online.

Extremist groups are using the companies' tools the way brand advertisers and other online marketers do, cross-promoting videos on one account with posts on other social media services.

Facebook, YouTube and Twitter are sending representatives to Washington, D.C., on Wednesday morning to testify in front of the Senate Commerce Committee in a hearing titled "Terrorism and Social Media: #IsBigTechDoingEnough?"

CNBC initially discovered some of the violent videos and posts while reporting an earlier story on Facebook users who had been locked out of their accounts by hackers.

After that story was published, CNBC reported its findings to counter-terrorism officials at the U.S. Attorney's Office in San Francisco, because some of the content appeared to include coded messages about potential attacks over the Christmas holiday.

The office acknowledged receipt of our e-mail and said it couldn't comment further.

We then contacted GIPEC, a cyber-intelligence firm whose patented software finds social media activity produced by criminals and terrorists, and asked Feinberg if the group could locate more violent and extremist content.

"There's plenty out there if you know how to look for it," said Feinberg, who previously ran an online marketing and ad-tech firm based in New York. "These companies are playing whack-a-mole" in their fight against extremism, he said.

How to make a bomb out of a 7-Up bottle

Many of the Facebook accounts used to promote terrorist-related content appeared to have been taken over by hackers, similar to the accounts described in our earlier story.

For example, they showed images of war-ravaged cities in the Middle East, or had flags from countries in the region, even though the profile page indicated the user was from a faraway place like Mexico or Brazil.

The pages also contained recent posts in Arabic, while earlier posts on the same profiles had been exclusively in English or Spanish.

One page provided instructions for turning an empty soda bottle into an improvised explosive device (IED) like those used to kill and maim U.S. soldiers during conflicts in Iraq and Afghanistan.

According to Google's online translation service, the Arabic text reads:

"An empty plastic box containing 15 yeast bags + 100 small size sharp nails. When the yeast is brewed after exposure to the sun, it will explode and the nails will spread splinters on the infidels. In the parks of the worshipers of the Cross."

Using Facebook's online reporting system, Feinberg notified the company of the page on Jan. 10, and it was soon removed.

Another page, which hasn't yet been reported to Facebook, contains Islamist propaganda, including texts of speeches by Abu Musab al-Zarqawi, the former leader of al-Qaeda in Iraq who was killed by a U.S. airstrike in 2006. It has been up since at least Dec. 26.

Of the six profiles CNBC reported to Facebook on Wednesday afternoon, all were removed within a day.

In response to a request for comment on why the pages hadn't been removed earlier, a Facebook spokesperson referred us to a November blog post titled "Are we winning the war on terrorism online," by Monika Bickert, the company's global head of policy management.

"99% of the ISIS and Al Qaeda-related terror content we remove from Facebook is content we detect before anyone in our community has flagged it to us, and in some cases, before it goes live on the site," the post said. "Once we are aware of a piece of terror content, we remove 83% of subsequently uploaded copies within one hour of upload."

Bickert is scheduled to appear before the Senate on Wednesday, alongside Juniper Downs, YouTube's global head of public policy and government relations, and Carlos Monje, Twitter's director of public policy and philanthropy.

Promoting videos of an assassinated al Qaeda leader

GIPEC found similar material on YouTube and Google Plus.

One post, which violated Google's terms of service, pointed to videos made by Anwar al-Awlaki, an American of Yemeni descent who allegedly planned terrorist attacks in Saudi Arabia before being killed by a drone strike in 2011.

A YouTube spokesperson told CNBC that the company updated its rules last year to ban all content either promoted by or relating to individuals, including al-Awlaki, known to be members of organizations on the U.S. Department of State's list of foreign terrorist organizations.

YouTube published a blog post in December saying that "98 percent of the videos we remove for violent extremism are flagged by our machine-learning algorithms" and that 70 percent of those videos are removed within eight hours of upload.

YouTube removed more than 150,000 videos for "violent extremism" between June and December of last year. The company sent CNBC the following statement:

"In June of last year we announced steps we are taking to combat violent extremism on YouTube, including better detection and faster removal of content, more expert partners to help identify content, and tougher standards. We've made progress with these efforts, with machine learning technology flagging content to help our reviewers remove nearly five times as many videos as they previously could. We're continuing to invest heavily in people, technology and strict policies to remove this content quickly."

While all six accounts we reported to Google were removed within 24 hours, other pages on the site still include terrorist propaganda.