
Social media platforms vowed to rein in extremism. Buffalo puts them to the test

By Clare Duffy and Donie O’Sullivan, CNN Business

In the wake of Saturday’s mass shooting in Buffalo, New York, Big Tech platforms scrambled to stop the spread of a video of the attack filmed by the suspect, as well as a document, allegedly also produced by him, in which he outlines his beliefs.

Major social media platforms have tried to improve how they respond to the sharing of this kind of content since the mass shooting in Christchurch, New Zealand, in 2019, which was streamed live on Facebook. In the 24 hours after that attack, Facebook said it removed 1.5 million copies of the video. Experts in online extremism say such content can act as far-right terrorist propaganda and inspire others to carry out similar attacks; the Buffalo shooter was directly influenced by the Christchurch attack, according to the document he allegedly shared.

The stakes for addressing the spread of such content quickly are significant. “This fits into a model that we’ve seen over and over and over again,” said Ben Decker, CEO of digital investigations consultancy Memetica and an expert on online radicalization and extremism. “At this point we know that the consumption of these videos creates copycat mass shootings.”

Still, social media companies face challenges in responding to the deluge of copies of the Buffalo shooting video and document that users appear to be posting.

The response by Big Tech

Saturday’s attack was streamed live on Twitch, a video streaming service owned by Amazon that is particularly popular with gamers. Twitch said it removed the video two minutes after the violence started, before it could be widely viewed but not before it was downloaded by other users. The video has since been shared hundreds of thousands of times across major social media platforms and also posted to more obscure video hosting sites.

Spokespeople for Facebook, Twitter, YouTube and Reddit all told CNN that they have banned sharing the video on their sites and are working to identify and remove copies of it. (TikTok did not respond to requests for comment on its response.) But the companies appear to be struggling to contain the spread and manage users looking for loopholes in their content moderation practices.

CNN observed a link to a copy of the video circulating on Facebook on Sunday night. Facebook included a warning that the link violated its community standards but still allowed users to click through and watch the video. Facebook parent company Meta said it had removed the link after CNN asked about it.

Meta on Saturday designated the event as a “terrorist attack,” which triggered the company’s internal teams to identify and remove the account of the suspect, as well as to begin removing copies of the video and document and links to them on other sites, according to a company spokesperson. The company added the video and document to an internal database that helps automatically detect and remove copies if they are reuploaded. Meta has also banned content that praises or supports the attacker, the spokesperson said.

The video was also hosted on a lesser-known video service called Streamable and was removed only after it had reportedly been viewed more than 3 million times and its link shared across Facebook and Twitter, according to The New York Times.

A spokesperson for Streamable told CNN the company was “working diligently” to remove copies of the video “expeditiously.” The spokesperson did not respond when asked how one video had reached millions of views before it was removed.

Copies of the document allegedly written by the shooter were uploaded to Google Drive and other, smaller online storage sites and shared over the weekend via links to those platforms. Google did not respond to requests for comment about the use of Drive to spread the document.

Challenges for addressing extremist content

In some cases, the big platforms appeared to struggle with common moderation pitfalls, such as removing English-language uploads of the video faster than those in other languages, according to Tim Squirrell, communications head at the Institute for Strategic Dialogue, a think tank dedicated to addressing extremism.

But the mainstream Big Tech platforms also must grapple with the fact that not all internet platforms want to take action against such content.

In 2017, Facebook, Microsoft, YouTube and Twitter founded the Global Internet Forum to Counter Terrorism, an organization designed to promote collaboration in preventing terrorists and violent extremists from exploiting their platforms; it has since grown to include more than a dozen companies. Following the Christchurch attack in 2019, the group committed to preventing the livestreaming of attacks on their platforms and to coordinating to address violent and extremist content.

“Now, technically, that failed. It was on Twitch. It then started getting posted around in the initial 24 hours,” Decker said, adding that the platforms have more work to do in effectively coordinating to remove harmful content during crisis situations. Still, the work done by the major platforms since Christchurch meant that their response to Saturday’s attack was faster and more robust than the reaction three years ago.

But elsewhere on the internet, smaller sites such as 4chan and messaging platform Telegram provided a place where users could congregate and coordinate to repeatedly re-upload the video and document, according to Squirrell. (For its part, Telegram says it “expressly prohibits” violence and is working to remove footage of the Buffalo shooting.)

“Many of the threads on 4chan’s message board were just people demanding the stream over and over again, and once they got a seven-minute version, just re-posting it over and over again” to bigger platforms, Squirrell said. As with other content on the internet, videos like the one of Saturday’s shooting are also often quickly manipulated by online extremist communities and incorporated into memes and other content that can be harder for mainstream platforms to identify and remove.

Like Facebook, YouTube and Twitter, platforms like 4chan rely on user-generated content and are protected (at least in the United States) from liability over much of what users post by a law known as Section 230. But whereas the mainstream Big Tech platforms are incentivized by advertisers, social pressures and users to address harmful content, the smaller, more fringe platforms are not motivated by a desire to protect ad revenue or attract a broad base of users. In some cases, they aim to be online homes for speech that would be moderated elsewhere.

“The consequence of that is that you can never complete the game of whack-a-mole,” Squirrell said. “There’s always going to be somewhere, someone circulating a Google Drive link or a Samsung cloud link or something else that allows people to access this … Once it’s out in the ether, it’s impossible to take everything down.”

The-CNN-Wire
™ & © 2022 Cable News Network, Inc., a WarnerMedia Company. All rights reserved.
