Friday’s massacre exemplified the problem of expecting tech companies to self-police content.
The hate-filled terror rampage at two mosques in Christchurch, New Zealand, was meticulously designed to maximize the number of witnesses around the globe, highlighting the difficulty in putting a lid on extremist hate that spreads online.
The suspected gunman did everything he could to make his shooting spree go viral. He live-streamed the attack on social media, wearing a body camera that made the footage resemble a first-person video game. He shared a rambling 74-page manifesto espousing white supremacy that was full of memes and Easter eggs meant to invite attention from all corners of the internet and admiration from other extremists who live extremely online. The shooter had laid a trap across the internet that exploited the newsworthiness of the attack and leaned into people’s inclination to gawk at horror and violence. Even professional journalistic institutions gave in to the temptation to air video of the massacre.
Scrubbing the video from the internet was like playing a game of whack-a-mole. Facebook quickly removed the alleged gunman’s Facebook and Instagram accounts — but not because its algorithm or moderators had flagged the violent content in real time. New Zealand authorities had to ask for the video to be taken down. Internet service providers in New Zealand rushed to “close off” websites that were distributing the video, but a number of copycat sites immediately started popping up.
It soon didn’t matter that the original video was removed. The clip had already been downloaded and re-uploaded faster than tech companies could respond. Facebook alone says it removed 1.5 million videos within the first 24 hours of the attack. And those are just the copies it was able to catch.
Friday’s massacre exemplified a larger problem that’s plaguing the internet. Platforms are struggling to self-police problematic content created by their users, while the lawmakers who would ostensibly impose regulations are either too reluctant or ill-equipped to do so — and many in both camps are predisposed to treat far-right rhetoric less seriously than other forms of extremism, to boot.
As the death toll rises — 50 lives have now been lost in Friday’s shooting, making it one of the deadliest terror attacks carried out by a far-right extremist in recent memory — the attack adds extra weight to the question that tech companies, policymakers, and social media users have been asking: How do you effectively police online hate?
The shooter’s viral video outpaced social media companies’ content moderation
The world’s largest tech companies were forced to scramble on Friday to keep the gunman’s violent footage and screed from spreading. Facebook said it was removing any praise or support of the shooting, and had a process for flagging the digital fingerprints of the disturbing material. YouTube said it was “working vigilantly” to remove violent footage, while Twitter said it suspended the account that posted the original video. Reddit on Friday eventually resorted to taking down two infamous subreddits, r/watchpeopledie and r/gory.
Despite those efforts, videos of the attack were easy to find through simple searches online, even hours and days after the initial shooting spree. The swift dissemination highlights how ill-equipped tech companies remain in addressing the vile, racist, and excessively violent content that’s being shared on their platforms.
Took me about 30 seconds to find YouTube videos of the ripped livestream: pic.twitter.com/TFkQHIqQbf
— Jason Abbruzzese (@JasonAbbruzzese) March 15, 2019
Moderators already face an uphill battle in keeping offensive and violent content off their platforms; the Christchurch terror attack shows the difficulty of catching deeply problematic video live-streams in real time.
For one, it’s generally easier for software to scan text and offensive comments than moving images in a video. But even when the technical tools exist, policing breaking news poses unique problems. YouTube, for example, does have a system for automatically removing copyrighted content or prohibited materials, and told The Verge’s Julia Alexander that any exact re-uploads of the alleged shooter’s video would be automatically deleted. But the algorithm can’t be used to tamp down on edited versions of the Christchurch massacre, because YouTube wants to “ensure that news videos that use a portion of the video for their segments aren’t removed in the process”:
YouTube’s safety team thinks of it as a balancing act, according to sources familiar with their thinking. For major news events like yesterday’s shooting, YouTube’s team uses a system that’s similar to its copyright tool, Content ID, but not exactly the same. It searches re-uploaded versions of the original video for similar metadata and imagery. If it’s an unedited re-upload, it’s removed. If it’s edited, the tool flags it to a team of human moderators, both full-time employees at YouTube and contractors, who determine if the video violates the company’s policies.
That process is not just traumatizing for the individual moderators who are forced to watch the horrific footage; it’s also an imperfect way to limit the footage’s reach — particularly in a fast-moving event like Friday’s tragedy.
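The distinction described above — automatic removal for exact re-uploads, human review for edited variants — maps onto a common pattern in content matching: a cryptographic hash catches byte-identical files, while a perceptual hash tolerates re-encoding and light edits. The Python sketch below is a minimal illustration of that general pattern, not YouTube’s actual Content ID-style tooling; the choice of SHA-256, the dHash on a single pre-extracted frame, and the 10-bit Hamming-distance threshold are all illustrative assumptions.

```python
# Minimal sketch of a two-tier re-upload filter (illustrative only).
# Exact copies are caught with a cryptographic hash; edited variants are
# scored with a simple perceptual hash (dHash) on a representative frame
# and routed to human review. All names and thresholds are hypothetical.
import hashlib
from PIL import Image  # pip install Pillow

KNOWN_FILE_HASHES = set()   # SHA-256 digests of known prohibited files
KNOWN_FRAME_HASHES = []     # dHash values of frames from known videos


def sha256_of_file(path: str) -> str:
    """Cryptographic fingerprint: identical bytes yield an identical hash."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest()


def dhash(image_path: str, hash_size: int = 8) -> int:
    """Difference hash of an image; robust to re-encoding and mild edits."""
    img = Image.open(image_path).convert("L").resize((hash_size + 1, hash_size))
    pixels = list(img.getdata())
    bits = 0
    for row in range(hash_size):
        for col in range(hash_size):
            left = pixels[row * (hash_size + 1) + col]
            right = pixels[row * (hash_size + 1) + col + 1]
            bits = (bits << 1) | (1 if left > right else 0)
    return bits


def hamming(a: int, b: int) -> int:
    """Number of differing bits between two hashes."""
    return bin(a ^ b).count("1")


def triage_upload(video_path: str, frame_path: str) -> str:
    """Return 'remove', 'human_review', or 'allow' for a new upload."""
    if sha256_of_file(video_path) in KNOWN_FILE_HASHES:
        return "remove"  # unedited re-upload: safe to act on automatically
    frame_hash = dhash(frame_path)
    if any(hamming(frame_hash, known) <= 10 for known in KNOWN_FRAME_HASHES):
        return "human_review"  # likely an edited variant: needs a person
    return "allow"
```

The design mirrors the trade-off moderators face: the exact-match path can be fully automated with little risk, while anything that merely resembles known footage still lands in front of a human reviewer.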
Tech companies are expected to self-police. So far, they’re falling short.
At this point, in theory, tech companies should be well-practiced in the art of blocking far-right hate speech and violence from their platforms. They’ve been having to deal with it for years.
After the 2017 Unite the Right rally of neo-Nazis and white supremacists in Charlottesville, Virginia — where a woman was mowed down and killed by an avowed Nazi sympathizer — tech companies faced intense public pressure to block prominent instigators of explicit far-right extremism. Twitter suspended a number of white supremacists and prominent provocateurs — including Milo Yiannopoulos, Alex Jones, and Gavin McInnes — but was hesitant to target other alt-right leaders like Richard Spencer. Gab and the Daily Stormer, two havens for neo-Nazis, were similarly banished to the darker recesses of the internet. Reddit quarantined hate-fueled subreddits, while other companies like PayPal, GoDaddy, and Squarespace blocked white supremacists from using their services.
In effect, individual leaders and groups were targeted in response to a high-profile flashpoint in American politics and culture. But for many critics, those actions did little to address the underlying proliferation of racist and white supremacist ideas peddled online.
And even minimal efforts at reform have come with costs for the social media giants — big ones. As Vox’s Emily Stewart noted after Facebook’s stock saw the biggest one-day drop in history last fall (with $119 billion wiped off of its value after the company reported slower-than-expected revenue growth), social media companies’ efforts to address issues with their platforms garner “enormous backlash from Wall Street.”
The message from investors is clear: They’re nervous about what bad headlines and subsequent changes from social media platforms could do to their bottom lines. If Twitter and Facebook police their sites in a way that affects engagement or cracks down on content, or if privacy controls that ask users to opt in to their data being shared lead to more of them opting out, ad dollars could fall. And hiring workers to increase privacy protections and monitor activity is expensive.
… This week offers a lesson we don’t necessarily want executives to take away: try to be better, and potentially be severely punished by investors.
Many companies only start to take action on long-standing issues when the financial risks of not doing anything become higher than the likely costs they’ll encounter.
YouTube, for example, is under fire for failing to adequately combat conspiracy theories and prevent child exploitation from circulating on its platform. Its algorithm has a troubling record of surfacing and recommending content that violates its own policies. Major advertisers — including Disney and Nestle — started to bolt earlier this year after finding that their ads were appearing on videos that drew offensive and sexually explicit comments aimed at children. In response, YouTube purged hundreds of accounts and said it would change the way new videos are elevated and surfaced, following up on a 2017 crackdown prompted by reports that videos attracting predatory comments were being recommended to kids.
Some lawmakers are growing impatient with tech companies’ self-regulation — but it’s not clear they can do it any better
Even as platforms have tried to regulate themselves in recent years, some policymakers’ patience for letting them do so is growing short. But the legislative solutions some of them have proposed — where they have proposed any at all — also struggle to match the pace of change in internet culture and in the communities that foster extremist ideas and behaviors.
Congress so far has struggled to grapple with — or even understand — the many tentacles of problems plaguing social networks, from tackling the spread of misinformation to regulating how sites handle user data and privacy.
Some members of Congress have been woefully ill-prepared to even talk about tech issues (during one hearing last year, a lawmaker asked the Google CEO questions about his iPhone). And even when they are interested and equipped to talk about regulating the internet, many US lawmakers have been “reticent to clamp down at the risk of harming growth,” Stewart noted:
In a Senate hearing in April, Sen. Orrin Hatch (R-UT) asked Zuckerberg what “sorts of legislative changes” he thought should be enacted to prevent a Cambridge Analytica repeat. Sen. Lindsey Graham (R-SC), who also pressed Zuckerberg on whether Facebook is a monopoly, asked the executive to submit some proposed regulations to him.
Still, interest is growing. In the 2020 presidential primary race, Democratic candidates have vowed to take on Big Tech — Sen. Elizabeth Warren has gone as far as proposing to break up Google, Facebook, and Amazon, while Sen. Amy Klobuchar is expected to make tech reform a banner issue for her campaign.
There’s a growing appetite for reform elsewhere in the world. The European Union took a stand on privacy concerns with the General Data Protection Regulation, or GDPR, a law enacted last year to compel transparency around the data that companies collect and how it is used. And now some countries want to crack down on extremist content, too.
A British parliamentary committee wants Facebook to be held legally liable for the content posted on its platform. The committee recently wrapped up an 18-month investigation into the social media site, finding that it violated data privacy and competition laws. And in the wake of the Christchurch terror attacks, British officials are warning that tech companies should be “prepared to face the force of the law” if they don’t put a lid on the spread of hateful messages.
You really need to do more @YouTube @Google @facebook @Twitter to stop violent extremism being promoted on your platforms. Take some ownership. Enough is enough https://t.co/GTSgRufOow
— Sajid Javid (@sajidjavid) March 15, 2019
The response to Islamic extremism online is often very different from the response to white supremacy
It’s well documented that social media has played an important role in helping fuel extremism and hate. Just look at the spread of ISIS, which notoriously leveraged and exploited platforms to recruit new members and promote propaganda. But more often than not, US authorities focus on Islamic extremism, even as homegrown right-wing terror has been on the rise.
That holds true for the tech companies as well. Even as they worked up solutions to combat ISIS online, they’ve been flat-footed in their response to white nationalism and white supremacy. Last year Motherboard found that while YouTube was cracking down on ISIS recruitment videos, footage promoting neo-Nazi propaganda stayed online for months and even years.
And when researchers from the Program on Extremism at George Washington University compared the online behavior of far-right extremists with that of ISIS supporters, they found that growth in white nationalist movements outpaced Islamic extremism by virtually every metric.
The white nationalist datasets examined outperformed ISIS in most current metrics and many historical metrics. White nationalists and Nazis had substantially higher follower counts than ISIS supporters, and tweeted more often. ISIS supporters had better discipline regarding consistent use of the movement’s hashtags, but trailed in virtually every other respect. The clear advantage enjoyed by white nationalists was attributable in part to the effects of aggressive suspensions of accounts associated with ISIS networks.
Part of that could be the difficulty companies face in identifying offensive far-right content. As seen with the Christchurch manifesto, far-right extremism has a unique life online, with its own language embedded in memes and “shitposts” that is difficult to decipher. As Vox’s Aja Romano outlines in a fantastic rundown of the manifesto’s underlying message, the alt-right has mastered the art of online trolling to “distort what their actual message is, so they can claim plausible deniability that their message is harmful or bad.”
But leaving it unchecked has consequences: The surge in online activity coincides with a rise in real-world hate, particularly in the US. One study found that the number of far-right terror attacks in America more than quadrupled over the first year of Donald Trump’s presidency.
In the last year alone, there have been a number of high-profile flare-ups of far-right violence. A US Coast Guard lieutenant and self-proclaimed white nationalist had stockpiled weapons and ammunition with plans to stage an attack targeting Democratic politicians, journalists, and judges. Last fall’s Pittsburgh shooting targeting Jews at the Tree of Life synagogue left 11 dead. In October, a man sent 13 pipe bombs to prominent Democrats and critics of Trump.
None of those incidents prompted major reform efforts on tech companies’ parts. But in light of the graphic massacre in New Zealand, there’s a chance the conversation around right-wing extremism may change. The staggering violence of ISIS’s campaign helped define it as a terror-driven organization and made tech companies and governments alike get serious about combatting its propaganda online. Are they prepared to do the same with white supremacy?
The Christchurch shooter livestreamed his attack. The video was disseminated across the internet even as platforms desperately worked to remove it.
— Ellie Hall (@ellievhall) March 16, 2019
It will be interesting to see if platforms implement a similar “zero-tolerance, you post this video or a still image from it and you’re permabanned” policy with the Christchurch attack video.
— Ellie Hall (@ellievhall) March 16, 2019