De-Platform Hate?

Benton Foundation

Friday, November 2, 2018

Weekly Digest


 You’re reading the Benton Foundation’s Weekly Round-up, a recap of the biggest (or most overlooked) telecommunications stories of the week. The round-up is delivered via e-mail each Friday.

Round-Up for the Week of October 29 - November 2, 2018

Robbie McBeath

A mass murderer shot and killed 11 people at the Tree of Life synagogue in Pittsburgh, Pennsylvania, on October 27, in what is believed to be the deadliest attack on the Jewish community in United States history. The mass shooting followed a week of reporting on a series of pipe bombs sent by a Florida terrorist to prominent Democrats, George Soros, and CNN.

Both men posted violent, hateful content online, including politically extremist views on immigration. 

The events tragically bring into focus, again, the very real danger of hateful political rhetoric. Could they also mean we’ll see tougher scrutiny of how the organized white-nationalist movement spreads its message inside the U.S.? What are the responsibilities of social-media platforms for the discourse they monetize?

I have written about this before, in August 2017, after a domestic terrorist killed counter-protester Heather Heyer in Charlottesville, Virginia:

The incidents this past weekend will be an indelible, dark mark in our nation’s history. At Benton, we believe that communications policy -- rooted in the values of access, equity, and diversity -- has the power to deliver new opportunities and strengthen communities to bridge our divides. These values are vital in a political climate swelling with hate and intolerance.

What is the responsibility of tech platforms to impede the spread of hateful speech? ... Is our President using his office to communicate in a way that stokes the flames of racial animosity?

Again, white nationalists have committed murder. And again, we have to ask what role communications policy should play in curbing hate, especially now that this speech is spreading from “mainstream” sites like Facebook and Twitter to alternative platforms.

The questions that arise when discussing the role of social media in policing hateful political rhetoric and protecting free speech are difficult to answer. Here’s how companies, activists, and policymakers are approaching the issue. 

Hate’s Alternative Platforms

The digital trails of the Tree of Life murderer and the Florida terrorist tell a dark story of how hateful political rhetoric flows online.

Kevin Roose at the New York Times reported that the pipe bomb suspect, Cesar Sayoc, “appeared to fit the all-too-familiar profile of a modern extremist, radicalized online and sucked into a vortex of partisan furor.” 

[A] closer study of his online activity reveals the evolution of a political identity built on a foundation of false news and misinformation, and steeped in the insular culture of the right-wing media. For years, these platforms [Facebook and Twitter] captured Mr. Sayoc’s attention with a steady flow of outrage and hyperpartisan clickbait and gave him a public venue to declare his allegiance to Mr. Trump and his antipathy for the president’s enemies.

Sayoc continued to post unabated as recently as October 24, two days after the discovery of the first bomb, even as authorities were conducting a manhunt. He was arrested on October 27.

Days before the Pittsburgh attack, an account matching the suspect’s name, Robert Bowers, published violent, anti-Semitic posts on Gab, a social networking site that’s become a haven for the loosely connected and somewhat ill-defined grouping of white supremacists, white nationalists, anti-Semites, neo-Nazis, neo-fascists, neo-Confederates, Holocaust deniers, conspiracy theorists, and other far-right hate groups known as the ‘alt-right.’

Gab was launched in 2016 as an alternative to traditional platforms such as Twitter and Facebook. Gab’s absolutist approach to speech—including what is often outright hate speech—has made the platform a safe haven for white supremacists and other far-right communities from all over the world.

In January, Bowers signed up for an account and began sharing anti-Jewish images, conspiracy theories about Jews controlling the world, and criticism of President Trump — who, he implied, was too accommodating of Jewish influence.

Andrew Torba, Gab’s 27-year-old creator and primary owner, contends that he and his site are blameless for Robert Bowers’ alleged actions.

After Bowers was named as a suspect in the mass shooting, Gab released a statement saying it “unequivocally disavows and condemns all acts of terrorism and violence.”

Torba thinks the bigger problem of hate online lies with the culture of Silicon Valley, which he believes is hostile to him and his political ideas. “These are not good people,” said Torba. “Many of them hate America and freedom. They are authoritarian cultural Marxists. Some, many of whom I am still friends with, are great people who I love, but the overwhelming majority are egomaniacs lusting for power and wealth.”

“Twitter and other platforms police ‘hate speech’ as long as it isn’t against President Trump, white people, Christians, or minorities who have walked away from the Democratic Party,” he wrote this week. “This double standard does not exist on Gab.”

How To Deal With Gab

Mainstream social-media platforms have taken action to police hate speech. Twitter and Facebook have spent recent months scrubbing their platforms of hateful content. Twitter banned former Breitbart writer Milo Yiannopoulos in 2016, for example, and Facebook was among the platforms that banned InfoWars’ Alex Jones in August.

Many companies are now withdrawing support from Gab. Before the shooting, Apple and Google had already prevented Gab from distributing its mobile application in their app stores, and this summer Microsoft threatened to stop hosting Gab’s website because of anti-Semitic posts. In the wake of the Pittsburgh shooting, payment-processing platforms Stripe and PayPal and cloud-hosting company Joyent also suspended service to Gab.

The night of the shooting, Torba posted on Gab’s Twitter account:

Today http://Gab.com spent all day working with law enforcement to ensure that justice is served.

For this, we have been no-platformed from: @stripe @paypal @joyent

In a matter of hours. This is direct collusion between big tech giants. @realDonaldTrump ACT!

The next day Torba wrote, “We apologize for nothing because we did nothing wrong. We are doubling down on free speech and individual liberty and nothing you say or do will change that.”

While large platforms have the market power to reduce hateful rhetoric, is that enough? Kevin Roose wrote, “Social media platforms like Facebook and Twitter, once guided by the principle of free speech, have come to realize that an anything-goes approach is ripe for exploitation, and ultimately bad for business.” 

John Herrman isn’t sure Silicon Valley gets it. In the aftermath of the tragedy in Charlottesville, he wrote:

These companies promised something that no previous vision of the public sphere could offer: real, billion-strong mass participation; a means for affinity groups to find one another and mobilize, gain visibility and influence. This felt and functioned like freedom, but it was always a commercial simulation. This contradiction is foundational to what these internet companies are. 

For Herrman, the trolls who post hate are, of course, responsible for what they do. But so are the companies that profit from it.

[W]hat gave these trolls power on platforms wasn’t just their willingness to act in bad faith and to break the rules and norms of their environment. It was their understanding that the rules and norms of platforms were self-serving and cynical in the first place. After all, these platforms draw arbitrary boundaries constantly and with much less controversy — against spammers, concerning profanity or in response to government demands. These fringe groups saw an opportunity in the gap between the platforms’ strained public dedication to discourse stewardship and their actual existence as profit-driven entities, free to do as they please. Despite their participatory rhetoric, social platforms are closer to authoritarian spaces than democratic ones. It makes some sense that people with authoritarian tendencies would have an intuitive understanding of how they work and how to take advantage of them. [emphasis added]

Some activists understand this commercial pressure and are asking companies to put their commitments into writing.

De-Platforming Hate

Last week, the Benton Foundation joined 40 civil and human rights organizations in asking online companies to adopt corporate policies that prohibit hateful activities on their platforms. “They should make it clear what type of conduct is and is not permitted on their platform and remove any U.S. clients that violate those corporate policies,” wrote Benton Foundation Executive Director Adrianne Furniss. “Violence, threats, intimidation, harassment, defamation, and targeting cross a line to impose will instead of reason, to give one’s interests advantage by denying another person’s safety. Peace rests on the inherent rights and dignities of every individual.”

Internet companies must stop ignoring the racism and other forms of hate that are prevalent on their platforms and acknowledge that the hateful discourse of the few silences the speech of the marginalized many. -- Free Press' Carmen Scurato and Jessica González

Two days before the attack in Pittsburgh, Free Press Senior Policy Counsel Carmen Scurato and Deputy Director Jessica González wrote:

White supremacist organizations are using a multitude of internet platforms to organize, fund and recruit for their movements to normalize and promote racism, sexism, xenophobia, religious bigotry, homophobia and transphobia, and to coordinate violence and other hateful activities. These coordinated attacks not only spark violence in the offline world, they also chill the online speech of those of us who are members of targeted groups, frustrating democratic participation in the digital marketplace of ideas and threatening our safety and freedom in real life.

Scurato and González called on Internet companies to:

  • Stop ignoring the racism and other forms of hate that are prevalent on their platforms
  • Acknowledge that the hateful discourse of the few silences the speech of the marginalized
  • Make explicit and concrete commitments to tackle bigotry.

Removing Platforms’ ‘Prized Legal Shield’

Is withdrawing financial support from platforms that host hateful content enough to reduce their influence? Will corporate policies be enough to tackle hate? Some Members of Congress have proposed going further.

“For lawmakers already concerned about incendiary, extreme content online, [Robert Bowers’s] posts offered the latest reason to consider new regulation of the tech industry writ large,” wrote Tony Romm.  “Some questioned whether Silicon Valley’s prized legal shield — a decades-old law that protects social media giants from lawsuits — might be in need of an overhaul.”

Section 230 of the Communications Decency Act, adopted in 1996, generally spares online platforms from being held accountable for what their users post on their sites. The law also helps companies maintain nuanced policies prohibiting certain kinds of content, including violence and hate speech, without the threat of liability if they remove a user’s post — or kick someone off entirely.

Senate Intelligence Committee Vice Chairman Mark Warner (D-VA) said: 

I have serious concerns that the proliferation of extremist content — which has radicalized violent extremists ranging from Islamists to neo-Nazis — occurs in no small part because the largest social media platforms enjoy complete immunity for the content that their sites feature and that their algorithms promote.

In an October 11 interview, Senator Warner noted that Section 230 has already seen some changes:

The social-media companies fight any changes to Section 230 as if it will provoke the complete destruction of the public square. Obviously, that is not the case. There have been changes around child pornography. And you can’t print how to make a bomb. Most recently, we’ve imposed restrictions around sex trafficking.... But perhaps there is a decency doctrine that might be industry-administered. Or is there some kind of mechanism we could impose that would say, ‘if you don’t have some kind of self-cleanup, there will be legislative changes’?

In a debate on October 16, Sen. Ted Cruz (R-TX), asked whether Congress should regulate online social media, said:

Right now, big tech enjoys an immunity from liability on the assumption they would be neutral and fair. If they’re not going to be neutral and fair, if they’re going to be biased, we should repeal the immunity from liability so they should be liable like the rest of us.

The rationales for tweaking Section 230 differ: Democratic lawmakers seek to alter it to make platforms more responsible for the rhetoric they host and monetize, while Republican lawmakers seek to use it to force platforms to be neutral and fair. Either way, the legal protections that have kept large platforms from being liable for the content they host may soon change, if not end.

Conclusion

For many, fears of hateful online rhetoric turning into real-life tragedy came true again this week. Clear-cut white-nationalist political violence broke out again. Two broken, delusional men had their hateful beliefs stoked by what they read and shared on social media. With each tragedy, it becomes more apparent that communications policy must grapple with the difficult task of combating online hate.


Quick Bits

Weekend Reads (resist tl;dr)

ICYMI from Benton

November 2018 Events

Nov 13-15 -- FTC Hearing #7, Competition and Consumer Protection in the 21st Century

Nov 14 -- The Law and Economics of Data, University of Colorado Law School

Nov 14 -- Broadband Connectivity is Transforming Healthcare, NTIA webinar

Nov 15 -- Is the Platform Economy Forcing Us to Reconsider Antitrust Enforcement?, Technology Policy Institute

Nov 15 -- Broadband Legislation in the Next Congress, SHLB Coalition webinar

Nov 19 -- Advisory Committee on Diversity and Digital Empowerment, FCC

Benton, a non-profit, operating foundation, believes that communication policy - rooted in the values of access, equity, and diversity - has the power to deliver new opportunities and strengthen communities to bridge our divides. Our goal is to bring open, affordable, high-capacity broadband to all people in the U.S. to ensure a thriving democracy.


© Benton Foundation 2018. Redistribution of this email publication - both internally and externally - is encouraged if it includes this copyright statement.


For subscribe/unsubscribe info, please email headlinesATbentonDOTorg

Kevin Taglang
Executive Editor, Communications-related Headlines
Benton Foundation
727 Chicago Avenue
Evanston, IL 60202
847-328-3049
headlines AT benton DOT org


Benton Foundation

PUBLIC INTEREST VOICES FOR THE DIGITAL AGE

