How Social Networks Set the Limits of What We Can Say Online

[Commentary]  We have handed to private companies the power to set and enforce the boundaries of appropriate public speech. That is an enormous cultural power to be held by so few, and it is largely wielded behind closed doors, making it difficult for outsiders to inspect or challenge. Platforms frequently, and conspicuously, fail to live up to our expectations—in fact, given the enormity of the undertaking, most platforms’ own definition of success includes failing users on a regular basis. The social media companies that have profited most have done so by selling back to us the promises of the web and participatory culture. But those promises have begun to sour. While we cannot hold platforms responsible for the fact that some people want to post pornography, or mislead, or be hateful to others, we are now painfully aware of the ways in which platforms invite, facilitate, amplify, and exacerbate those tendencies.

For more than a decade, social media platforms have portrayed themselves as mere conduits, obscuring and disavowing their active role in content moderation. But the platforms are now in a new position of responsibility—not only to individual users, but to the public more broadly. As their impact on public life has become more obvious and more complicated, these companies are grappling with how best to be stewards of public culture, a responsibility that was not evident to them—or us—at the start. For all of these reasons, we need to rethink how content moderation is done, and what we expect of it. And this begins by reforming Section 230 of the Communications Decency Act—a law that gave Silicon Valley an enormous gift, but asked for nothing in return.

[This essay is excerpted from "Custodians of the Internet" by Tarleton Gillespie, a principal researcher at Microsoft Research and an associate professor at Cornell University]
