The Joint Center Looks at How the Law that Created the Internet Impacts Black Communities
Friday, May 9, 2025
Weekly Digest
You’re reading the Benton Institute for Broadband & Society’s Weekly Digest, a recap of the biggest (or most overlooked) broadband stories of the week. The digest is delivered via e-mail each Friday.
Round-Up for the Week of May 4-9, 2025
Some longtime readers may recall that Al Gore created the internet, though they also know that some credit should be shared with Section 230 of the Communications Decency Act. The “law that created the internet” generally immunizes online platforms such as Facebook, YouTube, Amazon, and Uber from liability for third-party user content (e.g., posts, comments, videos) and for the moderation of that content. The law has long been discussed as both a protector of free speech and an enabler of platform abuse. However, researchers at the Joint Center for Political and Economic Studies recently brought to light a widely overlooked aspect of Section 230's implementation and its effects on online platforms: the impact on Black communities.
Original research co-authored by Spencer Overton, Patricia Roberts Harris Research Professor of Law at the George Washington University Law School and former Joint Center president, and Catherine Powell, Eunice Hunton Carter Distinguished Research Scholar and Professor of Law, explores how Section 230 creates opportunities and dangers alike for Black communities online. “The Implications of Section 230 for Black Communities,” sponsored by the Joint Center, marks the first time Section 230 has been explored solely from a Black perspective. Joint Center Director of Technology Policy Danielle Davis also wrote three briefs highlighting the main points of this research.
Their analysis explores arguments for and against amendments to Section 230. The following is a quick recap of this new work.
I. Third-Party Content
Immunity from liability for third-party content empowers Black communities, enables free expression, and fosters innovation, but also allows harmful practices to persist.
Section 230 of the Communications Decency Act grants online platforms immunity from liability for user-generated content. Subsection (c)(1) specifically grants platforms immunity from being treated as the publisher or speaker of any information provided by another user.
This legal protection ensures that, unlike traditional media platforms, online platforms—such as Facebook, YouTube, and X (formerly known as Twitter)—are not treated as publishers or speakers of platform users’ content. As a result, these platforms can host (and remove) a wide range of content without being held legally responsible for what users say or share.
While Section 230’s broad immunity supports the flourishing of Black online communities, it also leads to a lack of accountability for content that can negatively affect Black users’ online experiences. On the one hand, this immunity has been instrumental in fostering the growth and innovation of digital spaces, particularly benefiting diverse communities such as Black online communities. Section 230 provides online spaces where social activism can thrive, where issues often overlooked by traditional media can be amplified, and where Black-owned businesses and creatives can grow and launch successful careers in music, video, and other creative fields. On the other hand, the same protections that encourage free expression also allow harmful online activities, such as anti-Black harassment, white supremacist organizing, and discriminatory practices in housing, employment, and credit, to persist in online spaces.
II. Content Moderation
How the Communications Decency Act affects platform moderation practices, including potential biases in content moderation that often disproportionately impact Black users.
Courts have interpreted the Communications Decency Act to grant platforms broad immunity for their content moderation decisions. As a result, private platforms can proactively remove or downrank harmful content that affects Black communities, while also amplifying content that counters anti-Black racism and disinformation. However, Section 230’s content moderation immunity presents not only these benefits but also significant challenges for Black users.
Section 230’s content moderation immunity also gives platforms the ability to engage in discriminatory content moderation, such as removing content from Black creators even when those creators follow the platform’s guidelines. The same immunity shields platforms that choose not to remove harmful anti-Black content for reasons such as boosting user engagement, increasing ad revenue, or responding to political pressure.
III. Section 230 Reforms
The implications of various Section 230 reform proposals with a focus on their unique impact on Black communities.
While Section 230 does not fully meet the needs of internet users today, repealing it outright is not a reasonable solution. Doing so could stifle political activism, entrepreneurship, and free expression, particularly in Black communities, while simultaneously worsening the very issues Section 230 seeks to address, such as disinformation, discrimination, and hate speech. Yet failing to amend Section 230 enables the continuation of harmful content moderation practices that allow illegal harassment and the organizing of white supremacist groups to thrive online.
Potential reforms to Section 230 will not automatically hold platforms liable for all content. Plaintiffs would still need to meet legal standards, and courts could allocate damages between platforms and other responsible parties. Reforms could also encourage over-moderation, as platforms may respond by excessively restricting content to avoid legal liability. Therefore, any proposed changes must be carefully designed to avoid unintended consequences that could negatively impact Black communities.
Reforms must make it clear that platforms are not protected when their algorithms or data practices contribute to unlawful discrimination. In the absence of legislative changes, courts should apply the “material contribution” test to determine how platform design and algorithms enable illegal activities that disproportionately harm Black communities.
Each potential reform has its pros and cons, and the best path forward may involve a combination of solutions. However, even reforms designed to benefit Black communities could face backlash, resulting in federal or state policies that weaken content moderation and amplify harmful content, including hate speech and white supremacy, targeting Black people.
1. Civil Rights Carve-Outs
One potential solution is to create a carve-out in Section 230 to enforce civil rights laws. Section 230 already includes exceptions for federal criminal law, intellectual property law, and federal sex trafficking laws, so adding a civil rights carve-out would be consistent with existing practices.
The fight against discrimination is just as critical for Black communities as for other groups that would benefit from a civil rights carve-out. Davis says such a carve-out would hold social media platforms accountable for discriminatory practices, such as housing ads excluding Black users or biased treatment of Black guests on rental and ride-sharing platforms. It would reinforce the principle that platforms facilitating discrimination should not be shielded by Section 230 protections. Allowing platforms to profit from discriminatory ads that would be illegal in print under the Fair Housing Act is indefensible.
Defining the civil rights carve-out presents challenges. The complexity of varying state laws could make it difficult to define violations, possibly prompting platforms to over-moderate content on race to avoid legal risks. One way to mitigate this risk is to limit the carve-out to specific federal statutes, such as the Voting Rights Act or the Ku Klux Klan Act. These laws, along with provisions like those in the Violence Against Women Act targeting hate crimes, could hold platforms accountable for facilitating violence or suppressing voting rights.
2. Algorithmic Recommendations Carve-Outs
Another potential carve-out to Section 230 could target platforms that use algorithms to deliver and amplify content, addressing several challenges faced by Black communities. Platforms often use algorithms to drive engagement and profits, which can result in discriminatory practices, such as directing housing and job ads to white users while excluding Black users.
While some algorithm-driven practices may already fall outside of Section 230 immunity when they materially contribute to illegal activities such as discriminatory ad targeting, explicitly exempting algorithmic decisions from Section 230 would ensure a consistent legal standard. This would hold platforms accountable, not for the content of third-party ads, but rather for using algorithms that promote discrimination, such as steering housing ads away from Black users. Without this exemption, platforms are incentivized to profit from illegal discrimination and anti-Black practices.
Still, a broad carve-out for algorithmic content could also lead to unintended consequences. Rather than blaming the technology itself, accountability should focus on those who use it unlawfully. While some argue that current legal interpretations already exclude algorithms from Section 230 when they contribute to illegal acts, a statutory clarification would ensure courts consistently apply the material contribution test. Importantly, an algorithmic recommendation carve-out would not address lawful but harmful content, such as hate speech, which algorithms may amplify. It also raises constitutional concerns, particularly in defining terms such as “algorithm” and “amplification.”
3. Ad Carve-Outs
Another reform proposal suggests removing Section 230 immunity for paid advertisements, commonly referred to as the “ad carve-out.” This proposal would clarify that platforms are not immune from liability for economic discrimination in housing, employment, and financial services ads that are directed toward white users and exclude Black users. Black consumers are often disproportionately targeted by deceptive practices, such as payday lending schemes and student debt relief fraud. By implementing the ad carve-out, Davis says platforms would be encouraged to better vet their advertisers and prevent their use in fraudulent or discriminatory activities.
The reform would strip platforms of immunity whether the discrimination arises from biased advertisers using platform tools for racial targeting or from algorithms and data collection that deliver ads discriminatorily without advertisers’ knowledge. It also addresses ads that target Black voters with false information aimed at suppressing voter turnout, such as misleading boycott campaigns.
Section 230 protects platforms from liability for user-generated content in part because of its overwhelming volume, but paid ads are far fewer in number and therefore more feasible to monitor. It is reasonable, then, to expect platforms to vet paid ads and to avoid profiting from discriminatory practices.
4. Notice and Takedown Proposals
Notice-and-takedown reform would require platforms to remove illegal content within a specified timeframe after being notified, or risk losing legal immunity. This process is already in place for copyright infringement in the United States and for illegal content in the European Union, New Zealand, and South Africa, as well as defamatory content in the United Kingdom.
Unlike other proposals, the notice-and-takedown approach would likely impose fewer costs on platforms, reducing the risk of service cutbacks or over-moderation that could negatively impact Black activists, entrepreneurs, or creators. Platforms would only be held responsible for content they are made aware of and fail to remove within the designated timeframe, instead of being liable for all unlawful content on their platform.
Yet this proposal does not fully address the challenges Section 230 presents for Black communities. Black users may lack the resources to secure a court order proving that content violates federal or state laws, which is required under this reform. Additionally, bad actors could exploit the system to silence Black voices. For instance, a platform might take down a post advocating “Save Black Lives – De-Unionize the Police” due to a baseless complaint from white supremacists claiming the content violates federal law. Platforms might find it easier to comply with such takedown requests than challenge them.
5. Content Neutrality Proposals
Content neutrality proposals are based on claims that tech companies selectively enforce guidelines to censor certain content, such as disinformation about election fraud or alternative COVID-19 treatments, hate speech such as dehumanizing language about Black and immigrant communities, and incitement to violence such as “stop the steal” posts tied to the January 6, 2021 attack on the U.S. Capitol. Proponents often argue that these reforms are necessary to protect conservative speech from suppression by tech companies.
Some proposals in Congress aim to remove the phrase “otherwise objectionable” from the types of content platforms can moderate. These reforms seek to limit platforms’ ability to moderate content, which poses significant challenges because much harmful content—such as hate speech, voter suppression, and disinformation—may not fit into categories such as “obscene” or “violent.” As a result, Black communities could face increased exposure to unchecked harmful content. Additionally, platforms including short-term rental and ride-sharing apps could lose the ability to remove users engaging in racial discrimination.
Other content-neutrality reforms propose removing Section 230 immunity for platforms that restrict speech, including proposals to ban moderation of all content except illegal material or to target platforms for perceived political bias. States including Texas and Florida have passed laws limiting platforms’ moderation of user viewpoints or prohibiting the de-platforming of political candidates.
The argument that content moderation stifles free speech misunderstands the First Amendment, which applies to the government, not private companies. Social media platforms, as private entities, have the right to moderate content under their terms of service, and users agree to follow these community standards, which often prohibit hate speech, extremist content, misinformation, impersonation, bullying, and harassment.
Forcing platforms to treat all lawful content equally would exacerbate problems for Black communities, allowing hate speech, white supremacist content, and disinformation to spread. Currently, platforms have the discretion to remove harmful content, but content neutrality reforms would restrict their ability to do so.
6. Size-based Carve-Outs and Disclosure Requirements
Several proposed reforms to Section 230 can be combined with additional measures, such as size-based exemptions or disclosure mandates. A size-based carve-out could apply reforms only to platforms with more than five million users or $100 million in annual revenue, focusing on larger companies that play a significant role in spreading harmful content and have the resources to comply with new regulations. These exemptions are often included in reform proposals related to civil rights, algorithms, advertising, notice and takedown, and content neutrality.
Size-based exemptions would allow small, Black-owned startups to innovate without facing heavy regulatory burdens while ensuring more oversight for larger social media and sharing-economy platforms where Black users are highly active. But this approach has a downside: many threats to Black communities originate from smaller platforms such as 8chan, Gab, and Parler, all of which serve as hubs for white supremacy. Exempting these platforms could allow Section 230 protections to continue shielding harmful activities on these sites.
Disclosure reforms could require platforms to regularly share information about their content moderation standards and efforts, often as part of other Section 230 reforms. Such mandates would encourage platforms to engage in responsible moderation of harmful content such as hate speech, white supremacy organizing, and disinformation, without requiring government-imposed removal, which could raise constitutional concerns. Transparency would also shed light on algorithms and content practices, helping Black communities and policymakers better understand how discrimination, hate speech, and disinformation spread under Section 230 protections.
Excessive disclosure requirements, however, could deter platforms from moderating harmful content targeting Black communities. If platforms were required to provide detailed justifications for every removed or downranked post, along with potential penalties for content removal, then this could stifle effective moderation. Still, large platforms, particularly those already complying with regulations under the European Union’s Digital Services Act, could adapt to these requirements without significant burdens, reducing the risk of over-moderation.
IV. FCC on Section 230?
Federal Communications Commission Chairman Brendan Carr has expressed strong views on Section 230 and "the promotion of freedom of speech," while signaling his wish to rein in Big Tech. In March 2025, Chairman Carr sent a letter to YouTube and parent company Alphabet alleging that the companies were discriminating against faith-based programming.
In Project 2025, Carr proposed that the FCC change its interpretation of Section 230 in a way that eliminates the expansive, non-textual immunities that courts have read into the statute, among other modifications to the agency's interactions with the law. He also proposed that the FCC work with Congress on more fundamental Section 230 reforms that go beyond interpreting its current terms. Congress, he says, should ensure that platform companies no longer have carte blanche to censor protected speech.
For legislation to alter Section 230, Carr says that Congress should focus on general-use platforms instead of specialized ones. Similarly, Congress could legislate in a way that does not require any platform to host illegal content, child pornography, terrorist speech, or indecent, profane, or similar categories of speech that Congress has previously carved out.
Some of these proposed changes would certainly impact the Joint Center's concerns for Black communities online.
See the Research
- The Implications of Section 230 for Black Communities
- The Impact of Section 230 Reforms on Black Communities
- Opportunities and Challenges for Black Communities Due to Content Moderation
- Black Communities and the Immunity of Platforms Regarding Third-Party Content
Quick Bits
- Sen Capito Urges Sec Lutnick to Expedite BEAD Review Process
- Broadband Expansion Requires Federal and State Coordination
- Trump’s Tech Governance: Making Sense of the First 100 Days
- California Affordable Internet Bill Advances
Weekend Reads
- Public Knowledge Joins 16 Groups Urging FTC To Define Digital Ownership
- Estimating private costs in a descending clock auction: The FCC’s rural digital opportunity fund
- Resilience in telecommunications networks: A Korean case study
ICYMI from Benton
- Building on Our Foundation Towards Broadband for All
- The Digital Divide Isn't Getting Any Younger
- An Open Letter on the Future of BEAD
- What GAO Learned About Federal Broadband Programs
Upcoming Events
May 8––Winning the AI Race: Strengthening U.S. Capabilities in Computing and Innovation (Senate Commerce Committee)
May 13––How Americans Feel About AI—and Why It Matters for Policy (Center for Data Innovation)
May 14-15––Community First: The Future of Public Broadband Conference and Hill Day (American Association for Public Broadband)
Jun 1––Fiber Connect 2025 (Fiber Broadband Association)
The Benton Institute for Broadband & Society is a non-profit organization dedicated to ensuring that all people in the U.S. have access to competitive, High-Performance Broadband regardless of where they live or who they are. We believe communication policy - rooted in the values of access, equity, and diversity - has the power to deliver new opportunities and strengthen communities.
© Benton Institute for Broadband & Society 2025. Redistribution of this email publication - both internally and externally - is encouraged if it includes this copyright statement.
For subscribe/unsubscribe info, please email headlinesATbentonDOTorg