The Department of Justice (DOJ) recently outlined proposed reforms to Section 230 of the Communications Decency Act of 1996.[1] Section 230 has been in place since the early days of the Internet and protects online platforms from liability for certain third-party posts. It has recently become a point of contention between Big Tech and the Trump Administration. Recently, a presidential tweet was labeled with a fact-checking message that described the content as “unsubstantiated.”[2] The President claimed the label was intended to chill his rights under the First Amendment and subsequently signed the Executive Order on Preventing Online Censorship, calling for review and clarification of the scope of Section 230. The Executive Order also directs the Secretary of Commerce, in consultation with the Attorney General, to petition the Federal Communications Commission for rule-making to clarify when a tech company’s removal or restriction of content could be deemed “not ‘taken in good faith.’”[3] Additionally, the Order encouraged the Federal Trade Commission to investigate “unfair or deceptive acts or practices” committed by online platforms.

How did this relatively small piece of legislation become the center of a heated debate?

By way of background, Section 230 shields websites from legal liability for posts, including comments, images, and videos, of third-party users. At the time this legislation was passed, the Internet was vastly different from what it is today. In the ’90s, as the tech world was beginning to grow, Congress sought to encourage that growth through statutory protections. In addition to immunity for posts left by users, Section 230 offers “Good Samaritan” protection from civil lawsuits if websites remove or moderate posts that they consider to be “obscene, lewd, lascivious, filthy, excessively violent, harassing, or otherwise objectionable.”[4] This way, websites can “clean up” posted content without worrying about being targeted with lawsuits for choosing to police and self-regulate their own domains, so long as they do so in “good faith.”

Section 230 allows platforms to remain open to all users to share, gather, and disseminate information. Websites, especially those with enormous user bases, host millions upon millions of individual posts every day. Even with dedicated moderators and advanced algorithms in place, examining every post to determine whether it is illegal or inappropriate is a huge undertaking, and even a robust review procedure is hardly fail-safe. But there is growing concern about who gets the final say over what is considered inappropriate and when it should be revised, removed, or labeled. Section 230 has become a flashpoint, raising complex First Amendment, online safety, and competition considerations.

On its own, the Executive Order may lack teeth unless Congress agrees with the President and passes legislation repealing or amending Section 230. But the rule-making prompted by the Executive Order could shift interpretation of the law, calling into doubt the wide protections tech companies now enjoy. Moreover, Attorney General William Barr has been vocal about his concerns regarding Section 230 and its protections, prompting the DOJ to examine the law seriously and propose a way forward.

In February, the DOJ hosted a one-day workshop called “Section 230 – Nurturing Innovation or Fostering Unaccountability?”, inviting both public and private stakeholders to discuss how the law’s role has changed since its enactment and whether it should be modified to account for this new era of Big Tech. The DOJ states that it also met with companies that attended or expressed interest in discussing Section 230, although it is unclear which companies those were.

Last month, following its 10-month review of the law, the DOJ released its recommendations for Section 230 reform.[5] Rather than seek a complete repeal of the legislation, the DOJ identified four key categories where reform should take place in order “to realign the scope of Section 230 with the realities of the modern internet.”[6] These four areas are (1) Incentivizing Online Platforms to Address Illicit Content, (2) Clarifying Federal Government Enforcement Capabilities to Address Unlawful Content, (3) Promoting Competition, and (4) Promoting Open Discourse and Greater Transparency.[7]

The first category seeks to strip protection from those who purposely facilitate or solicit unlawful content and would allow civil lawsuits involving child abuse, terrorism, and cyber-stalking to proceed, thus incentivizing websites to be proactive about tracking and removing illegal content. The second category proposes more government intervention through civil enforcement actions. The third category seeks to clarify that companies cannot use Section 230 to shield themselves from antitrust actions “where liability is based on harm to competition, not on third-party speech.”[8] Finally, the fourth category is aimed at refining the language of Section 230, including adding a definition of “good faith.”

Some argue that Section 230 should be updated to address potential dangers of the growing Internet that were not present in 1996. If this effort gains more traction, many view it as imperative that tech representatives be involved in the conversation, because they are the experts in devising the algorithms and training the moderators that track down illegal and harmful content. A company’s role and responsibility to police, remove, and/or label content may implicate complex First Amendment concerns, and there may not be a one-size-fits-all approach to updating Section 230 to address all posted content in all types of forums. Many will be watching to see whether there will be changes to this law that has helped fuel online growth.

[2] Twitter Safety (@TwitterSafety). “We added a label to two @realDonaldTrump Tweets about California’s vote-by-mail plans as part of our efforts to enforce our civic integrity policy. We believe those Tweets could confuse voters about what they need to do to receive a ballot and participate in the election process.” May 27, 2020, 10:54 p.m. tweet.
[3] Exec. Order on Preventing Online Censorship (May 28, 2020).
[4] 47 U.S.C. § 230(c)(2)(A).
[8] Id. at p. 4.