Communications Decency Act § 230: A “Serious Threat” to the Public’s Health
By: Emma Smith, Legal Researcher and J.D. Candidate, ASU Sandra Day O’Connor College of Law, and
Erica N. White, J.D., Research Scholar, Center for Public Health Law & Policy, ASU Sandra Day O’Connor College of Law
On July 15, 2021, U.S. Surgeon General Dr. Vivek H. Murthy released an advisory on the spread of misinformation during the COVID-19 pandemic, referring to misinformation as a “serious threat” to the public’s health. FDA Commissioner Robert Califf has made similar remarks, stating in April 2023 that misinformation is “contributing to the three- to five-year lower life expectancy in the United States compared to similar countries.” While the threat misinformation poses to the public’s health is not new, it reached unprecedented levels during the pandemic. So, too, has the need to address its spread legally, beginning with Section 230 of the federal Communications Decency Act (CDA) of 1996.
COVID-19 and Misinformation
Resulting in the deaths of over 1.1 million Americans, the COVID-19 pandemic represents one of the greatest public health emergencies of all time, as well as a seminal illustration of the negative impacts of misinformation on health outcomes. As NPR reported on May 16, 2022, one-third of COVID-19 deaths are attributable to unvaccinated individuals, many of whom were concerned about COVID-19 vaccine safety despite little evidence supporting those concerns. A 2021 Kaiser Family Foundation study found that 64% of unvaccinated adults believed, or were uncertain about, four or more misstatements about COVID-19, which influenced their decisions on whether to be vaccinated. Online and other sources of vaccine misinformation clearly contributed to hundreds of thousands of preventable COVID-19 deaths.
The Legal Rise of Online Misinformation
In 1995, the New York Supreme Court found in Stratton Oakmont, Inc. v. Prodigy Services Company that an internet service company could be held liable for defamation. The company did not make the libelous statements directly; rather, it was liable solely as a “publisher” (and not a mere “distributor”) of such content. Stratton further established that a platform could be deemed a publisher simply by policing content. Concerned that Stratton and similar cases would discourage the development of online media and deter sites from policing their content, Congress passed § 230 of the CDA the following year. Specifically, § 230 of the CDA provides legal protection from defamation claims to social media platforms that merely allow, or choose not to remove, harmful content posted by platform users.
Twenty-five years later, the CDA’s applicability remains largely unchanged, a fact that seems unlikely to shift based on recent U.S. Supreme Court assessments under the CDA and other laws. In May 2023, the Court issued its opinion in Twitter, Inc. v. Taamneh, determining that internet platforms were not liable under the Anti-Terrorism Act for merely allowing users to post content on their sites; such allowances do not equate to “providing substantial assistance” under the Act. In Gonzalez v. Google, LLC, the Court avoided addressing the scope of CDA § 230, finding that the plaintiff’s complaint, which alleged Google was liable for an ISIS terrorist attack because ISIS utilized Google for advertisements, would likely fail under Twitter. On May 30, 2023, the Court denied certiorari in Jane Does No. 1-6 v. Reddit, Inc., declining to rule on whether § 230 shields Reddit from liability for allowing platform users to post child pornography on its site.
Collectively, the Court’s reticence to rule allows social media platforms such as Twitter, Facebook, and Reddit to remain shielded from liability for harmful content published by users of their sites. In turn, the “legal incentives for platforms to respond to digital misinformation on critical health issues” are minimal. While some platforms have issued internal policies to address the spread of misinformation, content creators continue to post misinformation deleterious to the public’s health, openly and innovatively, via new technologies and the developing capabilities of artificial intelligence.
Time to Act
So much has changed since 1996, when CDA § 230 protections were enacted. The dangers of social media have become clearer, as evidenced by the riot at the U.S. Capitol on January 6, 2021, and by years of impacts related to human trafficking, suicide and self-harm, and substance abuse. Social media harms may exceed prior perceptions among legislators and the public. Yet these harms are real, evolving, and escalating. With public health and safety at risk, the time is now to seriously re-evaluate CDA § 230 protections.
The legal information and assistance provided in this document do not constitute legal advice or legal representation. Views expressed in this piece are those of the authors alone.