Photo Credit: https://www.wkyufm.org/post/free-speech-debate-swirls-officials-block-social-media (last visited April 7, 2021).
Written By: Robert J. Griggs
Member, American Journal of Trial Advocacy
The First Amendment, at its core, serves to foster "an uninhibited marketplace of ideas,"[i] testing the truth of various ideas "in the competition of the marketplace."[ii] Social media sites such as Facebook and Twitter are important venues for people to express themselves and exercise their First Amendment rights. The Supreme Court has recognized that the internet and social media sites are important places for people to "speak and listen," observing that "social media users employ these websites to engage in a wide array of protected First Amendment activity."[iii] Users of Facebook, Instagram, Twitter, or YouTube can use these platforms to express their political views, share their art, post news stories, or simply post about their lives. In early 2018, the Pew Research Center conducted a study that found 68% of U.S. adults use Facebook, 35% use Instagram, and 24% use Twitter.[iv]

These sites not only allow users to post their own content, but also allow users to connect with others. This interconnectedness lets users find friends and others with similar interests and beliefs. Users can also send their content to others and can allow select people to view their content. Through moderators and the use of algorithms, the platforms decide how content is displayed to other users.[v] Social media platforms also have the power to edit user content, combine content, draft their own additions to the content, and even create advertisements with the user's content.[vi] These platforms are free for users and generate revenue through ads targeted specifically to their users.[vii]

Social media companies have become aware of their substantial role in providing a platform for speech.[viii] One example of this is a 2018 hearing before the Senate Select Committee on Intelligence with Jack Dorsey, the CEO of Twitter. Throughout the hearing, Dorsey emphasized the importance of "free and open exchange" and described Twitter as designed to be a "digital public square."[ix] Contrary to Dorsey's characterization, however, Twitter and other social media sites do not offer a truly "free and open exchange," as these sites have moderators and policies under which they may remove and censor content. These sites also determine, through algorithms and moderators, how user content is presented, who can see it, and when and where users can see it. As one scholar put it, social media sites "create rules and systems to curate speech out of a sense of corporate responsibility, but also . . . because their economic viability depends on meeting users' speech and community norms."[x] Speech posted on the internet "exists in an architecture of privately owned websites, servers, routers, and backbones," and its existence online is governed by the policies of private companies.[xi] More than a decade ago, First Amendment scholars predicted that the future of freedom of speech would be decided not in constitutional law, but through decisions about technological design, legislative and administrative regulation, the formation of new business models, and the collective activities of end users.[xii]
Recently, social media companies have come under heavy scrutiny regarding the user content they allow to remain on their sites and the user content they censor. Many people believe that these platforms do not do enough to rid their sites of hate speech and misinformation.[xiii] On the other hand, many believe that these platforms censor too much legitimate content and ban users who do not share the views of other members of the site.[xiv] At a Senate hearing, Facebook COO Sheryl Sandberg expressed the difficulty of determining which speech would and would not violate company standards barring hate speech.[xv] Both Dorsey and Mark Zuckerberg, the founder of Facebook, have been asked by House and Senate committees to respond to allegations of political bias in their platforms' content moderation decisions.[xvi] Legislators and scholars have questioned the intentions and motives of these platforms, and some have called for regulation of these social media sites, primarily focused on the way the sites police content.[xvii]
Existing federal law provides social media sites with certain protections in their treatment of user content. The platforms' decisions to remove or to host content are shielded from lawsuits for two reasons: the First Amendment applies only to governmental actors, not private companies,[xviii] and Section 230 of the Communications Decency Act protects social media companies from being held liable under federal or state law for these decisions.[xix]
The first protection given to social media platforms is the Free Speech Clause of the First Amendment. This clause provides that "Congress shall make no law . . . abridging the freedom of speech"[xx] and applies to the States through the Fourteenth Amendment. The First Amendment therefore applies only against government action, not private entities. Many plaintiffs have argued that social media platforms should be classified as state actors because these sites act like a public forum where individuals obtain news and debate one another, public functions that were traditionally and exclusively performed by the government. Many lower courts have rejected this argument, holding that the mere fact that social media companies hold their networks open for use by the public is insufficient to subject them to the First Amendment.[xxi] For example, in Cyber Promotions, Inc. v. America Online, Inc. (AOL), the district court held that although AOL opened its services to the public by connecting to the internet, it had not opened its property to the public by exercising any municipal power or providing any essential public service and, therefore, could not be considered a state actor.[xxii] The court went on to say that because there were alternative avenues for users to send messages, AOL's service was not essential and the First Amendment should not apply.[xxiii] The Supreme Court has sometimes applied the First Amendment against private companies when "there is a sufficiently close nexus between the State and the challenged action of the regulated entity so that the action of the latter may be fairly treated as that of the State itself."[xxiv] Lower courts have uniformly agreed that social media sites lack this entwinement with the government and that the First Amendment does not prevent social media providers from restricting users' ability to post content on their networks.
The second protection given to social media platforms is Section 230 of the CDA. Section 230 gives social media platforms broad immunity from suit when they decide to host or remove user content. This means that if a plaintiff claims that, by publishing certain content, a site committed defamation or negligence, or violated a state securities law, the suit would be barred.[xxv] Section 230 distinguishes between "interactive computer services" and "information content providers."[xxvi] An interactive computer service is "any information service, system, or access software provider that provides or enables computer access by multiple users to a computer server."[xxvii] Courts have considered platforms such as Facebook, Twitter, and Craigslist to be "interactive computer service" providers.[xxviii] Because of this, Section 230 offers broad immunity to these platforms when a litigant seeks to hold them liable for publishing, or not publishing, a user's content.[xxix] Section 230(c)(1) provides social media platforms with immunity from liability for hosting content.[xxx] Section 230(c)(2) provides immunity when these sites take good faith action to restrict access to content that the provider or users deem "objectionable."[xxxi] Where social media companies would otherwise be liable under existing law for their decisions to publish or restrict access to user content, those suits are barred by Section 230.[xxxii]
Freedom of speech is a fundamental human right that allows people to express their opinions and speak freely, which is essential to bring about change in society. People have always been leery of government regulation of free speech, yet have allowed private companies to exercise complete control over it. Facebook, Instagram, and Twitter provide platforms for millions of users to express themselves, debate with one another, and share information. The catch is that users must express themselves in the ways these platforms allow under their guidelines. Many people disagree with how this power is exercised: some believe certain content should be removed for inciting hate, while others believe content is banned or censored without good reason. Regardless of the argument, people must realize that social media platforms have been given broad immunities that make them virtually untouchable when they decide to publish, not publish, or censor content. In our pursuit to protect freedom of speech from government regulation, we are allowing the government to use social media as its proxy to regulate and control speech.
[i] Virginia v. Hicks, 539 U.S. 113, 119 (2003).
[ii] Abrams v. United States, 250 U.S. 616, 630 (1919) (Holmes, J., dissenting).
[iii] Packingham v. North Carolina, 137 S. Ct. 1730, 1735–36 (2017).
[iv] See, e.g., Aaron Smith & Monica Anderson, Social Media Use in 2018, Pew Research Ctr. (Mar. 1, 2018), https://pewrsr.ch/2FDfiFd.
[v] See, e.g., Paul Hitlin & Lee Rainie, Facebook Algorithms and Personal Data, Pew Research Ctr. (Jan. 16, 2019), https://pewrsr.ch/2Hnqr1o; Will Oremus, Twitter’s New Order, Slate (Mar. 5, 2017), https://bit.ly/2lMs0pU.
[vi] Lisa Jones, Does TikTok Own My Content & Videos?, Mangoful, https://mangoful.com/does-tiktok-own-content-videos/ (explaining that TikTok may use user content in its advertisements royalty-free and can alter that content however it deems fit).
[vii] See, e.g., Kalev Leetaru, What Does It Mean For Social Media Platforms To “Sell” Our Data?, Forbes (Dec. 15, 2018), https://bit.ly/2W5MfRL; Louise Matsakis, Facebook’s Targeted Ads Are More Complex Than It Lets On, Wired (Apr. 25, 2018, 4:04 PM), https://bit.ly/2DZ1CG0.
[viii] See, e.g., Foreign Influence Operations’ Use of Social Media Platforms: Hearing Before the S. Select Comm. on Intelligence, 115th Cong. (2018) [hereinafter Hearing on Foreign Influence Operations] (statement of Jack Dorsey, CEO of Twitter) (“[W]e believe that the people use Twitter as they would a public square and they often have the same expectations that they would have of any public space. For our part, we see our platform as hosting and serving conversations.”); Facebook, Social Media Privacy, and the Use and Abuse of Data: Hearing Before the S. Comm. on the Judiciary and the S. Comm. on Commerce, Sci. & Transp., 115th Cong. (2018) (statement of Mark Zuckerberg, CEO of Facebook) (“[W]e consider ourselves to be a platform for all ideas.”).
[ix] Hearing on Foreign Influence Operations, supra note 8.
[x] Kate Klonick, The New Governors: The People, Rules, and Processes Governing Online Speech, 131 Harv. L. Rev. 1598, 1625 (2018).
[xi] Jonathan Peters, The “Sovereigns of Cyberspace” and State Action: The First Amendment’s Application—or Lack Thereof—to Third-Party Platforms, 32 Berkeley Tech. L.J. 989, 990 (2017).
[xii] Jack M. Balkin, Free Speech and Press in the Digital Age: The Future of Free Expression in a Digital Age, 36 Pepp. L. Rev. 427, 427 (2009).
[xiii] E.g., Anne Applebaum, Regulate Social Media Now. The Future of Democracy is at Stake., Wash. Post (Feb. 1, 2019), https://wapo.st/2T4SmYS; John Carroll & David Karpf, How Can Social Media Firms Tackle Hate Speech?, Knowledge at Wharton U. Penn. (Sept. 22, 2018), https://whr.tn/2DuFeFH; David Dayen, Ban Targeted Advertising, The New Republic (Apr. 10, 2018), https://bit.ly/2GRrT7z; Danielle Kurtzleben, Did Fake News On Facebook Help Elect Trump? Here’s What We Know, NPR (Apr. 11, 2018, 7:00 AM), https://n.pr/2GPdMDP; Caroline O’Donovan & Logan McDonald, YouTube Continues to Promote Anti-Vax Videos as Facebook Prepares to Fight Medical Misinformation, Buzzfeed News (Feb. 20, 2019), https://bit.ly/2IWjIN5. Cf., e.g., Mehreen Khan, More ‘Hate Speech’ Being Removed from Social Media, Financial Times (Feb. 2, 2019), https://on.ft.com/2EO6YCR.
[xiv] Eli Rosenberg, Facebook Censored a Post for ‘Hate Speech.’ It Was the Declaration of Independence., Wash. Post (July 5, 2018), https://wapo.st/2SRj4zN; Liam Stack, What Is a ‘Shadow Ban,’ and Is Twitter Doing It to Republican Accounts?, N.Y. Times (July 26, 2018), https://nyti.ms/2NRWLYC.
[xv] Hearing on Foreign Influence Operations, supra note 8. See also, e.g., Jason Koebler & Joseph Cox, The Impossible Job: Inside Facebook’s Struggle to Moderate Two Billion People, Vice Motherboard (Aug. 23, 2018, 1:15 PM), https://bit.ly/2wtRboq.
[xvi] See, e.g., William Cummings, Republican Lawmakers Go after Facebook CEO Zuckerberg for Anti-Conservative Bias, USA Today (Apr. 11, 2018), https://bit.ly/2BTQ5VH; Mattathias Schwartz, Facebook and Twitter’s Rehearsed Dance on Capitol Hill, The New Yorker (Sept. 6, 2018), https://bit.ly/2VqgDXG.
[xvii] Katy Steinmetz, Lawmakers Hint at Regulating Social Media During Hearing with Facebook and Twitter Execs, Time (Sept. 5, 2018), https://bit.ly/2Rv4tgX.
[xviii] See, e.g., Peters, supra note 11, at 992.
[xix] See, e.g., Eric Goldman, Online User Account Termination and 47 U.S.C. § 230(c)(2), 2 U.C. Irvine L. Rev. 659, 662 (2012).
[xx] U.S. Const. amend. I.
[xxi] See Prager Univ. v. Google LLC, No. 17-cv-06064-LHK, 2018 U.S. Dist. LEXIS 51000, at *23–24 (N.D. Cal. Mar. 26, 2018); Quigley v. Yelp Inc., No. 17-cv-03771-RS, 2017 U.S. Dist. LEXIS 103771, at *4 (N.D. Cal. Jan. 22, 2018); Estavillo v. Sony Computer Entm’t Am. Inc., No. C-09-03007 RMW, 2009 U.S. Dist. LEXIS 86821, at *4 (N.D. Cal. Sept. 22, 2009).
[xxii] Cyber Promotions, Inc. v. Am. Online, Inc., 948 F. Supp. 436, 442 (E.D. Pa. 1996) (internal quotation mark omitted).
[xxiii] Id. at 443.
[xxiv] Jackson v. Metro. Edison Co., 419 U.S. 345, 351 (1974).
[xxv] 47 U.S.C. § 230.
[xxvi] 47 U.S.C. § 230(f).
[xxvii] Id. § 230(f)(2).
[xxviii] E.g., Klayman v. Zuckerberg, 753 F.3d 1354, 1357 (D.C. Cir. 2014); Fields v. Twitter, Inc., 217 F. Supp. 3d 1116, 1121 (N.D. Cal. 2016); Chicago Lawyers’ Comm. for Civil Rights Under Law, Inc. v. Craigslist, Inc., 519 F.3d 666, 671 (7th Cir. 2008).
[xxix] 47 U.S.C. § 230(c).
[xxx] Id. § 230(c)(1).
[xxxi] Id. § 230(c)(2).
[xxxii] See, e.g., Klayman v. Zuckerberg, 753 F.3d 1354, 1358–59 (D.C. Cir. 2014) (discussing scope of immunity provided by Section 230).