United States

Status: Free
Overall Score: 76 / 100
A. Obstacles to Access: 21 / 25
B. Limits on Content: 30 / 35
C. Violations of User Rights: 25 / 40
Last Year's Score & Status: 76 / 100, Free

Scores are based on a scale of 0 (least free) to 100 (most free). See the research methodology and report acknowledgements.

Overview

The internet in the United States remains vibrant, diverse, and largely free from government censorship, and the country’s legal framework provides some of the world’s strongest protections for free expression online. Increased federal investment in internet affordability programs has brought service to more people in recent years. A proliferation of electoral content that was false, misleading, or conspiracist created an unreliable online information environment and harmed public confidence ahead of the November 2022 midterm elections. The country still lacks a comprehensive federal privacy law, and Congress has failed to adequately reform disproportionate surveillance practices. State governments are increasingly pursuing legislation related to social media and data privacy; some laws passed during the coverage period effectively undermined access to information and free expression in the relevant states.

The United States is a federal republic whose people benefit from a competitive political system, a strong rule-of-law tradition, robust freedoms of expression and religious belief, and a wide array of other civil liberties. However, in recent years its democratic institutions have suffered erosion, as reflected in rising political polarization and extremism, partisan pressure on the electoral process, bias and dysfunction in the criminal justice system, harmful policies on immigration and asylum seekers, and growing disparities in wealth, economic opportunity, and political influence.

Key Developments, June 1, 2022 – May 31, 2023

  • Some federal and state lawmakers considered restrictions on the short-video platform TikTok, which is owned by the China-based company ByteDance, due to concerns about potential threats to national security and the risk that the Chinese government could access Americans’ personal data. In May 2023, the state of Montana passed a law compelling online app stores to restrict access to TikTok within its territory; the measure was set to go into effect in 2024 and has been challenged in several lawsuits (see B2 and B3).
  • Also in May 2023, the Supreme Court rejected a claim seeking to hold internet platforms liable for terrorist content in Twitter v. Taamneh, and it declined to address questions about Section 230 of the Communications Decency Act in Gonzalez v. Google LLC (see B3).
  • Lawmakers in several states passed legislation requiring companies to limit young people’s access to social media, pornography, or other content labelled as harmful, including through the use of age-verification systems, which raised concerns about anonymity. Separately, the Supreme Court considered whether to accept a petition regarding state-level laws in Florida and Texas that would limit social media companies’ ability to moderate content according to their terms of service and platform policies (see B3 and C4).
  • Ahead of and during the November 2022 midterm elections, the online environment was riddled with false information and conspiracy theories about ballot collection and tallying, as well as egregious harassment aimed at election workers and officials (see B5, B7, and C7).
  • In March 2023, an executive order signed by President Joseph Biden barred federal agencies from the “operational” use of commercial spyware products that pose a threat to national security or counterintelligence, or that could be employed by foreign governments to violate human rights or target people from the United States (see C5).
  • A government privacy watchdog disclosed during the coverage period that agents from the Federal Bureau of Investigation (FBI) had improperly searched Americans’ communications collected under Section 702 of the Foreign Intelligence Surveillance Act (FISA), including those of an unnamed US senator and a state senator, people who joined 2020 racial justice protests, and donors to an unidentified congressional campaign (see C5).

A Obstacles to Access

A1 0-6 pts
Do infrastructural limitations restrict access to the internet or the speed and quality of internet connections?
Score: 6.00 / 6.00

The United States has the third-largest number of internet users in the world,1 but penetration rates and broadband connection speeds are lower than in other economically developed countries.2 The International Telecommunication Union reported a penetration rate of 91.8 percent in 2021.3 The speed-testing company Ookla reported a median US fixed-line broadband download speed of 198.17 Mbps (megabits per second) as of February 2023, ranking the country ninth worldwide.4 The median mobile download speed was 82.27 Mbps, making it the 19th fastest in the world.

Infrastructural problems and severe weather have sometimes undermined internet access for US residents (see A2).5 For example, Hurricane Ian hit Florida in September 2022, causing half a million Americans to lose telecommunications services because of infrastructure damage.6 Outages for several thousand people extended into November of that year.7

Various federal programs have modernized the nation’s telecommunications networks.8 Implementation of the Infrastructure Investment and Jobs Act (IIJA) began in June 2022.9 The 2021 law appropriated $65 billion to broadband expansion efforts and established the Affordable Connectivity Program (ACP) and Broadband Equity, Access, and Deployment (BEAD) Program to increase high-speed internet deployment to unserved and underserved communities.10

A2 0-3 pts
Is access to the internet prohibitively expensive or beyond the reach of certain segments of the population for geographical, social, or other reasons?
Score: 2.00 / 3.00

Older members of the population, those with disabilities or less education, households with lower socioeconomic status, and people living in rural areas or on tribal lands tend to face the most significant barriers to internet access.1 High costs, inadequate infrastructure,2 and limited provider options also impede access (see A4).3

The cost of broadband internet access in the United States exceeds that in many countries with similar penetration rates, creating an “affordability crisis,” according to New America’s Open Technology Institute.4 The National Telecommunications and Information Administration (NTIA) Internet Use Survey, last fielded in 2021, showed that only 50 percent of people with an annual income below $25,000 have both broadband and mobile data plans, compared with 80 percent of those making more than $100,000 per year.5

People living on tribal lands are among the least connected in the country.6 The NTIA reported that only 49 percent of residents on tribal lands had fixed-line home internet service as of 2022.7 Broadband expansion rates lag in these communities compared with other rural areas.8

Older residents use the internet at lower rates than the rest of the population. In 2021, researchers found that about 36 percent of US seniors did not have access to broadband connections at home.9 Black and Hispanic adults report disparities in device use and access to high-speed internet service.10

Increasing broadband access and affordability remains a priority for lawmakers. Between December 2021 and June 2023, the Affordable Connectivity Program (ACP), run by the Federal Communications Commission (FCC), enrolled 18.2 million of the 51.6 million ACP-eligible households, 17.7 million of which remain unconnected.11 The ACP provides a monthly subsidy for internet service of up to $30 for eligible households, and up to $75 for households on tribal lands. Support for the ACP has been widespread, but funding from the initial appropriation was projected to run out at the beginning of 2024.

The FCC’s Lifeline program has also provided long-term assistance to reduce the cost of telecommunications services. In addition, the NTIA announced the award of $25.7 million to two tribal nations in March 2023 as part of the Tribal Broadband Connectivity Program (TBCP), increasing the total grant-making of the program to over $1.35 billion.12

A3 0-6 pts
Does the government exercise technical or legal control over internet infrastructure for the purposes of restricting connectivity?
Score: 6.00 / 6.00

The US government imposes minimal restrictions on the public’s ability to access the internet. Private telecommunications companies own and maintain the backbone infrastructure, and there are multiple connection points to the global internet, making a government-imposed disruption of service highly unlikely and difficult.

Law enforcement agencies have previously limited internet connectivity in emergency situations. In 2011, San Francisco’s Bay Area Rapid Transit (BART) authority restricted mobile internet and telephone service on its train platforms ahead of a planned protest against a fatal shooting by the transit police.1

Standard Operating Procedure 303, approved by a federal task force in 2006, establishes guidelines for wireless network restrictions during a “national crisis.”2 What constitutes a “national crisis,” and what safeguards exist to prevent abuse, remain largely unknown. In 2014, the FCC clarified that it is illegal for state and local law enforcement agencies to jam mobile networks without federal authorization.3

A4 0-6 pts
Are there legal, regulatory, or economic obstacles that restrict the diversity of service providers?
Score: 4.00 / 6.00

The broadband industry in the United States has grown more concentrated over time. An estimated 83 million people have access to only one broadband provider in their area.1 These de facto local monopolies have exacerbated concerns about high cost and accessibility.2

Comcast leads the fixed-line broadband market, providing more than 29.8 million households with internet services.3 Its chief competitor, Charter Communications, serves 28.3 million households.4 Following a decade of consolidation, three national providers—AT&T, Verizon, and T-Mobile—dominate the mobile service market.

Consolidation of the telecommunications sector has undermined consumer protection and choice. In 2019, the US Court of Appeals for the District of Columbia Circuit upheld AT&T’s acquisition of the media and entertainment company Time Warner,5 despite the Justice Department’s challenge to the merger.6 Less than a year later, reports of financial problems at AT&T surfaced, with customers facing price increases.7 Separately, antitrust experts have called for the reversal of a controversial 2019 merger between T-Mobile and Sprint, another mobile service provider.8

Regulations in 16 states undermine the creation and operation of municipal or publicly owned broadband providers, which have the potential to challenge market consolidation, deliver higher-quality and more affordable service, and reach underserved communities, according to research from BroadbandNow.9 The state of Colorado repealed such limitations in May 2023.10 Legislation granting government entities the authority to offer broadband services was passed in Arkansas and Washington State in 2021, and in New York and Maine in 2022.11

A5 0-4 pts
Do national regulatory bodies that oversee service providers and digital technology fail to operate in a free, fair, and independent manner?
Score: 3.00 / 4.00

The FCC is tasked with regulating radio and television broadcasting, interstate communications, and international telecommunications that originate or terminate in the United States. It is formally an independent regulatory agency, but critics on both sides of the political spectrum argue that it has become increasingly politicized in recent years.1

The agency is led by five commissioners nominated by the president and confirmed by the Senate, with no more than three commissioners from one party. Jessica Rosenworcel, a commissioner who was originally nominated by former president Barack Obama, was confirmed as the first woman chair of the FCC in December 2021.

A commission vacancy that dated to early 2021 deprived the FCC of a tie-breaking vote throughout the coverage period, limiting regulatory progress on key internet freedom issues such as net neutrality. In May 2023, President Biden nominated Anna Gomez, a telecommunications lawyer then serving in the State Department, to the seat. The Senate confirmed Gomez in September, after the coverage period.2 Biden’s previous nominee, Gigi Sohn, had withdrawn from the process in March 2023 after her nomination stalled for 17 months amid intense political opposition, during which Sohn faced homophobic smears and partisan criticism of her work on digital rights.3

The FCC manages the ongoing process of broadband mapping, which determines the distribution of federal funding to states through BEAD and other programs.4 FCC maps released in December 2022 were criticized by senators, state governments, and public interest technology groups as inaccurate, potentially limiting the funding that states would receive to expand high-speed internet service in unserved or underserved areas.5

Other government agencies, such as the Department of Commerce’s NTIA, play advisory or executive roles on telecommunications, economic, and technology policies. The IIJA, an infrastructure spending measure adopted in 2021, tasked the NTIA with managing the BEAD program (see A1 and A2).6 The Federal Trade Commission (FTC) is an independent agency that oversees consumer protection and antitrust efforts, including in the technology sector. The Department of Agriculture is also an important source of funding for broadband initiatives and wields significant influence on policy.7

In 2017, the FCC repealed its 2015 Open Internet Order, often referred to as the net neutrality rule, weakening its regulatory authority over internet service providers (ISPs).8 The agency then instituted the Restoring Internet Freedom Order,9 effectively allowing ISPs to speed up, slow down, or restrict the traffic of selected websites or services at will. Civil society and public interest groups argued that these changes disadvantaged consumers in various ways,10 and that the FCC had abandoned its responsibility to protect a free and open internet (see B6).11

B Limits on Content

B1 0-6 pts
Does the state block or filter, or compel service providers to block or filter, internet content, particularly material that is protected by international human rights standards?
Score: 6.00 / 6.00

In general, the government does not force ISPs or content hosts to block or filter online material that would be considered protected speech under international human rights law.

B2 0-4 pts
Do state or nonstate actors employ legal, administrative, or other means to force publishers, content hosts, or digital platforms to delete content, particularly material that is protected by international human rights standards?
Score: 3.00 / 4.00

The government does not directly compel content hosts to censor political or social viewpoints online, though intermediaries can face liability for not restricting certain types of content, such as copyright infringements and child sexual abuse material (CSAM), after becoming aware of it. Broadly speaking, content hosts and social media platforms are the primary decision-makers when it comes to the provision, retention, or moderation of prohibited online content in the United States (see B3).

In June 2021, President Biden rescinded August 2020 orders by former president Donald Trump that would have effectively banned WeChat, a messaging application, and TikTok, the short-video platform, on the grounds that they presented threats to national security; both are owned by China-based companies.1 Federal courts had already blocked implementation of Trump’s orders, citing free speech concerns.2 Biden’s new order directed the Department of Commerce to evaluate the potential national security risks associated with applications that are owned, controlled, or managed by “foreign adversaries.”3 In November 2021, the department released proposed rules that would require third-party audits of such apps;4 the rules remained under review as of May 2023.

The interagency Committee on Foreign Investment in the United States (CFIUS) continued its review of TikTok during the coverage period.5 In March 2023, for example, it was reported that CFIUS had proposed a plan under which ByteDance would divest from TikTok.6

In May 2023, Montana passed a law that would prohibit app store providers from making TikTok accessible to state residents starting in January 2024. State officials cited concerns that the Chinese government could use the app to access Americans’ personal data. The law imposes daily fines of $10,000 on app stores that do not comply.7 Several lawsuits questioning the law’s constitutionality were filed after it was passed, particularly on free-speech grounds; one suit was filed by five TikTok creators and financed by the company.8

Section 230 of the Communications Act, as amended by the Telecommunications Act of 1996—commonly known as Section 230 of the Communications Decency Act—remained a subject of debate among policymakers during the coverage period (see B3). The law shields online providers and content hosts from legal liability for most material created by users, including lawsuits alleging defamation or injurious falsehoods.9 However, there are exceptions to this immunity under federal criminal law, intellectual-property law, laws to combat sex trafficking, and laws protecting the privacy of electronic communications. Judicially recognized exceptions for claims resulting from platforms’ own actions also exist. In July 2022, two judges—one in a case against the Snapchat messaging platform and another in a case against the video chat site Omegle—released conflicting decisions about whether Section 230 protected companies from legal liability as it relates to their product design, in addition to user-generated content.10 Section 230 also ensures legal immunity for social media companies and other content providers that remove content when it violates their terms and conditions of service or their community guidelines.11

The 2018 Allow States and Victims to Fight Online Sex Trafficking Act, also referred to as SESTA/FOSTA, established new liability for internet services when they are used to promote or facilitate the prostitution of another person.12 After the bill passed in the Senate, but before it became law, reports emerged of companies preemptively censoring content: Craigslist announced that it was removing the “personals” section from its website altogether.13 Civil society activists criticized the law for motivating companies to engage in excessive censorship in order to avoid legal action.14 Sex workers and their advocates also argued that the law threatened their safety, since the affected platforms had enabled sex workers to leave exploitive situations, operate independently, communicate with one another, and build protective communities.15 In July 2023, the Court of Appeals for the District of Columbia Circuit rejected a constitutional free-speech challenge to SESTA/FOSTA, but narrowed the law’s scope to protect the speech of sex workers themselves, advocacy-related speech about prostitution in general, and the internet services that provide forums for such speech.16

Section 512 of the Digital Millennium Copyright Act (DMCA), enacted in 1998, created new immunity from copyright claims for online service providers. However, the law’s notice-and-takedown requirements have been criticized for impinging on speech rights,17 as they lack judicial oversight and may incentivize platforms to remove potentially lawful content. Research has shown how DMCA complaints have been filed to take down criticism, commentary, political campaign advertisements, and other speech that should be protected under international free expression standards.18

B3 0-4 pts
Do restrictions on the internet and digital content lack transparency, proportionality to the stated aims, or an independent appeals process?
Score: 4.00 / 4.00

The government places few restrictions on online content, and existing laws do not allow for broad government blocking of websites or removal of content. Companies that host user-generated content, many of which are headquartered in the United States, have faced criticism for a lack of transparency and consistency when it comes to enforcing their own content moderation rules.

Section 230 of the Communications Decency Act generally shields online sites and services from legal liability for the activities of their users, allowing user-generated content to flourish on a variety of platforms (see B2).1 Despite robust legal and cultural support for freedom of speech in the United States, the scope of Section 230 has become a focus of criticism. Concerns about CSAM, defamation, cyberbullying and cyberstalking, terrorist content, and protection of children from harmful or indecent material have contributed to calls for reform of the platforms’ legal immunity for user-generated content, as have complaints that platforms are “over-moderating” certain political viewpoints.

Federal lawmakers have proposed numerous bills that would reform Section 230 and increase intermediaries’ liability for the content they host.2 The draft Eliminating Abusive and Rampant Neglect of Interactive Technologies (EARN IT) Act was reintroduced in Congress in April 2023, drawing fresh backlash from technology experts and civil society organizations.3 The bill has been repeatedly amended, but in some versions it would require that providers adopt “best practices” for detecting and combating CSAM on their platforms, or otherwise risk losing Section 230 protections and being held liable for such content. Critics have warned that, as written, the legislation would incentivize providers to censor excessively and suppress online speech, and could also undermine companies’ use of end-to-end encryption (see C4).4 Civil society groups have raised similar concerns about the Strengthening Transparency and Obligation to Protect Children Suffering from Abuse and Mistreatment (STOP CSAM) Act, an amended version of which was moved out of committee in May 2023; it would also establish a Section 230 carveout for platforms that fail to take measures to counter CSAM.5

The draft Platform Accountability and Consumer Transparency (PACT) Act, initially introduced in 2020 and reintroduced in February 2023 with a few changes,6 would require online platforms to provide expanded explanations of their content moderation practices and force them to adhere to court-mandated takedown orders.7 While the bill received recognition from some observers as a “serious” attempt to address problems with content moderation, civil society groups, industry representatives, and scholars have raised free-speech concerns, warned that the legislation’s takedown provision could be used for censorship, and noted that smaller platforms might lack the resources to remain in compliance.8

In May 2023, the Supreme Court issued two rulings on cases relating to platform liability for content posted by users.9 Gonzalez v. Google LLC raised questions about the scope of Section 230 in relation to algorithmically recommended content, which the court ultimately did not resolve.10 The ruling in Twitter v. Taamneh held that Twitter could not be held liable for aiding and abetting terrorist groups under a federal antiterrorism law.11

In September 2022, the Biden administration called for reforms to Section 230 for large tech platforms.12 In May 2021, the administration had rescinded an executive order by former president Trump that was meant to limit protections for platforms.

There were multiple federal efforts to restrict access to TikTok during the coverage period. In March 2023, a bipartisan group of senators introduced the Restricting the Emergence of Security Threats that Risk Information and Communications Technology (RESTRICT) Act, which was subsequently endorsed by the White House.13 The bill would empower the secretary of commerce to “identify, deter, disrupt, prevent, prohibit, investigate, or otherwise mitigate” the national security risks posed by the ownership of technology products by “foreign adversaries.” The bill designates China, Cuba, Iran, North Korea, Russia, and Venezuela as foreign adversaries and allows the secretary to add or remove countries, subject to congressional review. The RESTRICT Act would also authorize the government to “compel divestment” of a foreign adversary–owned technology company.14 The RESTRICT Act sparked significant pushback from civil society groups due to concerns about violations of the constitution’s First Amendment free-speech clause and the risk of setting a precedent for government censorship.15 Other bills introduced in early 2023—like the Deterring America’s Technological Adversaries (DATA) Act, the Averting the National Threat of Internet Surveillance, Oppressive Censorship and Influence, and Algorithmic Learning by the Chinese Communist Party (ANTI-SOCIAL CCP) Act, and the No Funds for Enablers of Adversarial Propaganda Act—also sought to limit access to TikTok in the United States.16

A bipartisan group of senators reintroduced the Kids Online Safety Act (KOSA) in May 2023. It would establish a wide range of obligations for social media platforms relating to children, including disclosures around targeted advertising and standardized parental controls.17 Several civil society organizations have raised concerns that the law’s authorization of state attorneys general to enforce a “duty of care” standard on purported harms to children could be abused to restrict access to online content related to reproductive health care and the rights of LGBT+ people.18

Citing concerns about child safety online, lawmakers in several states—California in September 2022,19 Arkansas in April 2023,20 Utah in May 2023,21 and Texas and Louisiana in June 202322—passed measures that require companies to limit young people’s access to social media without parental consent.23 Civil society advocates warned that the laws, which require platforms to impose age-verification restrictions, would compromise people’s privacy (see C4), and that parental oversight provisions could limit young people’s access to helpful information that their parents may not support, such as information about the LGBT+ community.24

Separately, legislators in a number of states—including Arkansas in April 2023,25 Utah and Virginia in May 2023,26 and Texas in June 202327 —passed restrictions specifically aimed at preventing minors from accessing pornography or other content labelled as harmful. The laws require or incentivize companies to implement age verification measures to limit such access (see C4). Adult website operator MindGeek has blocked access to its websites for all users in Utah, Arkansas, Virginia, and other states in response to the laws.28

Lawmakers in several other states—including Florida, Texas, Ohio,29 Kentucky, Arizona, and North Dakota30—have proposed or passed their own bills to regulate social media companies’ content moderation practices. Critics have argued that various laws that do not explicitly restrict content nonetheless do so in effect. In February 2023, a New York court suspended a state law passed in June 2022 that sought to require online platforms to create a system for users to report “hateful conduct” and publish their policies for responding to such reports; the court ruled that the law effectively compelled speech.31

In January 2023, the Supreme Court requested the US solicitor general’s opinion on appeals regarding the Florida and Texas laws that limit content moderation.32 The court agreed to hear the case in September, after the coverage period.33 In May 2022, the US Court of Appeals for the 11th Circuit had struck down most of Florida’s law, which threatened platforms with large fines if they failed to carry the vast majority of content posted by political candidates or broadly defined “journalistic” organizations.34 The appellate court held that companies’ content moderation practices amount to speech protected under the First Amendment of the US constitution.35 In September 2022, however, the US Court of Appeals for the Fifth Circuit upheld the Texas law, which allows Texans to sue social media platforms with over 50 million active users in the United States for allegedly moderating content in a discriminatory manner based on “the viewpoint” of a user. Many legal experts, industry groups, and civil society organizations condemned the Fifth Circuit court’s ruling as inconsistent with Supreme Court precedent.36

Following the Supreme Court’s June 2022 decision in Dobbs v. Jackson Women’s Health Organization to overturn a 1973 precedent and rule that the constitution did not guarantee a right to abortion, state lawmakers in Texas introduced a bill that would establish procedures for citizens to file civil court cases against social media platforms and ISPs hosting abortion-related content. The bill, introduced in March 2023, would incentivize ISPs and content hosts to restrict access to content that could facilitate abortions in order to avoid civil liability under the law.37 Lawmakers in Iowa included a similar provision in an antiabortion bill introduced in February 2023.38

Several news outlets reported that in the wake of the Dobbs decision, Facebook and Instagram had removed posts that discussed abortion pills, including general information on how to legally obtain the medication through the mail as well as offers from users to provide the pills to people who live in states with restrictive abortion laws.39 Meta, the two platforms’ parent company, acknowledged incorrect enforcement of its policies in June 2022,40 while NBC News reported in the same month that Instagram had also limited search results for posts that included the terms or hashtags “abortion pills” and “mifepristone,” a common abortion medication.41 WIRED reported in June 2023 that TikTok similarly removed videos sharing information about abortion pills.42

With the exception of the Fifth Circuit ruling upholding Texas’s law, tech companies have successfully argued that moderation decisions are an exercise of their own constitutionally protected right to set platform policies, allowing them to remove content and accounts that violate their rules. Social media platforms reversed their suspensions of former president Trump’s accounts during the coverage period, including Twitter in December 2022,43 Facebook and Instagram in January 2023,44 and YouTube in March 2023.45 The firms restricted the accounts in January 2021 after Trump repeatedly violated platform policies by posting baseless claims about mail-in ballots and voter fraud,46 among other infractions.47

Government efforts to influence platforms on content moderation have drawn legal scrutiny. In July 2023, after the coverage period, a federal judge in Louisiana issued a preliminary injunction that prohibited the Biden administration from contacting social media companies to request “the removal, deletion, suppression, or reduction of content containing protected free speech,” with exceptions relating to illegal activity and national security. The injunction, which was issued as part of a First Amendment case filed by the state attorneys general of Missouri and Louisiana in response to the federal government’s efforts to counter false and misleading information during the 2020 elections and the COVID-19 pandemic, also prohibited government communications with three academic research groups: the Election Integrity Partnership, the Virality Project, and the Stanford Internet Observatory.48 In September 2023, a Fifth Circuit panel upheld a narrowed version of the injunction, applying it to a short list of government agencies and to any efforts to “coerce or significantly encourage” specific content moderation actions.49

Facebook, Twitter, YouTube, and other major platforms have faced criticism for insufficient transparency regarding the enforcement of their respective community standards, as well as for the effects of this enforcement on marginalized populations.50 A number of studies and independent audits have identified cases of racial, gender, and other forms of discrimination in the platforms’ content moderation and advertising policies that affected the speech of people in the United States.51 Twitter faced particular criticism for moderation and policy decisions that were made after the company was acquired by tech investor and entrepreneur Elon Musk in October 2022, such as the suspension of several journalists’ accounts in response to their reporting on Musk.52

Throughout late 2022 and 2023, Musk facilitated the disclosure of internal company communications—which he dubbed “the Twitter Files”—to a network of journalists and commentators. The disclosures largely detailed Twitter’s past content moderation practices and interactions with the US government, such as requests from government agencies and political figures to remove false information or abusive content.53 Critics charged that the disclosures were selective and politically motivated.54

Companies that serve as providers of internet infrastructure enforce their own discretionary speech policies. In September 2022, the web-security and content-delivery firm Cloudflare stopped providing services to Kiwifarms, an online forum that had facilitated egregious harassment and contributed to offline harms including suicide.55

B4 0-4 pts
Do online journalists, commentators, and ordinary users practice self-censorship?
Score: 3.00 / 4.00

Reports of self-censorship among journalists, commentators, and ordinary internet users are not pervasive in the United States. Women, LGBT+ people, and members of other marginalized communities are frequent targets of online harassment and abuse, which can encourage self-censorship (see C7). Government surveillance practices may also contribute to self-censorship.1

B5 0-4 pts
Are online sources of information controlled or manipulated by the government or other powerful actors to advance a particular political interest?
Score: 2.00 / 4.00

False, manipulated, and misleading information is disseminated by both foreign and domestic entities in the United States. While sources from across the political spectrum deliberately spread such information,1 multiple academic studies and civil society research have shown that the tactic is disproportionately utilized by powerful figures on the right.2

Political actors spread false and misleading information about voting, electoral administration, and electoral integrity ahead of, during, and after the November 2022 midterm elections (see B7).3 Specifically, false narratives coalesced around electronic voting machines, counting procedures, vote-by-mail procedures, voting locations, and voting requirements. For example, according to the Washington Post, more than 100 Republican Party nominees for Congress or statewide office embraced false narratives about the 2020 presidential election result ahead of the November 2022 midterm elections.4 The nonpartisan Election Integrity Partnership (EIP) found that far-right social media influencers spread false information about ballot collection and tallying in key races on and after election day.5

The EIP reported that misleading or false claims regarding the 2020 presidential vote contributed to a single, larger metanarrative about a “stolen election.”6 Researchers determined that the false electoral narratives were primarily spread by right-wing social media influencers; hyperpartisan and fringe media outlets; right-leaning mainstream media outlets; and political figures, including former president Trump and his family members.7 The EIP similarly concluded that the surge in baseless allegations of electoral fraud online helped to propel the assault on the Capitol in Washington, DC, on January 6, 2021.8 The House of Representatives’ bipartisan Select Committee to Investigate the January 6th Attack on the US Capitol, launched in June 2021, also investigated false and misleading information spread online by former president Trump and his allies, reaching conclusions similar to those of the EIP.9

False and misleading information about the war in Ukraine following the Russian government’s February 2022 invasion continued to permeate the online space during the coverage period. Narratives emanating from Russian progovernment sources have been shared by some users in the United States, including prominent media and political figures.10 In April 2023, federal authorities charged four US citizens affiliated with the socialist Uhuru Movement for allegedly disseminating pro-Kremlin propaganda, including online and regarding the war in Ukraine; authorities also charged three Russian nationals.11 Researchers found that pro-Kremlin disinformation has increasingly spread on smaller platforms like Parler, Rumble, and Gab.12

Political actors have spread manipulated information about COVID-19 since the outbreak of the pandemic in early 2020.13 For instance, analysis from NBC News found that media featuring Democratic presidential candidate Robert F. Kennedy Jr. sharing false or misleading information about COVID-19 had spread across YouTube, Twitter, Spotify, and Rumble.14 Previously, reports from the Center for Countering Digital Hate and the Virality Project, a coalition of experts led by the Stanford Internet Observatory, found that the main spreaders of false or misleading information related to COVID-19 in 2021 were antivaccine and wellness influencers, popular conspiracy theorist accounts, right-leaning political figures, and “media freedom” influencers.15

US officials and experts have been particularly concerned about influence operations carried out by actors based in Russia, China, and Iran, including ahead of the November 2022 midterm elections and the 2024 general elections.16 Twitter identified six networks with links to China and Iran that were active in posting content about the midterms; the EIP found that the networks sought to amplify polarizing content aimed at right- and left-wing audiences.17 In August 2023, after the coverage period, Meta disclosed that it had removed a large network of accounts associated with a cross-platform campaign to spread positive commentary about China and criticism of the United States and perceived opponents of the Chinese government. Meta linked the network, which targeted audiences in the United States among many other regions, to individuals associated with Chinese law enforcement agencies. Meta also disclosed that an influence operation seeking to undermine support for Ukraine had extended its reach to US audiences in early 2023. The operation, which was attributed to organizations linked to the Russian government, spoofed the websites of Fox News and the Washington Post to promote articles criticizing Ukrainian president Volodymyr Zelenskyy and US policy on Ukraine, and disseminated the articles on social media platforms.18

Online news outlets in the United States are generally free from either formal arrangements or coercive mechanisms that compel them to provide favorable coverage of the government. Yet political and economic factors can sometimes intersect to incentivize a close relationship between a political party and a given news organization.19

Hyperpartisan news sites have played a central role in spreading false, misleading, and conspiratorial information to US audiences. For instance, analysis from the Institute for Strategic Dialogue found that the Gateway Pundit, a far-right blog, published 65 articles in the month after the 2022 elections that featured defeated Arizona Republican gubernatorial candidate Kari Lake’s conspiracist claims about election fraud, drawing election denialists’ attention to the Arizona balloting.20

B6 0-3 pts
Are there economic or regulatory constraints that negatively affect users’ ability to publish content online?
Score: 3.00 / 3.00

There are no government-imposed economic or regulatory constraints on internet users’ ability to publish content. Online outlets and blogs generally do not need to register with, or have favorable connections to, the government to operate. Media sites can accept advertising from both domestic and foreign sources.

The Foreign Agents Registration Act (FARA) does not entail any direct restrictions on an outlet’s content or the ability to publish online, but it does require those that qualify as foreign agents to disclose their organizational structures and finances. US federal agencies have identified certain Chinese and Russian state media companies as “foreign missions” or “foreign agents,” and both designations come with certain reporting requirements and other limitations.1 In August 2021, the Justice Department required Sing Tao, a Hong Kong newspaper known for its pro-Beijing stance, to register as a foreign agent.2

Experts argue that the FCC’s 2017 repeal of the 2015 Open Internet Order could result in new constraints for those wishing to publish online (see A5).3 Under President Biden, proponents of net neutrality have been guardedly optimistic about the principle’s potential revival. In July 2021, Biden signed an executive order that contained several directives to develop stronger regulations related to net neutrality.4

Since 2018, numerous state legislatures, attorneys general, and civil society groups have also sought to restore net neutrality.5 In October 2019, a federal appeals court upheld the FCC’s repeal of the Open Internet Order,6 but it also ruled that the commission could not preemptively block states from enacting their own laws to safeguard net neutrality. Several states have successfully adopted laws or executive orders to that end.7

B7 0-4 pts
Does the online information landscape lack diversity and reliability?
Score: 3.00 / 4.00

As a whole, the online environment in the United States is dynamic and diverse. People can easily find and publish content on a range of issues, covering a variety of communities, and in multiple languages. However, an upswelling of misinformation, hyperpartisan speech, and conspiracist content has threatened the information ecosystem in recent years, weakening trust in traditional media and government institutions and eroding the visibility and readership of more credible sources. Reports have also explored the ways in which the policies and algorithms of major platforms—including Facebook, Twitter, YouTube, and TikTok—have contributed to the promotion of misinformation.1

The integrity and reliability of online information has been undermined by the spread of electoral disinformation surrounding both the 2020 and 2022 election cycles (see B5).2 False and misleading information has driven calls to change how elections are administered at the state and local levels. Two election officials in Cochise County, Arizona, refused to certify election results after the 2022 midterms, referencing false claims about voting machines and, separately, false allegations of electoral fraud in Arizona’s Maricopa County that had circulated widely among election denialists online. The Cochise County officials eventually certified the results in December 2022 after a federal judge intervened.3

Research efforts have drawn the connection between online misinformation and weakening public confidence in US elections, as well as weakening trust in government more broadly.4 According to an Associated Press–NORC Center for Public Affairs Research poll published in July 2023, only 22 percent of Republicans believe that ballots will be accurately tallied in the 2024 presidential election; overall, the poll found that only 44 percent of Americans have “a great deal” or “quite a bit” of confidence in accurate ballot counts in the next election.5 A poll by the Pearson Institute and the Associated Press–NORC Center for Public Affairs Research found in October 2022 that 73 percent of adults believe misinformation increases extreme political views, 77 percent of respondents say misinformation increases hate crimes, and half of adults say misinformation reduces trust in government.6 An investigation by the website FiveThirtyEight found that election deniers would be listed as candidates on 60 percent of ballots for the 2022 midterms.7

Independent researchers' work on electoral and other misinformation has been hampered by allegations of political bias in platforms’ content moderation practices (see B3). A September 2023 report from the Center for Democracy and Technology identified a “chilling effect” among misinformation researchers that resulted from challenges such as targeted harassment campaigns and politicized investigations by a small number of policymakers.8 For example, in March 2023, Representative Jim Jordan sent letters to researchers at several universities requesting they provide documents about their participation in a purported “censorship regime.”9

The rise of conspiracist content online in the United States is a multiyear trend.10 According to a 2022 poll from the Public Religion Research Institute, nearly one in five Americans believe in QAnon—an online conspiracist movement alleging that key Democrats and other elites are part of an international cabal of pedophiles, and that former president Trump is a heroic leader against these forces of evil.11 Conspiracist content was linked to offline violence during the coverage period: for example, the perpetrator of an October 2022 assault on Paul Pelosi, husband of then House of Representatives speaker Nancy Pelosi, was a proponent of QAnon-linked conspiracy theories.12

B8 0-6 pts
Do conditions impede users’ ability to mobilize, form communities, and campaign, particularly on political and social issues?
Score: 6.00 / 6.00

There are no technical or legal restrictions on individuals’ use of digital tools to organize or mobilize for civic activism. However, surveillance of social media and communication platforms, targeted harassment and threats, and high costs and other barriers to internet access have sometimes undermined people’s ability to engage in online activism.

After the Supreme Court’s Dobbs decision was released in June 2022, many users took to social media to share their personal experiences with abortion and pregnancy, to express their opposition to or support for the decision, and to coordinate civic mobilization.1 During protests against the decision that erupted in major cities across the country, some online journalists reported facing police violence (see C7).

Throughout 2020 and 2021, many Americans organized online protests against racial injustice and to provide support for the Black Lives Matter movement after the police killings of Black civilians Breonna Taylor in Kentucky and George Floyd in Minnesota in 2020.2 Federal, state, and local law enforcement agencies increased their social media surveillance amid the protests.3 In April 2023, the Minneapolis chapter of the National Association for the Advancement of Colored People (NAACP) filed a lawsuit alleging that the Minneapolis Police Department’s social media surveillance practices during the protests had violated the constitutional rights of Black activists.4

Despite strong constitutional protections for the freedom to assemble, the International Center for Not-for-Profit Law has tracked numerous federal and state initiatives aimed at restricting that right from early 2017 to the end of the coverage period, including one Wisconsin legislative proposal, reintroduced in February 2023, that would broadly define incitement to riot and could criminalize legitimate online activity.5

C Violations of User Rights

C1 0-6 pts
Do the constitution or other laws fail to protect rights such as freedom of expression, access to information, and press freedom, including on the internet, and are they enforced by a judiciary that lacks independence?
Score: 6.00 / 6.00

The First Amendment of the federal constitution includes protections for free speech and freedom of the press. The Supreme Court has long maintained that online speech has the highest level of constitutional protection.1

In June 2021, the Supreme Court ruled in favor of a high school student who was suspended after posting, while not on school grounds, an image on Snapchat that used vulgarities to express frustration with her school and its cheerleading squad.2 The nearly unanimous decision found that the student’s speech was protected under the First Amendment, but the justices acknowledged some leeway for schools to regulate speech when it is genuinely disruptive in order to deal with bullying and related problems.3

A 2017 Supreme Court decision had also reaffirmed the protected status of online speech, arguing that to limit a person’s access to social media “is to prevent the user from engaging in the legitimate exercise of First Amendment rights.”4

In February 2023, the US Court of Appeals for the Fourth Circuit ruled that civilians live-streaming police activity were engaging in protected speech (see C3).5 In 2017, other federal courts had upheld the right of bystanders to use their smartphones to record police actions.

In April 2023, the Supreme Court agreed to hear cases on the application of the First Amendment to situations in which social media users are blocked from commenting on the personal social media pages of government officials when those pages are used to communicate about government-related duties.6 The cases would be heard during the court’s 2023–24 term.7

C2 0-4 pts
Are there laws that assign criminal penalties or civil liability for online activities, particularly those that are protected under international human rights standards?
Score: 2.00 / 4.00

Despite significant constitutional safeguards, laws such as the Computer Fraud and Abuse Act (CFAA) of 1986 have sometimes been used to prosecute online activity and impose harsh punishments. State-level laws also penalize online activity.

Aggressive prosecution under the CFAA has fueled criticism of the law’s scope and application. The act prohibits accessing a computer without authorization, but fails to define the terms “access” or “without authorization,” leaving the provision open to interpretation in the courts.1 Until recently, reform efforts were largely unsuccessful.2 In April 2020, however, a court narrowed the scope of the CFAA by ruling in favor of researchers who were concerned that their work, which involved scraping data from websites, ran afoul of the law.3 The bipartisan draft Platform Accountability and Transparency Act, introduced in June 2023, would protect researchers from CFAA claims, among other reforms to increase researcher access to platform data.4

In June 2021, the Supreme Court further limited the application of the CFAA and clarified the meaning of “unauthorized access.”5 The case, Van Buren v. United States, involved the conviction of a police officer who had accessed police databases for unofficial purposes.6 Following the decision, in April 2022, the US Court of Appeals for the Ninth Circuit ruled in hiQ v. LinkedIn that the CFAA likely does not bar people from scraping data from a public website, even if the website owner does not consent.7

Certain states have criminal defamation laws in place, with penalties ranging from fines to imprisonment.8 Among other state-level restrictions, Arizona governor Doug Ducey signed a law in July 2022 that made it a misdemeanor offense to film police from less than eight feet away following a verbal warning.9 A federal judge issued a preliminary injunction stopping enforcement of the law on First Amendment grounds in September 2022; the Arizona legislature subsequently declined to defend the law, letting it lapse.10

C3 0-6 pts
Are individuals penalized for online activities, particularly those that are protected under international human rights standards?
Score: 4.00 / 6.00

Prosecutions or detentions for online activities are neither frequent nor systematic. However, local police have investigated, arrested, and charged users for some actions. For instance, people using their mobile devices or social media accounts to document law enforcement activity have been temporarily detained; most face charges such as obstruction or resisting arrest. Due to strong legal protections for free expression, such cases are often dropped by prosecutors.

Several cases in which people were arrested in relation to their online activities proceeded through the courts during the coverage period. In March 2023, New Hampshire resident Robert Frese filed a petition with the Supreme Court in a civil suit over the state’s criminal defamation law, under which Frese had been arrested in 2018 for making disparaging online comments about a local police officer.1 In February 2023, the Supreme Court declined to hear an appeal in a lawsuit brought by Facebook user Anthony Novak, who argued that his 2016 arrest, for creating a page that parodied the local police department, had violated constitutional protections against censorship and unreasonable searches and seizures. A federal judge and an appellate court both ruled that the police officers’ conduct was protected under the controversial qualified immunity standard.2

Online journalists have been investigated, arrested, or charged while covering protests. During protests in support of the right to abortion following the Supreme Court’s Dobbs decision in June 2022, a few online journalists were temporarily detained by police, including a correspondent for the conservative news site El American.3 In April 2022, during the previous coverage period, an independent online photojournalist in Los Angeles was arrested and charged with ignoring police orders while documenting protests against a fatal police shooting.4

Online news outlets face other limits on their work. In October 2022, authorities in Pike County, Ohio, filed wiretapping charges against Derek Myers, editor in chief of the Scioto Valley Guardian news site. The Guardian had published the contents of a leaked audio recording of a murder trial in an article written by Myers; Myers was not present in the courtroom and did not record the audio. He was released on bail. Law enforcement also confiscated equipment belonging to Myers and the outlet, reportedly under an expired warrant.5 In April 2023, a magistrate convicted journalists Matilda Bliss and Veronica Coit of the news site Asheville Blade of trespassing; they had been arrested in December 2021 while covering police evictions of homeless people from an encampment in Asheville, North Carolina. Bliss and Coit appealed to a jury trial, which affirmed the conviction in June.6 The Wausau Pilot & Review, a Wisconsin news site, reported that it incurred prohibitive costs in defending against a defamation suit filed by a local businessman who objected to the outlet’s allegations that he had used a homophobic slur against a 13-year-old boy. A judge threw out the suit in April 2023.7

In May 2023, federal agents raided the home of Timothy Burke, a freelance online journalist, and seized his equipment. Subsequent disclosures indicated that the government planned to prosecute Burke for violating the CFAA and federal wiretapping laws; he had reportedly obtained unaired Fox News clips and shared them with the news outlet Vice News and the civil society organization Media Matters for America, but he argued that no hacking was involved.8

At times, officials have attempted to use legal cases to identify anonymous critics on the internet. In January 2023, the city of Beachwood, Ohio, sought to use a defamation lawsuit to identify an anonymous online critic; a judge dismissed the lawsuit in April 2023 on First Amendment grounds.9

C4 0-4 pts
Does the government place restrictions on anonymous communication or encryption?
Score: 3.00 / 4.00

There are no federal laws restricting anonymity on the internet, as the constitution protects the right to anonymous speech in many contexts. At least one state law that stipulates journalists’ right to withhold the identities of anonymous sources has been found to apply to bloggers.1

Online anonymity has been challenged in cases involving hate speech, defamation, and libel. In 2015, a Virginia court tried to compel the customer-review platform Yelp to reveal the identities of anonymous users, but the state’s Supreme Court ruled that Virginia courts lacked the authority to compel the out-of-state company to do so.2 In 2019, a federal court ruled that Reddit did not need to reveal the identity of one of its users to a plaintiff who was suing for copyright infringement.3

Laws at the state level have also weakened online anonymity. For instance, child safety laws passed during or after the coverage period—including in California in September 2022,4 Arkansas in April 2023,5 Utah in May 2023,6 and Texas in June 20237—would require covered platforms to employ age-verification technology to ensure that young people do not access their services without parental consent; the Arkansas law specifically mandates the use of third-party age-verification services.8 Civil society organizations have criticized such measures, which generally require people to provide a government-issued identity document, for undermining anonymity and exposing private information to potential misuse or theft.9

No legal limitations apply to the use of encryption, but both the executive and legislative branches have at times moved to challenge the technology.10 In 2020, the Justice Department issued a joint statement with the governments of the United Kingdom, Australia, New Zealand, Canada, India, and Japan, calling on Facebook and other tech companies to help enable government access to encrypted messages.11

The proposed EARN IT Act was reintroduced in Congress in April 2023 despite strong opposition from civil society organizations, which argued that the bill threatened the privacy and security of all users by discouraging end-to-end encryption.12 Similarly, civil society organizations have warned that the draft STOP CSAM Act, introduced in April 2023 and amended in May, would incentivize platforms to avoid end-to-end encryption by expanding the scope for civil liability (see B3).13

The degree to which courts can force technology companies to alter their products and enable government access is unclear. The Communications Assistance for Law Enforcement Act (CALEA) of 1994 requires telephone companies, broadband providers, and interconnected Voice over Internet Protocol (VoIP) providers to design their systems so that communications can be easily intercepted when government agencies have legal authority to do so, although it does not cover online communication tools such as Gmail, Skype, and Facebook.14

Federal law enforcement agencies sought to compel Apple to unlock the encrypted smartphones of alleged perpetrators following a terrorist attack in San Bernardino, California, in 2015,15 and an attack on a Navy facility in Florida in 2019.16 In both cases Apple resisted, and agents gained access by other means. A federal judge ruled in 2016 that CALEA did not allow the government to compel Apple to unlock an iPhone.17

C5 0-6 pts
Does state surveillance of internet activities infringe on users’ right to privacy? 2 6

The legal framework for government surveillance in the United States is open to abuse, and authorities have engaged in certain forms of monitoring, particularly on social media, with minimal oversight or transparency. The government’s search and seizure powers are generally limited by the constitution’s Fourth Amendment.

Laws governing foreign intelligence surveillance have in practice permitted the collection of data on US citizens and residents. Such surveillance is regulated in part by the USA PATRIOT Act, which was passed following the terrorist attacks of September 11, 2001.1 In 2015, then president Obama signed the USA FREEDOM Act, which extended expiring provisions of the PATRIOT Act, including broad authority for intelligence officials to obtain warrants for roving wiretaps of unnamed “John Doe” targets and surveillance of lone individuals with no evident connection to terrorist groups or foreign powers.2 At the same time, the new legislation was meant to end the government’s bulk collection of domestic call detail records (CDRs)—the metadata associated with telephone interactions—under Section 215 of the 2001 law. The bulk collection program, detailed in documents leaked by former National Security Agency (NSA) contractor Edward Snowden in 2013,3 had been ruled illegal by the US Second Circuit Court of Appeals earlier in 2015.4 Despite that year’s reforms, mass collection of CDRs reportedly continued, and the NSA recommended that Section 215 be allowed to expire, which it did in 2020.5 However, a “savings clause” allowed officials to continue using the authority for investigations that had begun before the expiration, or for new examinations of incidents that occurred before that date.6

Under the USA FREEDOM Act, the NSA—which focuses on foreign intelligence collection—is permitted to access US call records held by phone companies after obtaining an order from the Foreign Intelligence Surveillance Court, also called the FISA Court in reference to the 1978 Foreign Intelligence Surveillance Act.7 Requests for such access require use of a “specific selection term” (SST) representing an “individual, account, or personal device” that is suspected of being associated with a foreign power or international terrorist activity;8 this mechanism is intended to prevent broad requests for records based on an area code or other imprecise indicators. The definitions of SSTs vary, however, depending on the authority used, and civil liberties advocates have criticized them as excessively broad.9

The USA FREEDOM Act requires the FISA Court to appoint an amicus curiae in any case that “presents a novel or significant interpretation of the law,” so that judges are not forced to rely on the arguments of the government alone in weighing requests. However, the court can waive the requirement at its discretion. The panel of amici curiae includes experts on privacy, civil liberties, and communications technology.10 Five people are currently designated to serve.11

Other components of the US legal framework allow surveillance by intelligence agencies, but often without adequate oversight, specificity, and transparency. Section 702, adopted in 2008 as part of the FISA Amendments Act, authorizes the NSA, acting inside the United States, to collect the communications of any foreigner overseas as long as a significant purpose of the collection is to obtain “foreign intelligence,” a term broadly defined to include any information that “relates to … the conduct of the foreign affairs of the United States.”12 Section 702 surveillance involves both “downstream” collection, in which stored communications data—including content—are obtained from US technology companies, and “upstream” collection, in which the NSA collects users’ communications as they are in transit over the internet backbone.13 Although Section 702 only authorizes the collection of information pertaining to foreign citizens outside the United States, the communications of US citizens and residents are inevitably swept up in this process in large amounts, and these too are stored in a searchable database.14 Under a 2018 reauthorization of Section 702, FBI agents must obtain a warrant to review the content of communications belonging to an American who is already the subject of a criminal investigation; the warrant requirement was so narrow as to exclude the majority of queries.15 The reauthorization also imposed additional transparency measures relating to the authority.16

Section 702 was scheduled to expire on December 31, 2023. The House Intelligence Committee established a bipartisan working group on Section 702 in March 2023,17 but it had not publicly disclosed its recommendations as of August 2023. The White House expressed support for a renewal of the authority.18

The Section 702 protocols intended to limit official access to the communications of US citizens and residents are frequently violated. In the period from December 2021 to November 2022, FBI agents queried the Section 702 database for information about US citizens and residents more than 200,000 times, according to government disclosures.19 In July 2023, the FISA Court released an opinion indicating that officials had improperly searched the Section 702 database using the last names of an unidentified US senator and a state senator in June 2022, and the social security number of a state judge in October 2022.20 In May 2023, the FISA Court released an opinion documenting thousands of improper searches of communications data by the FBI, including noncompliant queries for the communications of people who joined the 2020 racial justice protests, participants in the January 2021 attack on the Capitol, and 19,000 donors to an unidentified congressional campaign.21 Republican congressman Darin LaHood reported in March 2023 that he had been targeted by improper searches of the Section 702 database in 2020 or earlier.22

Previously, in October 2019, the FISA Court released three opinions in which it found that the communications data of tens of thousands of US citizens and residents had been subjected to improper searches by the FBI.23 The court also determined that the FBI had violated the law by not reporting the number of times it conducted “US person queries.”24 A subset of these violations have been linked to the NSA’s collection of communications when they merely mentioned information relating to a foreign surveillance target (referred to as “about” collection), which the agency halted in 2017.25

Under Title I of FISA,26 the Justice Department may obtain a court order to conduct surveillance of Americans or foreigners inside the United States if it can show probable cause to suspect that the target is a foreign power or an agent of a foreign power. In March 2020, the department’s inspector general released a memorandum documenting pervasive errors in previous FISA applications, along with a failure to abide by internal procedures meant to ensure their accuracy.27

Originally issued in 1981, Executive Order (EO) 12333 is the primary authority under which US intelligence agencies gather foreign intelligence; essentially, it governs all such collection that is not governed by FISA, and it includes most collection that takes place overseas. The extent of current NSA practices authorized under EO 12333 is unclear, and they potentially overlap with activities conducted under other surveillance authorities.28 Although EO 12333 cannot be used to target a “particular, known” US person, the very fact that bulk collection is permissible under the order ensures that the communications of US citizens and residents will be incidentally collected, and likely in very significant quantities. Moreover, questions linger as to whether the government relies on EO 12333 to conduct any surveillance inside the United States that would not be subject to judicial oversight.29 A letter from two senators that was made public in February 2022 revealed that the Central Intelligence Agency (CIA) had secretly conducted bulk data collection under EO 12333 in a manner that swept in information about US persons, without congressional oversight. Senators Ron Wyden and Martin Heinrich have called for more transparency regarding the kinds of records that are stored and the legal framework under which they were collected.30

In criminal probes, law enforcement authorities can monitor the content of internet communications in real time only if they have obtained an order issued by a judge, under a standard that is somewhat higher than the one established under the constitution for searches of physical places. The order must reflect a finding that there is probable cause to believe a crime has been, is being, or is about to be committed.

For law enforcement purposes, as opposed to intelligence gathering, access to metadata generally requires only a subpoena, which a prosecutor or investigator can issue without judicial approval.31 Judicial warrants are required only in California, under the California Electronic Communications Privacy Act (CalECPA).32

According to one ruling in federal court, law enforcement officials must obtain a judicial warrant to access stored communications.33 However, the 1986 Electronic Communications Privacy Act (ECPA) states that the government can obtain access to email or other documents stored in the cloud with a subpoena, subject to certain conditions.34

Federal authorities claim to have much greater leeway to conduct searches without a warrant in “border zones”—defined as up to 100 miles from any land or sea border, an area encompassing about 200 million residents.35 Under Directive No. 3340-049a of 2018, US Customs and Border Protection (CBP) asserts broad powers to conduct device searches and claims the authority to require travelers to provide their device passwords to CBP personnel.36 Courts remain split on the legality of the searches, however.37 In May 2023, a federal court in New York ruled in favor of a warrant requirement for manual device searches at the border,38 whereas in February 2021, a federal appeals court in Boston had found the practice constitutional.39

CBP reported over 41,000 electronic device searches during the coverage period.40 In September 2022, the Washington Post reported on a letter Senator Wyden sent to CBP that revealed how information collected through these searches—including contact lists, call logs, photos, and messages—is collated into a searchable database called the Automated Targeting System and made accessible to CBP personnel without a warrant.41

There have been concerns about federal, state, and local government agencies’ use of more targeted surveillance tools. To limit the use of spyware in the United States, the Biden administration issued an executive order in March 2023 that bars federal agencies from the “operational” use of commercial spyware products that could be employed by foreign governments to violate human rights or target people from the United States, or otherwise present national security risks.42 Prior to that order, the New York Times reported in December 2022 that the Drug Enforcement Administration had purchased and deployed Graphite, a spyware tool produced by the Israeli company Paragon, in its foreign operations, though it was not clear whether Graphite was used to target Americans.43 In January 2022, a New York Times investigation revealed that the FBI had purchased and tested Pegasus spyware, a notorious surveillance product developed by the Israeli firm NSO Group, though there was no evidence that the tool had been deployed against people in the United States.44

Several government entities, including the Department of Homeland Security (DHS), have purchased extraction technology from companies like the Israeli firm Cellebrite that allows officials to extract information stored on a device or online within seconds.45 An October 2020 report from the nonprofit Upturn revealed that more than 2,000 state and local law enforcement agencies also had such technology.46 In February 2022, the Intercept reported that all but one of the 15 US cabinet departments had Cellebrite products, including departments and agencies that had little association with intelligence collection, such as the Department of Agriculture and the Centers for Disease Control and Prevention.47

Federal, state, and local law enforcement bodies have access to a range of advanced tools for monitoring social media platforms and sharing the information they collect with other agencies,48 without clear oversight or safeguards for individual rights.49 For example, emails obtained by the Intercept indicate that the US Marshals Service used services from Dataminr, a social media monitoring company, to track protests related to the Dobbs decision from May to July 2022.50 In October 2022, Senator Wyden disclosed that DHS had compiled profiles of people active in Portland, Oregon, protests that included their social media activity, lists of family members and friends, and travel history, despite their posing “no threat to homeland security.”51 Local police have also created fake social media accounts to infiltrate users’ networks and gain access to more personal information.52 In September 2023, the Brennan Center for Justice released DHS documents that pointed to the routine use of fake social media accounts by CBP and Immigration and Customs Enforcement (ICE).53

In 2019, the Department of State enacted a new policy that vastly expanded its collection of social media information.54 It required people applying for a US visa, numbering about 15 million each year, to provide their social media user names, their email addresses, and their phone numbers going back five years.55 In February 2022, DHS and CBP proposed requiring applicants for entry under the Visa Waiver Program to provide their social media handles.56 404 Media reported in August 2023 that CBP had purchased software from a company that purports to sell emotion-recognition technology, for use in screening travelers’ social media accounts.57

Dozens of law enforcement agencies have access to cell-site simulators or IMSI (international mobile subscriber identity) catchers—commonly known as “stingrays” after a prominent brand name—that mimic mobile network towers and cause nearby phones to send identifying information; the technology enables police to track targeted phones or determine the phone numbers of people in a given area. As of 2018, the American Civil Liberties Union (ACLU) had identified 75 agencies across the country that used such systems.58 Several courts have affirmed that police must obtain a warrant before employing stingray technology.59 In February 2023, the DHS Office of the Inspector General found that the US Secret Service and ICE had failed to follow federal policies, including privacy statutes, when using stingrays.60

In September 2022, the Associated Press reported on the extent to which local police have access to Fog Reveal, a subscription product that collects and analyzes huge amounts of commercially available location data generated by mobile applications.61

C6 0-6 pts
Does monitoring and collection of user data by service providers and other technology companies infringe on users’ right to privacy? 4 6

There are few legal constraints on the collection, storage, and transfer of data by private or public actors in the United States. ISPs and content hosts collect vast amounts of information about users’ online activities, communications, and preferences. This information can be subject to government requests for access, typically through a subpoena, court order, search warrant, or national security letter.

In general, the country lacks a comprehensive federal data-protection law that would limit how private companies can use personal information and share it with government authorities, though a number of bills have been proposed.1 The draft American Data Privacy and Protection Act, introduced in June 2022 but not reintroduced in the current Congress as of early September 2023, would minimize the personal data collected by companies, allow users to opt out of data transfers, and provide the FTC with enforcement power, among other provisions.2 In July 2023, after the coverage period, the House Judiciary Committee approved the bipartisan Fourth Amendment Is Not For Sale Act, which would prohibit law enforcement and intelligence agencies from buying sensitive personal information like geolocation data from private companies and would instead require the agencies to obtain a warrant, among other measures.3 It was unclear when the bill would receive a vote from the full House of Representatives.4

Given the lack of a comprehensive federal law, the FTC in August 2022 announced that it was seeking public comment on whether the agency should institute new regulatory restrictions to limit harmful commercial surveillance.5

Most legislative activity on data privacy has occurred at the state or local level.6 Two California laws, the 2018 California Consumer Privacy Act (CCPA) and the 2020 California Privacy Rights Act (CPRA),7 allow state residents to obtain information from businesses about how their personal data are collected, used, and shared.8 Among other powers granted to them under the CPRA, consumers can request that personal information held by a business be corrected, opt out of automated decision-making technology, and opt out of certain information sharing.9 As of July 2023, six states had active bills on comprehensive data privacy moving through their legislatures, and five states had passed new comprehensive data privacy laws since the beginning of the year.10

Under the USA FREEDOM Act of 2015, companies are permitted to report in granular detail on certain types of government requests, subject to some constraints.11 In 2019, documents obtained through a Freedom of Information Act (FOIA) request revealed that the FBI had used national security letters—a form of secret administrative subpoena that the bureau can issue to demand certain types of communications and financial records—to access personal data from a much broader group of entities than previously understood,12 including Western Union, Bank of America, Equifax, TransUnion, the University of Alabama at Birmingham, Kansas State University, major ISPs, and tech and social media companies.

Separately, the government may request that companies store targeted data for up to 180 days under the 1986 Stored Communications Act (SCA).13

In 2018, the Supreme Court ruled narrowly in Carpenter v. United States that the government is required to obtain a warrant in order to access seven days or more of subscriber location records from mobile service providers.14 The ruling also diminished, in a limited way, the third-party doctrine—the idea that Fourth Amendment privacy protections do not extend to most types of information that are handed over voluntarily to third parties, such as telecommunications companies.15

The scope of law enforcement access to user data held by companies was expanded earlier in 2018 under the Clarifying Lawful Overseas Use of Data (CLOUD) Act.16 The act stipulated that law enforcement requests sent to US companies for user data under the SCA would apply to records in the company’s possession, including overseas. The CLOUD Act also allowed certain foreign governments to enter into bilateral agreements with the United States and then petition US companies to hand over user data,17 bypassing the “mutual legal assistance treaty” (MLAT) process.18 In 2019, the United States and the United Kingdom signed the first Bilateral Data Access Agreement under the CLOUD Act, and in December 2021,19 the United States and Australia entered a similar pact.20

User information is otherwise protected under Section 5 of the Federal Trade Commission Act (FTCA), which has been interpreted to prohibit internet entities from deceiving customers about what types of personal information are being collected from them and how they are used.

Private companies may comply with both legal demands and voluntary requests for user data from the government. A January 2023 report from the ACLU disclosed a database of sensitive financial information that DHS and Arizona officials had obtained from money-order companies.21 In October 2021, Vice News reported on an FBI document that clarified what data service providers collect and store, how the bureau and other law enforcement bodies can obtain location information from the providers without a warrant, and what tools agencies have to analyze the information provided.22 In November 2021, the transparency organization Property of the People released a previously unreported FBI document that showed the extent to which certain messaging platforms—like WhatsApp, Signal, iMessage, and Viber—store user data that can be accessed via warrants or subpoenas.23

Government bodies have purchased phone location data to aid in investigations and law enforcement, sidestepping judicial and other forms of oversight.24 In June 2023, the Office of the Director of National Intelligence declassified a January 2022 report finding that intelligence agencies purchase data on the commercial market, including location data, under “policies that may not accord sufficient protection.”25 In July 2022, the ACLU published thousands of pages of records indicating that DHS agencies including CBP, ICE, the Secret Service, and the Coast Guard had purchased huge volumes of location information pulled from mobile apps.26

The Dobbs decision reignited calls for Congress to pass a privacy law and for companies to limit the data they collect and share with state officials, particularly in states where abortion had been criminalized after Dobbs and where such information could be used in prosecutions.27 A Gizmodo investigation identified 32 data brokers selling information from an estimated 2.9 billion profiles of people determined to be pregnant or who searched for maternity products online, as well as 478 million customer profiles categorized as “interested” in becoming or “intended” to become pregnant.28 Vice News similarly reported that the data broker SafeGraph was selling aggregated data, including location information, of people who visited abortion and reproductive health clinics.29 In response to these concerns, the FTC announced that it would use the full extent of its legal authority to protect Americans against companies that exploit health, location, and other sensitive information.30 In April 2023, the commission and other federal bodies improved safeguards for health data by expanding definitions of “personally identifiable health information,” restricting the use of some marketing technologies for health care, and extending protections for patient records under the 1996 Health Insurance Portability and Accountability Act to cover consumer health data.31

Facebook, complying with a search warrant sent in June 2022, provided Nebraska police with private messages between Celeste Burgess, a 19-year-old, and her mother Jessica Burgess as part of a felony case related to an alleged abortion.32 In July 2023, after the coverage period, Celeste Burgess was sentenced to 90 days in jail; Jessica Burgess faced up to five years in prison and was expected to be sentenced in the fall of 2023.33 The incident prompted renewed calls for the platform to encrypt its messaging services.34

Police issue “geofence” warrants to gain access to information from electronic devices within a given geographic area, raising due process and proportionality concerns. In May 2020, during protests in response to George Floyd’s murder, police in Minneapolis obtained a warrant compelling Google to deliver account data for anyone within a specified area of the city.35 In August 2020, two federal judges in separate opinions ruled that such broad location-based warrants violate the Fourth Amendment.36 The Fourth Circuit Court of Appeals was expected to hear arguments in United States v. Chatrie, the first geofence search case to reach a federal court of appeals, in late 2023.37

C7 0-5 pts
Are individuals subject to extralegal intimidation or physical violence by state authorities or any other actor in relation to their online activities? 3 5

Internet users are generally free from extralegal intimidation or violence by state actors. However, online harassment is a long-standing and growing problem in the United States. Women and members of marginalized racial, ethnic, and religious groups are often singled out for such threats and mistreatment. A 2021 report from the Pew Research Center found that 41 percent of adults in the United States had experienced online harassment, with 33 percent of women under 35 reporting that they had faced sexual harassment online.1

In the periods surrounding both the 2022 midterm elections and the 2020 general elections, people involved with election administration and certification have faced online harassment, due in part to conspiracy theories about their role in supposed fraud schemes (see B5 and B7). A March 2022 poll conducted by the Brennan Center for Justice found that one in six officials had received threats—often via social media—related to their election work, and that election workers were leaving their jobs in growing numbers as a result of safety concerns.2 According to a survey published in early 2022 by the Brennan Center, one in five election officials did not plan to continue serving through 2024, pointing to stress and politicians’ attacks on the system as reasons for leaving.3

For example, throughout late 2022, a Texas resident who followed conspiracy theories about voter fraud in Arizona posted online death threats aimed at Stephen Richer and Tom Liddy, both Maricopa County election officials, and their families.4 In June 2022, Shaye Moss, a former Georgia election worker, testified before the House of Representatives’ January 6 Committee about how violent and racist messages, including death threats, sent via text and social media had upended her life. The threats began after former president Trump and his lawyer Rudolph Giuliani smeared Moss and her mother as part of a conspiracy theory about fake ballots during the 2020 elections; the smears were then shared by far-right online outlets.5

Scientists and government health officials have faced increased online harassment, including threats of violence, amid the COVID-19 pandemic.6 In December 2022, Anthony Fauci, then outgoing chief medical adviser to the president, disclosed that he and his family had received regular online harassment, including credible death threats, often based on false or misleading information about his work.7

In general, online harassment and threats, including doxing, disproportionately affect women and members of marginalized demographic groups.8 A June 2023 report from the Anti-Defamation League found that 51 percent of transgender people surveyed had experienced online harassment in the past 12 months, the highest of any demographic group surveyed, followed by 47 percent of lesbian, gay, bisexual, or queer people, and 38 percent of Black or Muslim people.9 A 2022 report found that women mayors and mayors of color reported higher rates of abuse and harassment, including online, compared with their male and non-Hispanic White peers.10 For example, a Latina local official interviewed for a May 2023 Princeton University report said that she had been targeted with racialized death threats, including photos of lynchings, on social media.11

New York City councilman Erik Bottcher’s office and home were vandalized in December 2022 after he posted a video on Twitter about confronting anti-LGBT+ protesters.12 In June 2022, a California state senator was targeted with bomb threats after he posted on Twitter to mock a bill that sought to ban drag shows in the presence of minors.13 According to a December 2022 report from the Human Rights Campaign, 24 different medical providers and hospitals experienced targeted online harassment, as well as bomb threats, because they provided gender-affirming care or discussed such services on their websites.14

Online journalists are at times exposed to physical violence or intimidation by police, particularly while covering protests. During a protest in Los Angeles against the Dobbs decision in June 2022, a reporter for the online news site LA Taco was repeatedly shoved by police officers, despite clear identification as a member of the press.15 Beyond isolated cases of violence, US-based journalists have faced growing online harassment. In a PEN America survey conducted between June and October 2021, 58 percent of more than 1,000 journalists and editors reported experiencing one or more forms of harassment, most often online, including via email, trolling, doxing, or “catfishing.”16

The White House Task Force to Address Online Harassment and Abuse, created in June 2022,17 released recommendations in March 2023 that included the establishment of a National Resource Center on Cybercrimes against Individuals, the issuance of grants for law enforcement training on survivor support, and initiatives for increased accountability and research on online harassment and abuse.18

C8 0-3 pts
Are websites, governmental and private entities, service providers, or individual users subject to widespread hacking and other forms of cyberattack? 1 3

Cyberattacks pose an ongoing threat to the security of websites and networks in the United States. Civil society groups, journalists, and politicians have also been subjected to targeted technical attacks. The Supreme Court’s Dobbs decision in June 2022 renewed digital security concerns among many Americans and raised the risk of technical attacks on websites associated with abortion.1

Media organizations sometimes experience cyberattacks. In May 2023, the Philadelphia Inquirer briefly closed its newsroom because of a ransomware attack.2 Also in May, the news site Black Star News was taken offline and had articles removed from its website; the outlet indicated that the cyberattack may have been motivated by its reporting.3

Some attacks have been traced to foreign actors. In May 2023, a China-based hacking group gained access to the email accounts of about 25 organizations, including US government agencies, according to a Microsoft investigation. Subsequent disclosures indicated that the Department of Commerce, including the email account of Secretary Gina Raimondo, was among those affected by the breach.4 In June 2023, after the coverage period, the Department of Energy and several other federal agencies were affected by a ransomware campaign in which Russia-based hackers targeted several hundred companies.5

Ransomware and other types of cyberattacks against federal, state, and local government institutions are common. Florida’s Supreme Court was one of many victims of a global ransomware attack in February 2023, which affected servers used in state court administration.6 The same month, the City of Oakland, California, experienced a ransomware attack that led to network outages, the declaration of a state of emergency, and a data dump of over 600 gigabytes of information on the internet, including social security numbers, home addresses, medical data, and other confidential information from current and former city workers and residents who had used City of Oakland websites or filed claims against the city.7 Also in February 2023, the US Marshals Service discovered a major ransomware attack that exposed sensitive information, including the personal information of its employees and people accused or convicted of crimes.8
