United States

Status: Free
Overall Score: 75 / 100
A Obstacles to Access: 21 / 25
B Limits on Content: 29 / 35
C Violations of User Rights: 25 / 40
Last Year's Score & Status: 76 / 100, Free
Scores are based on a scale of 0 (least free) to 100 (most free). See the research methodology and report acknowledgements.

Overview

For the fifth year in a row, internet freedom declined in the United States. The spread of conspiracy theories and manipulated content about the November 2020 elections threatened the core of American democracy, culminating in outgoing president Donald Trump’s incitement of a violent attack on the US Capitol in a bid to halt certification of the election results on January 6, 2021. While the internet in the United States remains vibrant, diverse, and largely free from state censorship, government authorities in multiple cases responded to nationwide protests against racial injustice in 2020 with intrusive surveillance, harassment, and arrests. Separately, a number of new policies and proposed laws signaled a potential shift in approach by the new administration of President Joseph Biden, including his withdrawal of one Trump-era executive order aimed at reducing protections against intermediary liability and a pair of others that would have effectively banned the Chinese-owned platforms WeChat and TikTok.

The United States is a federal republic whose people benefit from a vibrant political system, a strong rule-of-law tradition, robust freedoms of expression and religious belief, and a wide array of other civil liberties. However, in recent years its democratic institutions have suffered erosion, as reflected in partisan pressure on the electoral process, bias and dysfunction in the criminal justice system, flawed and discriminatory policies on immigration and asylum seekers, and growing disparities in wealth, economic opportunity, and political influence.

Key Developments, June 1, 2020 – May 31, 2021

  • Legislators increased funding for broadband connectivity and other internet services in a COVID-19 relief package and a proposed infrastructure bill. The Emergency Broadband Benefit Program, part of the COVID-19 package passed in December 2020, provided nearly four million people with discounts for internet services and related devices (see A1 and A2).
  • Citing free speech and other legal concerns, federal judges in September 2020 blocked a pair of August 2020 executive orders from President Trump that would have halted transactions between US individuals or entities and the Chinese-owned social media applications TikTok and WeChat. President Biden rescinded the executive orders in June 2021, after the coverage period, but directed the Department of Commerce to evaluate the potential national security risks associated with apps that are owned, controlled, or managed by “foreign adversaries” (see B2).
  • In May 2021, President Biden revoked a Trump administration executive order entitled “Preventing Online Censorship,” which sought to limit the intermediary liability protections provided by Section 230 of the Communications Act, as amended by the Telecommunications Act of 1996 (see B3).
  • A surge in false, conspiracist, misleading, and incendiary content surrounding the November 2020 elections contributed directly to the violent attack on the Capitol on January 6, 2021. In response to President Trump’s incitement of his supporters, several social media platforms suspended or permanently banned his accounts (see B3, B5, and B7).
  • Amid nationwide protests against racial injustice during the summer of 2020, enhanced government surveillance—as well as intimidation, harassment, and arrests linked to online activity—infringed on people’s freedom to use digital technology to associate and assemble (see B8, C3, C5, and C7).
  • In May and June 2021, new disclosures revealed that the Department of Justice (DOJ) under President Trump secretly obtained the phone records of journalists, politicians, and government staff members as part of its investigations into leaks of government information, sparking broad public backlash. The DOJ later announced that it would no longer secretly collect journalists’ records (see C6).
  • In one of the largest and most sophisticated hacks in recent years, the technology company SolarWinds was subjected to a cyberattack attributed to the Russian government. The attackers infiltrated the systems of government agencies, private companies, think tanks, and civil society organizations. The Biden administration responded with sanctions against the Russian government in April 2021 (see C8).

A Obstacles to Access

A1: 0-6 pts
Do infrastructural limitations restrict access to the internet or the speed and quality of internet connections? Score: 6 / 6

The United States has the third-largest number of internet users in the world,1 but penetration rates and broadband connection speeds lag behind those of a number of other economically developed countries.

With respect to broadband access per 100 people, the United States is ranked 18th out of 37 countries, trailing global leaders Switzerland, France, Denmark, the Netherlands, and Norway. Several reports identify different penetration rates. For example, in April 2021, the Pew Research Center estimated that 93 percent of US adults use the internet,2 with 85 percent owning a smartphone.3 Similarly, the International Telecommunication Union found that 88.5 percent of the population used the internet in 2019.4 According to the Federal Communications Commission (FCC), the nation’s communications regulator, 94 percent of US residents have access to fixed-line or mobile broadband services.5 However, numerous sources have documented how the FCC’s methodology leads to a significantly exaggerated figure.6 Poorly measured and inaccurate broadband deployment figures can undermine efforts to end disparities in access (see A2).7

The speed-testing company Ookla reported the average US mobile internet download speed to be 88.06 Mbps in August 2021, ranking it 18th worldwide. It found the average fixed broadband download speed to be 195.45 Mbps, or the 12th fastest worldwide.8

Infrastructural shortcomings continue to affect broadband networks, and severe weather disrupted internet access throughout the coverage period. For example, in February 2021, an extreme winter storm in Texas left many users with limited internet service,9 while a 2020 tropical storm temporarily cut off the internet for more than a million people in New Jersey.10 High temperatures in California in August 2020 resulted in an overtaxed energy grid that forced officials to cut power across the state.11

Congressional leaders put forward several plans to modernize the nation’s telecommunications networks during the year, including as components of a comprehensive plan to address climate change.12 In April 2021, President Biden proposed a multibillion-dollar investment in internet infrastructure as part of a sweeping infrastructure bill.13 A modified version of the proposal, which allocated $65 billion to reduce disparities in access, passed the Senate in August, after the coverage period.14

Fifth-generation (5G) mobile networks are available to an estimated 60 percent of the population, with coverage skewed toward urban areas.15 Some observers argue that the United States has “fallen behind in the competition for leadership of the 5G transition.”16 Several US policymakers have raised concerns about the national security and internet freedom implications of China’s position as a global leader in 5G development.17 In June 2020, the FCC designated two China-based telecommunications companies and 5G technology providers—Huawei and ZTE—as national security threats.18

Between December 2019 and March 2020, the FCC completed the large-scale Auction 103 for 5G millimeter-wave spectrum,19 which continues to be instrumental for 5G service. The FCC concluded Auction 107 in 2021, with bidders spending $80.9 billion on additional spectrum to improve their 5G networks.20

A2: 0-3 pts
Is access to the internet prohibitively expensive or beyond the reach of certain segments of the population for geographical, social, or other reasons? Score: 2 / 3

Older members of the population, those with less education, households with lower socioeconomic status, and people living in rural areas or on tribal lands tend to experience the most significant barriers to access.1 High costs and limited provider options also impede internet access (see A4).2 Younger adults, as well as people of color and those with lower household incomes, are especially prone to being dependent on smartphones for their internet access.3

The cost of broadband internet access in the United States exceeds that in many countries with similar penetration rates, creating an “affordability crisis,” according to New America’s Open Technology Institute.4 In 2021, the United States ranked 131 out of 211 countries for the average cost of a fixed-line broadband package per month.5 In May 2021, the advocacy group Free Press released a report concluding that the average US household’s internet service expenditures grew by 19 percent from 2016 to 2019, an increase that outpaced the rate of inflation during that same period.6

Tribal communities are among the least connected in the country,7 with an estimated 18 percent of this population having no internet access at home.8 The FCC calculates that more than 32 percent of tribal-land residents in the continental United States do not have high-speed fixed terrestrial or mobile service.9 Broadband expansion rates lag in these communities compared with other rural areas.10

Expanding broadband to rural parts of the country is a long-standing policy challenge; despite numerous grant and subsidy programs, progress has been slow.11

Older residents use the internet at lower rates than the rest of the population. In 2021, researchers found that almost 22 million US seniors (42 percent) lacked access to broadband at home.12

Broadband access, and particularly affordability, remains a priority for lawmakers. In March 2021, members of Congress introduced the Accessible, Affordable Internet for All Act, a $94 billion proposal aimed at making the internet more accessible and affordable across the country.13

According to the National Telecommunications and Information Administration (NTIA), 57 different federal programs fund or support industry, state, local, and community broadband needs.14 Notable among them is the FCC’s Lifeline program, which allows companies to offer subsidized phone plans to low-income households.15 As of June 2021, approximately 9.6 million people relied on Lifeline for internet access.16 The FCC, under Chairman Ajit Pai’s leadership from 2017 to early 2021 (see A5), announced administrative adjustments to increase minimum service requirements and combat alleged fraud in the program,17 including new financial disclosure obligations and data-usage monitoring.18 A number of civil society groups objected, citing invasive practices that put Lifeline subscribers’ privacy at risk.19 The FCC estimated that in 2021 only 26 percent of those eligible would participate in Lifeline.20

At the state level, a total of 34 legislatures passed or implemented broadband-related resolutions during their 2020 sessions,21 and at least 36 governors highlighted broadband infrastructure during their 2021 “state of the state” speeches.22 For instance, in April 2021, New York passed a new bill mandating that service providers offer a $15-per-month broadband service option.23

Disparities in access remained acute due to the ongoing COVID-19 pandemic.24 The FCC took steps to reduce the impact of restrictions to the Lifeline program during the pandemic, such as waiving certain recertification requirements for enrollees.25 The Emergency Broadband Benefit Program, created in December 2020 as part of a COVID-19 relief package, provides discounts for internet service and devices for certain qualifying Americans who are undergoing pandemic-related financial hardships.26 By the end of the coverage period, nearly four million people had utilized the benefit, although a majority of eligible households had not signed up.27 President Biden’s proposed infrastructure bill contained provisions that would extend the Emergency Broadband Benefit Program (see A1).28

A3: 0-6 pts
Does the government exercise technical or legal control over internet infrastructure for the purposes of restricting connectivity? Score: 6 / 6

The US government imposes minimal restrictions on the public’s ability to access the internet. Private telecommunications companies, including AT&T and Verizon, own and maintain the backbone infrastructure. A government-imposed disruption of service would be highly unlikely and difficult to achieve due to the multiplicity of connection points to the global internet.

Government authorities have previously limited wireless internet connectivity in emergency situations. In 2011, San Francisco’s Bay Area Rapid Transit (BART) system limited mobile internet and telephone service on its platforms ahead of planned protests over the transit police force’s killing of a homeless man.1 In 2005, after the bombing of the London transit system raised fears that mobile phones could be used to detonate explosives, the Port Authority of New York and New Jersey and the Metropolitan Transportation Authority blocked mobile service within four tunnels in New York City for almost two weeks.2

In 2006, a federal task force approved Standard Operation Procedure 303, which codified wireless network restrictions during a “national crisis.”3 Just what constitutes a “national crisis,” and what safeguards exist to prevent abuse, remain largely unknown. In 2014, the FCC issued an enforcement advisory to clarify that it is illegal for state and local law enforcement agencies to jam mobile networks without federal authorization.4

A bipartisan group of federal legislators introduced the Unplug the Internet Kill Switch Act and the Preventing Unwarranted Communications Shutdowns Act in September and October 2020, respectively.5 The bills, which had not been reintroduced in the 2021–22 congressional session as of May 2021, would have limited the president’s power to restrict telecommunications services under Section 706 of the Communications Act of 1934 (as amended).6

A4: 0-6 pts
Are there legal, regulatory, or economic obstacles that restrict the diversity of service providers? Score: 4 / 6

The broadband industry in the United States has trended toward concentration. Many consumers—estimated at over 83 million people1—have only one broadband provider in their area, and these de facto local monopolies have exacerbated concerns about high cost and accessibility.2 Some critics directly attribute deficiencies in the country’s internet infrastructure to insufficient competition.3

From 2019 to 2020, the number of subscribers to the nation’s largest fixed-line broadband internet providers grew by about 4.86 million, the largest annual increase since 2008.4 Comcast leads the market with approximately 30.6 million subscribers overall. The second-ranked provider, Charter Communications, has more than 28.8 million subscribers. Far behind in third place is Cox, with an estimated 5.38 million subscribers. The Institute for Local Self-Reliance reported that “Comcast and Charter maintain an absolute monopoly over at least 47 million people.”5

Further consolidation of the telecommunications sector threatens to limit consumer access to information and communication technology (ICT) services and content. In February 2019, the US Court of Appeals for the District of Columbia Circuit upheld the mobile service provider AT&T’s acquisition of the media and entertainment company Time Warner,6 despite the DOJ’s challenge to the merger.7 Less than a year later, reports of financial problems at AT&T surfaced, with customers facing price increases.8

The FCC has attempted to address concerns about reduced competition and limited consumer access in recent merger approvals. The commission included provisions within a 2016 Charter–Time Warner Cable deal that required Charter to expand broadband availability, including by establishing new cable lines in poorly served areas and providing affordable access to low-income families.9 Other conditions prohibited the companies from privileging their cable television services over online video competitors.10 In 2015, regulators had blocked a proposed merger between Time Warner Cable and Comcast, citing concerns about Comcast’s ability to interfere with over-the-top streaming services (such as Netflix) as well as increased market concentration.11

Onerous regulations limit the potential of municipal or publicly owned broadband suppliers to challenge the market’s consolidation, deliver higher-quality and more affordable internet service, and reach underserved communities.12 The Institute for Local Self-Reliance identified 19 states with restrictive legislation that impedes the development of community broadband and another five states with other legal, regulatory, or economic barriers to the establishment of municipal networks.13 However, bills intended to eliminate such restrictions were introduced over the past year in Arkansas, Idaho, Tennessee, Washington, and Montana. For example, the Arkansas bill, which passed in January 2021, permits cities and counties to create their own broadband infrastructure.14

Following a decade of consolidation, three national providers—AT&T, Verizon, and T-Mobile—now dominate the wireless market. At the start of the reporting period, Verizon led the group with approximately 119 million subscribers, followed by T-Mobile with 98.3 million and AT&T with 93 million.15

The FCC and the DOJ approved a merger between Sprint and T-Mobile in May 2019 and July 2019, respectively,16 even though regulators had signaled their disapproval in 2011 and 2014,17 and despite legal challenges to the merger from state attorneys general.18 Antitrust experts have criticized the deal and called for it to be reversed.19

A5: 0-4 pts
Do national regulatory bodies that oversee service providers and digital technology fail to operate in a free, fair, and independent manner? Score: 3 / 4

The FCC is tasked with regulating radio and television broadcasting, interstate communications, and international telecommunications that originate or terminate in the United States. It is formally an independent regulatory body, but critics on both sides of the political spectrum argue that it has become increasingly politicized in recent years.1

The agency is led by five commissioners who are nominated by the president and confirmed by the Senate, with no more than three commissioners from one party. Ajit Pai, whom President Trump designated as chairman, served in that role until January 2021.2 By the end of the coverage period, President Biden had yet to nominate a new FCC head, with Jessica Rosenworcel, a commissioner originally nominated by former president Barack Obama, serving as acting chair.3

Other government agencies, such as the Department of Commerce’s NTIA, play advisory or executive roles on telecommunications, economic, and technology policies and related regulations. The US Department of Agriculture is also an important source of funding for broadband initiatives and wields significant influence on policy.4

Under Pai’s leadership, the FCC took steps toward deregulating the telecommunications industry.5 In March 2017, the commission froze broadband privacy guidelines that had been adopted the previous October.6 The rules would have required broadband providers to obtain opt-in consent from consumers before using and sharing their personal information.7 In February 2017, the FCC ended its review of zero-rating practices—which allow consumers to access certain content or services without it counting toward their data caps—as part of its movement away from net neutrality principles.8 Critics argue that the perpetuation of zero-rating services, while modestly expanding internet access, has the potential to harm consumers by stifling market competition and limiting the diversity of online content available to some users.9 Other observers suggest that the FCC’s failure to create vigorous standards and enforce policies aggressively, for instance by raising the definition of broadband to encompass higher speeds or penalizing noncompliant companies, contributes to long-standing disparities in access.10

In December 2017, the FCC repealed its 2015 Open Internet Order, often referred to as the net neutrality rule, weakening its regulatory authority over internet service providers (ISPs).11 The repeal decision, known as the Restoring Internet Freedom Order,12 effectively allowed ISPs to speed up, slow down, or restrict the traffic of selected websites or services at will. Civil society and public interest groups argued that these changes disadvantaged consumers in various ways,13 and that the FCC had abandoned its responsibility to protect a free and open internet.14

Since 2018, numerous state legislatures, attorneys general, and civil society groups have taken up efforts to restore net neutrality (see B6).15 In October 2019, a federal appeals court upheld the FCC’s repeal of the Open Internet Order,16 although it ruled that the commission cannot preemptively block states from instituting their own laws intended to safeguard net neutrality. Several states, including California,17 Oregon,18 Vermont,19 Washington,20 Colorado,21 Maine,22 and New Jersey,23 have enacted net neutrality laws, and 20 other states, the District of Columbia, and Puerto Rico introduced legislation during 2020.24 The governors of Montana and New York have signed executive orders barring state agencies from conducting business with ISPs that violate net neutrality.25

Proponents of net neutrality are guardedly optimistic about the potential revival of protections under President Biden. The acting FCC chair staunchly backs the principle;26 Senator Edward Markey, a Democrat from Massachusetts, has signaled his intent to introduce legislation that would “restore net neutrality protections”;27 and Columbia University law professor Tim Wu—who coined the term “net neutrality”—is now a member of the National Economic Council, a White House advisory body.28 In July 2021, after the coverage period, President Biden signed an executive order that, among other things, contained several measures meant to strengthen policies related to net neutrality.29

B Limits on Content

B1: 0-6 pts
Does the state block or filter, or compel service providers to block or filter, internet content, particularly material that is protected by international human rights standards? Score: 6 / 6

In general, the government does not force ISPs or content hosts to block or filter online material that would be considered protected speech under international human rights law. This includes political speech.

B2: 0-4 pts
Do state or nonstate actors employ legal, administrative, or other means to force publishers, content hosts, or digital platforms to delete content, particularly material that is protected by international human rights standards? Score: 3 / 4

The government does not directly censor political or social viewpoints online, although legal rules do restrict certain types of content. Intermediaries can face copyright liability if they do not honor notice-and-takedown provisions of the Digital Millennium Copyright Act (DMCA). They also run the risk of criminal liability for failing to remove content such as child sexual abuse material (CSAM) after becoming aware of it. Section 230 of the Communications Act, as amended by the Telecommunications Act of 1996—commonly known as Section 230 of the Communications Decency Act—is a long-standing rule meant to protect freedom of speech online that remained a subject of debate among policymakers during the coverage period (see B3). Broadly speaking, content hosts and social media platforms are the primary decision-makers when it comes to the provision, retention, or moderation of online content, assuming the content is not prohibited under existing legal guidelines (see B3).

In August 2020, the Trump administration issued two executive orders that would have effectively banned the Chinese-owned communication platforms WeChat and TikTok on the grounds that they presented threats to national security.1 The Department of Commerce subsequently announced that the two services would be removed from app stores in the United States in late September 2020.2 However, a federal court in California blocked the WeChat ban, citing free speech concerns.3 Similarly, a federal judge in Washington, DC, granted an injunction against the TikTok prohibition before it took effect,4 and a second federal judge blocked the same order in December 2020.5 In June 2021, after the coverage period, President Biden revoked the two executive orders issued under Trump and replaced them with a new order directing the Department of Commerce to evaluate the potential national security risks associated with apps that are owned, controlled, or managed by “foreign adversaries.”6

Government officials have appealed to social media companies to remove, restore, or moderate specific content. In June 2020, then acting secretary of homeland security Chad Wolf called on social media platforms to more aggressively monitor content connected with national protests against racial injustice.7 In July 2020, responding to a civil rights audit,8 Senator Michael Bennet and then senator Kamala Harris, both Democrats, wrote to Facebook and requested that the company bolster its “efforts to protect civil rights, remove hate speech, and combat voter suppression.”9 In January 2021, following the attack on the US Capitol, three senators and four state attorneys general requested that Facebook stop carrying advertisements for military gear.10

Although there is no evidence that direct government coercion on users to remove online content is systematic or widespread, users have occasionally experienced such pressure. In June 2021, a police chief in Pennsylvania summoned a Facebook user to the local police station to discuss posts in which the man had criticized the department.11 The police chief threatened the man with spurious felony charges if the posts were not taken down. In March 2020, a police officer in Wisconsin threatened to charge a teenager and her family with disorderly conduct and take them to jail unless she deleted Instagram posts about her COVID-19 infection.12

People in the United States have also occasionally had their content restricted based on requests from foreign governments. In one prominent case, the New York Times reported in June 2020 that the video-conferencing platform Zoom, acting on a request from the Chinese government, temporarily suspended the account of a US-based Chinese activist who planned to host a meeting to commemorate the deadly 1989 crackdown on prodemocracy protests in Beijing’s Tiananmen Square.13

Section 230 of the Communications Decency Act shields online providers and content hosts from legal liability for most material created by users, including lawsuits alleging defamation or injurious falsehoods.14 However, there are exceptions to this immunity under federal criminal law, intellectual-property law, laws to combat sex trafficking, and laws protecting the privacy of electronic communications. Section 230 also ensures legal immunity for social media companies and other content providers that act in good faith to remove content when it violates their terms and conditions of service or their community guidelines.15

The Allow States and Victims to Fight Online Sex Trafficking Act, also referred to as SESTA/FOSTA, was signed in April 2018. The law established new liability for internet services when they are used to promote or facilitate the prostitution of another person.16 While the law’s laudable goal was to aid victims of sex trafficking, plaintiffs including advocates for sex workers’ rights and the Internet Archive have challenged it, claiming that it violates the federal constitution’s First Amendment protections for free speech. In 2019, an appellate court permitted the case to go forward.17 After the bill passed in the Senate, but before it became law, reports emerged of companies preemptively censoring content: Craigslist announced that it was removing the “personals” section from its website altogether.18 Civil society activists criticized the law for motivating companies to engage in excessive censorship in order to avoid legal action.19 Sex workers and their advocates also argued that the law threatened their safety, since the affected platforms enabled sex workers to leave exploitive situations and operate independently, communicate with one another, and build protective communities.20 In December 2019, members of Congress introduced the SAFE SEX Workers Study Act to evaluate the impact of SESTA/FOSTA on the health and safety of sex workers in the country.21 In February 2021, the bill’s sponsor, Representative Ro Khanna, a Democrat from California, signaled plans to reintroduce the study legislation.22

Section 512 of the DMCA, enacted in 1998, created new immunity from copyright claims for online service providers. However, the law’s notice-and-takedown requirements have been criticized for impinging on speech rights,23 as they may incentivize platforms to remove potentially unlawful content without meaningful judicial oversight. Early research on the DMCA found that notice-and-takedown procedures were sometimes used “to stifle criticism, commentary, and fair use.”24 In other instances, overly broad or fraudulent DMCA claims resulted in the removal of content that should be excused under provisions for free expression, fair use, or education.25 DMCA complaints have also been used as a vehicle for taking down political campaign advertisements.26 In April 2021, reporters covering the video-game industry accused the company Activision of using the DMCA to remove articles on social media that discussed leaked information about a forthcoming game in the Call of Duty series.27 That same month, the Electronic Frontier Foundation filed a lawsuit alleging that Proctorio, an exam-proctoring software company, “exploited the DMCA” to remove a student’s criticism of the company’s product from Twitter.28

In December 2020, Congress passed two pieces of copyright legislation—the Copyright Alternative in Small-Claims Enforcement Act (CASE Act) and the Protecting Lawful Streaming Act.29 A broad coalition including civil society groups, industry experts, and educators criticized the CASE Act in particular for creating an unaccountable decision-making body, increasing liability for users, and stifling innovation, among other concerns.30

B3: 0-4 pts
Do restrictions on the internet and digital content lack transparency, proportionality to the stated aims, or an independent appeals process? Score: 4 / 4

The government places relatively few restrictions on online content, and existing laws do not allow for broad government blocking of websites or removal of content. However, companies that host user-generated content, many of which are headquartered in the United States, have faced criticism in recent years for a lack of transparency and consistency when it comes to enforcing their own rules on content moderation.

Section 230 of the Communications Decency Act generally shields online sites and services from legal liability for the activities of their users, allowing user-generated content to flourish on a variety of platforms (see B2).1 Despite robust legal and cultural support for freedom of speech within the United States, the scope of Section 230 has become a focus of criticism. Concerns about CSAM, defamation, cyberbullying and cyberstalking, terrorist content, and protection of children from harmful or indecent material all contribute to the desire for reform of the platforms’ legal immunity for user-generated content, as do complaints that platforms are “over-moderating” certain political viewpoints.

Efforts to reform Section 230 beyond the 2018 SESTA/FOSTA legislation continue to gain momentum (see B2). One of the most prominent attempts to alter Section 230 came in May 2020, when President Trump signed an executive order, “Preventing Online Censorship,” that aimed to limit platform protections against intermediary liability.2 The order referred to accusations that social media platforms show “political bias” by deliberately censoring conservative views,3 despite scant evidence to support such claims. In 2019, Facebook released an inconclusive report on political bias after commissioning an audit on the issue by a former Republican senator in collaboration with a private law firm.4 In February 2021, researchers at New York University conducted another study, concluding that “the claim of anti-conservative animus is itself a form of disinformation: a falsehood with no reliable evidence to support it.”5

A coalition of civil society members, academics, and tech companies criticized Trump’s executive order as harmful to online speech and lacking in legal standing.6 Several groups also sued the Trump administration.7 In May 2021, the Biden administration rescinded the order.8

Numerous other Section 230 reforms have been proposed, with at least nine bills introduced in Congress since January 2021.9 Several of the bills—such as the Stopping Big Tech’s Censorship Act,10 the Ending Support for Internet Censorship Act,11 and the Online Freedom and Viewpoint Diversity Act12—are premised in part on claims of political bias against conservative viewpoints.

The bipartisan Platform Accountability and Consumer Transparency (PACT) Act,13 initially introduced in 2020 and refiled in 2021 with a few changes, would require online platforms to provide expanded explanations of their content moderation practices and force them to adhere to court-mandated takedown orders.14 While the bill received recognition from some as a “serious” attempt to address content moderation concerns, civil society groups, industry representatives, and scholars have raised First Amendment concerns, warned that the legislation’s takedown provision could be used for censorship, and noted the compliance burden that would be placed on smaller platforms.15

Lawmakers in several states, including Texas, Kentucky, Arizona, and North Dakota, also proposed bills aimed at regulating social media companies’ content moderation.16 In March 2021, Utah passed a law requiring that mobile devices automatically filter out pornography.17 However, the rule will not go into effect unless five other states implement similar measures, and it will sunset in 2031 in the absence of such companion legislation.

In May 2021, Florida’s governor signed a first-of-its-kind law that would prevent social media companies from suspending the accounts of political candidates for more than 14 days, but would still permit temporary bans and removal of posts that violate platform policies.18 Platforms that suspend candidates running for statewide office in violation of the law would face fines of up to $250,000 per day; similar violations involving candidates for local office would draw fines of up to $25,000 per day. The law also allows lawsuits against services if the plaintiff alleges that the platform is inconsistent in its content moderation. In June, citing First Amendment concerns, a federal court issued a preliminary injunction against enforcement of the law.19

The Children’s Internet Protection Act (CIPA) of 2000 requires public schools and libraries that receive certain federal subsidies to install filtering software that prevents users from accessing CSAM or other visual materials deemed obscene or harmful to minors. Schools and libraries that do not receive the specified subsidies are not obliged to comply with CIPA. In April 2021, Vice News reported that two public school districts in Virginia utilized a filtering service, in compliance with CIPA, that blocked health and other resources for LGBT+ teenagers.20

Companies have successfully argued that moderation decisions are an exercise of their own constitutionally protected right to set platform policies, allowing them to remove content and accounts that violate their rules. Twitter permanently banned President Trump’s account following the attack on the US Capitol on January 6, 2021, noting the “risk of further incitement of violence” (see B5 and B7).21 Facebook also indefinitely suspended Trump from Facebook and Instagram, with chief executive Mark Zuckerberg explaining that “the risks of allowing the president to continue to use our service during this period are simply too great.”22 The suspensions came after both Twitter and Facebook applied warning labels to posts by Trump that contained baseless claims about mail-in ballots and voter fraud,23 as well as those that violated platform rules against spreading COVID-19 misinformation,24 among other topics. A number of additional social media sites, including Reddit, Twitch, YouTube, Snapchat, and Discord, also banned or temporarily restricted Trump’s accounts.25

The Facebook Oversight Board—a structurally independent entity composed of global experts who review Facebook’s content moderation decisions and assess whether they align with the company’s policies and values and with international human rights norms26—upheld Facebook’s decision to suspend Trump from the platform in May 2021. However, the board instructed the company to revisit the matter within six months, arguing that there were no clear standards for an indefinite suspension.27 In response, Facebook specified that Trump’s ban would last two years and that he would only be allowed back if “the risk to public safety has receded.”28

Facebook, Twitter, and YouTube have all faced criticism for insufficient transparency regarding the enforcement of their respective community standards or terms of service, as well as for the effects of this enforcement on marginalized populations.29 An independent civil rights audit of Facebook, released in July 2020, raised concerns about hate speech, algorithmic and discriminatory bias, and content moderation policies.30 One outlet reported in April 2021 that Google had blocked advertisers from seeking out racial justice or Black Lives Matter videos to place ads, while allowing them to select YouTube videos and channels related to White supremacist and other hateful search terms.31 In April 2021, an academic audit concluded that Facebook’s advertising system discriminated against women by not showing them certain job ads due to their gender.32 In August 2019, a group of LGBT+ content creators filed a lawsuit against YouTube on First Amendment and civil rights grounds, alleging that the company unevenly and disproportionately regulated and suppressed LGBT+ content.33

In May 2021, the Biden administration joined the Christchurch Call, an agreement between tech companies and numerous national governments to combat terrorist content online.34 The pledge had been organized in 2019 after a White supremacist gunman live-streamed his attacks on mosques in Christchurch, New Zealand.35 The Trump administration had opted not to sign on due to free speech concerns.36

Companies that serve as providers of internet infrastructure also enforce their own discretionary speech policies. Apple, Amazon, and Google removed the social media platform Parler from their app stores and hosting services because of violent content on the app in relation to the January 2021 attack on the Capitol (see B5 and B7).37 In August 2019, Cloudflare had dropped services for 8chan, an online forum that hosted manifestos written by the perpetrators of mass shootings.38

B4: 0-4 pts
Do online journalists, commentators, and ordinary users practice self-censorship? Score: 3 / 4

Reports of self-censorship among journalists, commentators, and ordinary internet users are not pervasive in the United States. Women and members of marginalized communities are frequent targets of online harassment and abuse, which can induce self-censorship (see C7), but it remains unclear precisely how often these and other pressures may lead users to self-censor in practice.

Social media users may change their behavior in line with their perceptions of government surveillance. A 2016 study in Journalism & Mass Communication Quarterly found that priming participants with subtle reminders about mass surveillance had a chilling effect on their willingness to publicly express dissenting opinions online.1 Another study from October 2018 reaffirmed the impact of online surveillance on self-censorship.2

B5: 0-4 pts
Are online sources of information controlled or manipulated by the government or other powerful actors to advance a particular political interest? Score: 2 / 4

False, manipulated, and misleading information is disseminated by both foreign and domestic entities in the United States. While disinformation is propagated by actors from across the political spectrum,1 multiple academic studies, civil society reports, and real-world events have demonstrated that the tactic is disproportionately utilized by those on the right wing.2 Online disinformation directly threatened the country’s democratic stability following the November 2020 elections and contributed to the assault on the US Capitol in January 2021 (see B7).

According to the nonpartisan Election Integrity Partnership, misleading or false claims around the November vote, which generally sought to undermine public confidence, eventually coalesced into a single, larger meta-narrative about a “stolen election.” This meta-narrative then fed into the #StopTheSteal campaign—a phrase initially used in 2016 by prominent Trump ally Roger Stone.3 Researchers found that false electoral narratives were primarily spread by four categories of social media accounts: right-wing influencers who had verified, blue-check accounts; hyperpartisan and fringe media outlets; right-leaning mainstream media outlets; and political figures, including President Trump and his family members.4 Once shared, the content rapidly filtered down to ordinary users. Misleading and false information also spread from the bottom up: one-off incidents or stories that used misleading framing were posted by individual users and then quickly picked up and exaggerated or manipulated by influencers, media personalities, and political actors with larger followings to feed the overarching narrative of electoral fraud.5

All of the 21 most prominent Twitter accounts identified by the Election Integrity Partnership as spreaders of false or misleading information were associated with right-wing or conservative views. The accounts of President Trump and his sons Eric Trump and Donald Trump Jr. were listed in the top 10, while others belonged to Fox News host Sean Hannity, radio personality Mark Levin, Breitbart News, Gateway Pundit, and Charlie Kirk, who founded the pro-Trump youth organization Turning Point USA. Eight of the 12 accounts that were most engaged with false and misleading narratives on Facebook and Instagram were connected to right-wing actors, including Breitbart News as well as Project Veritas and its founder James O’Keefe. Three left-leaning pages—those run by NowThis Politics, StandwithMueller, and historian Heather Cox Richardson—were also included in the top 12 for the two platforms, although the Election Integrity Partnership noted that these accounts were referring to false information in an effort to fact-check or counteract it. Several individuals and fringe news sites that led the spread of disinformation on Twitter and Facebook were also found to be doing so on YouTube and Instagram. In a few cases, left-leaning celebrities amplified misinformation on Twitter suggesting that the US Postal Service was destroying collection boxes as part of voter suppression efforts.6

Actors spreading electoral disinformation took advantage of each platform’s unique features to ensure maximal amplification. For example, the #StopTheSteal group amassed over 320,000 members in only 22 hours, making it one of the fastest-growing groups ever on Facebook.7

The Election Integrity Partnership concluded that the surge in baseless allegations of electoral fraud online helped to propel the insurrection on January 6 (see B7). Following the Capitol attack, political figures and both mainstream and fringe media sites continued to propagate false or misleading information, including some claims that left-wing extremists were responsible for the violence.8

The findings of the Election Integrity Partnership echoed previous disinformation research. After analyzing over 55,000 online news stories, five million Twitter posts, and 75,000 posts on public Facebook pages from March 1 to August 31, 2020, Harvard University researchers concluded that disinformation about mail-in voting and election fraud was driven and reinforced by President Trump, the Republican Party, and right-wing news outlets.9

False and misleading information also spread around the protests against racial injustice in the summer of 2020. For example, the Digital Forensic Research Lab concluded that a campaign alleging the involvement of antifa—a left-wing antifascist movement—in the protests was marked by “largely spurious, decontextualized, or provably false” information.10

Political actors also spread manipulated information about the COVID-19 pandemic. In September 2020, a Cornell University study of 38 million English-language articles about the pandemic found that Trump was “the single largest driver of misinformation” on COVID-19.11 A report by the Center for Countering Digital Hate found that 65 percent of antivaccine content across Facebook and Twitter between February 1 and March 16, 2021, was attributed to only 12 users with large followings, including osteopathic physician Joseph Mercola and longtime antivaccine advocate Robert F. Kennedy Jr.12 People associated with #StopTheSteal also used that movement to promote antivaccine misinformation.13

Foreign actors orchestrated disinformation campaigns related to the racial justice protests, the November elections, and the COVID-19 pandemic.14 In September 2020, Facebook and Twitter announced that the Kremlin-backed Internet Research Agency was running a network of fake accounts and a website purporting to be a left-wing news outlet that employed American journalists.15 In August 2020, Kremlin-backed outlets RT and Ruptly were also reported to have spread a story on Twitter alleging that racial justice protesters had burned Bibles and the US flag.16

The Office of the Director of National Intelligence (ODNI) concluded that Russian president Vladimir Putin had authorized a range of Russian government entities to conduct an online influence campaign with the goal of exacerbating societal divisions in the United States, undermining Biden’s candidacy, and weakening public confidence in the electoral system ahead of the 2020 balloting.17 Some efforts were directly aimed at US officials and prominent individuals, including those tied to Trump and his administration. The report also found that the Iranian military and intelligence services, at the direction of supreme leader Ali Khamenei, carried out influence campaigns to sow division, exacerbate societal tensions, and undermine Trump’s electoral prospects, although they did not directly support Biden’s campaign.

In relation to COVID-19, US officials reported that Russian intelligence agencies were creating and amplifying online content to subvert public confidence in vaccines created by US companies.18 The research firm Graphika also found that a pro-Beijing network of accounts across YouTube, Facebook, and Twitter was working to discredit vaccines in the United States and support the Chinese government’s pandemic response.19

Online news outlets in the United States are generally free of either formal arrangements or coercive mechanisms compelling them to provide favorable coverage of the government. Yet political and economic factors can sometimes intersect to incentivize a close relationship between a political party and a given news organization.20 The 2020 election cycle featured alignment between actors in the right-wing online media sector and right-wing politicians regarding the spread of false or intentionally misleading information.21

Some domestic news outlets have been found to run covert campaigns of misleading content or disinformation. In August 2019, for example, Facebook restricted the ability of the right-wing news outlet Epoch Times to purchase advertisements on its platform after reporting revealed that the outlet was pushing conspiracist content to a vast number of US users under page names that were not explicitly associated with the media group.22 In December 2019, Facebook again removed hundreds of accounts, pages, and groups linked to the Epoch Media Group that used fake profile photos created with assistance from artificial intelligence.23 The accounts shared political information on topics such as religion, President Trump’s impeachment, and right-wing ideology. The media group remained active during the latest coverage period,24 including by spreading Chinese-language misinformation about the November 2020 elections, according to the Election Integrity Partnership.25

Reports have also alleged that private companies use coordinated teams of commentators to spread information for their commercial gain. In March 2021, the Intercept reported that Amazon created a group of employees called “ambassadors” to defend the company and its founder Jeff Bezos from criticism on social media, particularly regarding employees’ efforts to unionize.26 In May 2021, New York’s Office of the Attorney General concluded that major service providers, via the Broadband for America group, spent $4.2 million to oppose net neutrality protections in 2017, including by generating 80 percent of the 22 million public comments sent to the FCC.27

B6: 0-3 pts
Are there economic or regulatory constraints that negatively affect users’ ability to publish content online? Score: 3 / 3

There are no government-imposed economic or regulatory constraints on internet users’ ability to publish content. Online outlets and blogs generally do not need to register with, or have favorable connections to, the government to operate. Media sites can accept advertising from both domestic and foreign sources.

Experts argue that the FCC’s 2017 repeal of the 2015 Open Internet Order could result in new constraints for those wishing to publish online (see A5).1 Democratic Party lawmakers have pushed to enshrine net neutrality principles in federal legislation, an action that may be more likely under the Biden administration (see A2).

In February and June 2020, the Department of State designated nine Chinese state media companies as “foreign missions,” requiring them to report information on staffing and real-estate holdings and limiting the number of employees they can post in the United States.2 In previous years, several Chinese and Russian state media outlets had been designated as “foreign agents,” a status with other transparency requirements attached.3 In September 2020, the DOJ declared Al-Jazeera Media Network to be an “agent of the Government of Qatar,” requiring its US-based social media division to register as a foreign agent.4 Neither the “foreign agent” nor “foreign mission” designations entail any direct restrictions on an outlet’s content or ability to publish online.

B7: 0-4 pts
Does the online information landscape lack diversity and reliability? Score: 3 / 4

Score Change: The score declined from 4 to 3 because the online landscape was saturated with election-related disinformation that contributed to a violent attack on the US Capitol on January 6, 2021.

As a whole, the online environment in the United States is dynamic and diverse. Users can easily find and publish content on a range of issues, covering a variety of communities, and in multiple languages. However, an upswell of disinformation, hyperpartisan speech, and conspiracist content continues to threaten the information ecosystem, weakening trust in traditional media institutions and eroding the visibility and readership of more credible sources.1

The integrity and reliability of the information space was undermined by the rapid spread of disinformation surrounding the November 2020 elections (see B5). Several reports drew the connection between the insurrection on January 6, 2021, and the proliferation of false, misleading, and incendiary content online.2 For example, the Election Integrity Partnership concluded that electoral disinformation drove people to participate in the attack.3 BuzzFeed also reported on an internal Facebook memo finding that #StopTheSteal and similar groups helped create a highly influential movement that delegitimized the elections, promoted violence, and ultimately incited the Capitol attack.4 Following the violence, the information space continued to be unreliable. A February 2021 poll from Suffolk University and USA Today showed that 58 percent of Trump voters referred to the incident as a “mostly antifa-inspired attack that only involved a few Trump supporters.”5

Misinformation and conspiracy theories about COVID-19 have also reduced the reliability of the information space. According to Harvard University research from November 2020, some 29 percent of US adults believed that the reported number of COVID-19 deaths was inflated. The study said 28 percent of respondents believed that anti-Trump political groups in particular had exaggerated the pandemic, and 27 percent agreed that the virus was purposefully created and released.6

COVID-19 misinformation has led to offline harms or changed behavior. Academic research conducted in September 2020 found that misinformation lowered the intent of US residents participating in the study to get vaccinated by 6.4 percent.7 According to doctors, some COVID-19 patients who accepted false claims that the virus was a hoax failed to take their illnesses seriously and sought medical care too late.8

The rise of conspiracist content online in the United States is a multiyear trend.9 For example, QAnon—an online conspiracist movement alleging that key Democrats and other elites are part of an international cabal of pedophiles and that Trump is a heroic leader against the forces of evil—has grown in popularity in recent years.10 The movement has contributed to offline harms: in one case in August 2020, a man in Boston live-streamed as he fled from police with his children in the vehicle, calling for help from QAnon and accusing his wife and daughter of being part of the cabal.11

B8: 0-6 pts
Do conditions impede users’ ability to mobilize, form communities, and campaign, particularly on political and social issues? Score: 5 / 6

There are no technical or legal restrictions on individuals’ use of digital tools to organize or mobilize for civic activism. However, growing surveillance of social media and communication platforms, targeted harassment and threats, and high costs and other barriers to access have undermined people’s ability to engage in such activism.

Throughout the coverage period, members of the public frequently took to social media to organize protests against racial injustice and to provide support for the Black Lives Matter movement after the police killings of Black civilians Breonna Taylor in Kentucky and George Floyd in Minnesota in the spring of 2020.1 However, federal, state, and local law enforcement agencies increased their social media surveillance amid the protests (see C5).2 Reporting in June 2020 revealed that agents from a Federal Bureau of Investigation (FBI) terrorism task force appeared at homes or workplaces to question four people in Cookeville, Tennessee, who were involved in planning Black Lives Matter rallies on Facebook.3 In North Carolina, the FBI questioned a man and his mother two days after he jokingly posted on Twitter that he was a local leader of antifa.4

Reports also suggest that law enforcement personnel gained access to protesters’ private communications after their electronic devices were confiscated in Ohio.5 In Portland, Oregon, an internal Department of Homeland Security (DHS) document obtained by journalists showed that the department had accessed protesters’ electronic messages, including from encrypted platforms, and then disseminated such information to federal, state, and local agencies.6 The information reportedly focused largely on discussions among demonstrators about how to avoid being arrested or about police violence during protests. Moreover, according to a New York Times report in October 2020, lawmakers on the House of Representatives’ Intelligence Committee said that DHS officers had considered extracting data from protesters’ phones in Portland.7

Surveillance coupled with targeted harassment has sometimes chilled activists’ willingness to use digital tools to associate and assemble. For example, citing concerns that hostile nonstate actors would use online information to disrupt or exploit planned gatherings, some Minneapolis residents voluntarily limited live streaming and the sharing of information on social media.8 A photographer and activist in Philadelphia stopped posting to social media amid the protests in June 2020, citing the need to protect demonstrators from police retaliation.9

Platforms have also restricted content related to digital organizing. In June 2020, the civil society group Color of Change said it had collected hundreds of reports within a few weeks that Facebook had restricted or removed Black Lives Matter and antiracist content.10

Despite strong constitutional protections for the freedom to assemble, the International Center for Not-for-Profit Law tracked 98 federal and state initiatives aimed at restricting that right between June 2020 and April 2021; at least one legislative proposal, Mississippi’s SB 2374, was defined broadly enough to apply to online activity.11

C Violations of User Rights

C1 (0-6 pts)
Do the constitution or other laws fail to protect rights such as freedom of expression, access to information, and press freedom, including on the internet, and are they enforced by a judiciary that lacks independence? 6 / 6

The First Amendment of the federal constitution includes protections for free speech and freedom of the press. The Supreme Court has long maintained that online speech has the highest level of constitutional protection.1 In a 2017 decision, the court reaffirmed this position, arguing that to limit a citizen’s access to social media “is to prevent the user from engaging in the legitimate exercise of First Amendment rights.”2 Lower courts have consistently struck down government attempts to regulate online content, with some exceptions for specified illegal material (see B3).

In June 2021, after the coverage period, the Supreme Court ruled in favor of a high school student who was suspended after posting, while not on school grounds, an image on the Snapchat platform that used vulgarities to express frustration with her school and its cheerleading squad.3 The nearly unanimous decision found that the student’s speech was protected under the First Amendment, but the justices did acknowledge some leeway for schools to regulate speech when it is genuinely disruptive in order to deal with bullying and related issues.4

C2 (0-4 pts)
Are there laws that assign criminal penalties or civil liability for online activities, particularly those that are protected under international human rights standards? 2 / 4

Despite significant constitutional safeguards, laws such as the Computer Fraud and Abuse Act (CFAA) of 1986 have sometimes been used to prosecute online activity and impose harsh punishments. Certain states have criminal defamation laws in place, with penalties ranging from fines to imprisonment.1

Instances of aggressive prosecution under the CFAA have fueled criticism of the law’s scope and application. It prohibits accessing a computer without authorization, but fails to define the terms “access” or “without authorization,” leaving the provision open to interpretation in the courts.2

In one prominent case from 2011, programmer and internet activist Aaron Swartz secretly used Massachusetts Institute of Technology servers to download millions of files from JSTOR, a service providing academic articles. Prosecutors brought aggressive charges against Swartz under the CFAA that carried a potential penalty of up to 35 years in prison.3 Swartz died by suicide in 2013 before his trial. After his death, lawmakers introduced “Aaron’s Law,” which would prevent the government from using the CFAA to prosecute terms-of-service violations and stop prosecutors from bringing multiple, redundant charges for a single crime.4 Until recently, reform efforts were largely unsuccessful.5 In April 2020, however, a court narrowed the scope of the CFAA by ruling in favor of researchers who were concerned that their work, which involved scraping data from websites, ran afoul of the law.6

In June 2021, after the coverage period, the Supreme Court further limited the application of the CFAA and clarified the meaning of unauthorized access.7 The case, Van Buren v. United States, involved the conviction of a police officer who had accessed police databases for unofficial purposes.8 An amicus brief filed by several nongovernmental organizations (NGOs) argued for a narrow interpretation of the CFAA, asserting that a lower court’s decision would have broadened the law’s scope and turned it into “an all-purpose mechanism for policing objectionable or simply undesirable behavior.”9

All 50 states have laws that pertain to an array of illegal computer activities such as unauthorized access, hacking, denial of service attacks, and phishing.10

C3 (0-6 pts)
Are individuals penalized for online activities, particularly those that are protected under international human rights standards? 4 / 6

Prosecutions or detentions for online activities are neither frequent nor systematic. However, the coverage period featured a number of cases in which people were arrested, charged, or threatened with criminal charges due to their actions online.

Internet users and online journalists have been investigated, arrested, and charged in connection with the protests against racial injustice that began in May 2020 and extended throughout the coverage period.1 The following were among the more widely reported cases:

  • In June 2020, five people in New Jersey were charged with online harassment, a felony, for a Twitter post seeking to identify a masked police officer. One man was charged for publishing the post, while the other four had simply shared it.2
  • Also in June 2020, a Cincinnati Enquirer reporter was detained by police while covering protests after curfew, despite the fact that journalists qualified as essential workers and were exempt from the restriction.3
  • While reporting for Delaware Online the same month, journalist Jeff Neiburg was detained and forced onto a police bus for transportation out of the area surrounding Philadelphia’s city hall.4
  • Journalist Gustavo Martínez Contreras was assaulted and arrested by police in June 2020 while live-streaming a George Floyd protest in Asbury Park, New Jersey. Charges were later dropped.5
  • In September 2020, social media journalist Chris Katami was arrested while reporting on racial justice protests in Portland, Oregon.6

Smaller protests also led to the arrest of online journalists. As law enforcement officers in Los Angeles were attempting to clear out an encampment of people who are unhoused in March 2021,7 an estimated 13 reporters covering associated protests were detained, including representatives of the digital outlet LA Taco;8 independent photojournalist Ashley Balderrama, who was live-streaming the events on Instagram, was also among those detained.9 During protests in response to the fatal police shooting of Black civilian Daunte Wright in Minnesota in April 2021,10 numerous journalists were arrested or detained,11 including Naasir Akailvi and Tracy Gunapalan of the social media news site the Neighborhood Reporter.12

Some journalists were arrested while covering 2020 election-related protests and the subsequent attack on the Capitol. In November 2020, videographer Vishal Singh was arrested in Los Angeles while live-streaming such protests.13 Two Washington Post reporters working for the paper’s live online news program were temporarily detained as they covered the unrest at the Capitol on January 6, 2021.14

In January 2021, Joshua Andrew Garton of Tennessee was detained for two weeks and charged with harassment for posting a doctored image on social media that depicted two men urinating on the grave of a police officer.15 The charges were later dropped, and Garton sued local and state officials for violating his First Amendment rights.

Police have periodically detained or retaliated against individuals for using their mobile devices to upload images or stream live video of law enforcement activity; those detained typically face charges such as obstruction or resisting arrest.16 In 2017, federal courts upheld the right of bystanders to use their smartphones to record police actions.17

At times, politicians have attempted to use legal cases to identify anonymous critics on the internet. In May 2021, Glenn Davis, a state lawmaker and candidate for lieutenant governor, filed a defamation lawsuit against the sender of a derogatory text message.18 Davis sought both to reveal the identity of the sender and to collect $450,000 in damages. In March 2019, Representative Devin Nunes, a Republican from California, sued Twitter and the users of three anonymous accounts, alleging defamation and seeking $250 million in damages;19 a Virginia judge overseeing the case ruled in June 2020 that Twitter was immune from liability under Section 230 of the Communications Decency Act, though the individual users were not protected by this ruling.20 The New York Times revealed in May 2021 that the DOJ under President Trump had issued a subpoena to Twitter in a bid to identify the user behind one of the anonymous accounts.21

In January 2021, the DOJ charged Douglass Mackey, a far-right social media figure, with election interference after he mounted a disinformation and voter-suppression campaign during the 2016 election period. A lawyer at the National Association for the Advancement of Colored People (NAACP) Legal Defense Fund stated that the case could be the first in which the DOJ has brought criminal charges under civil rights laws for sharing election disinformation over social media.22

C4 (0-4 pts)
Does the government place restrictions on anonymous communication or encryption? 3 / 4

No laws restrict anonymity on the internet, in keeping with constitutional protections for the right to anonymous speech in many other contexts. At least one state law that stipulates journalists’ right to withhold the identities of anonymous sources has been found to apply to bloggers.1

Online anonymity has been challenged in cases involving hate speech, defamation, and libel. In 2015, a Virginia court tried to compel the customer-review platform Yelp to reveal the identities of anonymous users, but the Supreme Court of Virginia ruled that the lower court lacked the authority to enforce the order against the out-of-state company.2 In May 2019, a court ruled that Reddit did not need to reveal the identity of one of its users to a plaintiff who was suing for copyright infringement.3

No legal limitations apply to the use of encryption, but both the executive and legislative branches have at times moved to weaken the technology.4 In October 2020, the DOJ issued a joint statement with the governments of the United Kingdom, Australia, New Zealand, Canada, India, and Japan, calling on Facebook and other tech companies to help enable government access to encrypted messages.5 In the statement, the governments argued that encryption tools create “severe risks to public safety.”6 In June 2019, Politico reported that the Trump administration was considering a legal ban on any encryption technology that would not allow law enforcement access.7

In its original form, the Eliminating Abusive and Rampant Neglect of Interactive Technologies (EARN IT) Act, introduced in the Senate in 2020 alongside a companion bill in the House,8 would have paved the way for a weakening of encryption.9 Although the draft was amended to “exclude encryption” from the bill’s reforms to Section 230,10 advocates and some legal scholars still argued that it could set the stage for restrictions on encryption.11 In May 2020, in response to the perceived shortcomings of the draft EARN IT Act, a group of Democratic senators introduced the Invest in Child Safety Act of 2020, an alternative bill that would maintain existing encryption standards while addressing online child exploitation.12 At the end of the coverage period, the EARN IT Act had not been reintroduced in the 2021–22 Congress, while the Invest in Child Safety Act was reintroduced in April 2021.

The proposed Lawful Access to Encrypted Data (LAED) Act, introduced in the Senate in June 2020 and in the House in July,13 would require the creation of a back door to encryption systems, so that both device manufacturers and service providers could decrypt devices and information at the request of law enforcement agencies.14 Civil society groups, technical experts, and cybersecurity advocates strongly oppose the proposed legislation.15 The bill was not reintroduced in the 2021–22 Congress by the end of the coverage period.

The degree to which courts can force technology companies to alter their products to enable government access under existing law is unclear. The Communications Assistance for Law Enforcement Act (CALEA) of 1994 currently requires telephone companies, broadband providers, and interconnected Voice over Internet Protocol (VoIP) providers to design their systems so that communications can be easily intercepted when government agencies have legal authority to do so, although it does not cover online communication tools such as Gmail, Skype, and Facebook.16

Following a terrorist attack in San Bernardino, California, in 2015, the federal government obtained a court order requiring Apple to help unlock the smartphone of one of the alleged perpetrators.17 Apple resisted, and the case was dropped after the FBI gained access by other means. According to a report from the DOJ’s inspector general, the FBI had paused its own search for a technical solution in order to set up a public legal confrontation with Apple.18 In a separate case, a federal judge ruled in 2016 that CALEA did not allow the government to compel Apple to unlock an iPhone.19 Calls to update CALEA to cover online applications have not been successful. In December 2019, the DOJ asked Apple to unlock two phones used by a gunman who attacked a Navy facility in Florida.20 Although the company chose not to comply, the government announced in May 2020 that it had broken into the phone. In an earlier case in 2013, a judge issued a warrant authorizing the government to seize the secure email provider Lavabit’s private SSL encryption keys and, when the company did not comply, fined it $5,000 a day until it acquiesced.21 The government was ultimately seeking to access the emails of former National Security Agency (NSA) contractor Edward Snowden, who had leaked extensive information on US government surveillance practices.22

The broader legal questions around encryption remain unresolved, and some have called for explicit protections for the technology.23 In June 2018, a bipartisan group of federal lawmakers proposed legislation to block state and local governments from requiring back doors in tech products and services, albeit without success.24 The bill, known as the ENCRYPT Act, was reintroduced in May 2021.25

In June 2021, after the coverage period, the DOJ announced that the FBI had intercepted over 20 million messages on the encrypted platform Anom, which was specifically designed to attract transnational criminal organizations, as part of an elaborate sting operation. The bureau worked with the Australian government and an informant to covertly operate the platform,26 which rerouted messages to an undisclosed country for decrypting.27 Some surveillance law experts have suggested that the FBI worked with an additional country because surveillance of this kind would be unlawful in the United States.28

C5 (0-6 pts)
Does state surveillance of internet activities infringe on users’ right to privacy? 2 / 6

The legal framework for government surveillance in the United States is open to abuse, and authorities have engaged in certain forms of monitoring, particularly on social media, with minimal oversight or transparency.

Federal, state, and local law enforcement bodies have access to a range of tools to monitor social media platforms and share the information they collect with other agencies; such monitoring is often directed at those involved in protests, at times justified by authorities as necessary to prevent or investigate violence.1 During the nationwide demonstrations against racial injustice in 2020,2 DHS and other agencies reportedly accessed and analyzed demonstrators’ private communications through processes that appeared to lack judicial oversight (see B8).3 The Drug Enforcement Administration (DEA) gained new permission in June 2020 to “conduct covert surveillance” on protesters.4 In July 2020, the Intercept reported that certain police agencies received intelligence packages filled with content and data pulled from Twitter by the private company Dataminr.5 Reports have also confirmed that federal agencies or local police departments across the country, including in Pittsburgh, Minneapolis, Los Angeles, and Washington, DC, monitored and collected information from posts, comments, live streams, images, and videos that were shared on Facebook, Instagram, Twitter, and other platforms.6

In November 2020, the Intercept reported that the Austin Regional Intelligence Center, a law enforcement fusion center in Texas, surveilled not only protests on social media but also social gatherings and Black cultural events, including an online Juneteenth celebration.7 The center then developed documents that included organizers’ names or social media information, notable guests, and attendance totals, which were shared with other local, state, and federal law enforcement bodies. In September 2021, the Brennan Center for Justice revealed, via a public records request, that Los Angeles Police Department officers were authorized to collect social media information from any civilian they interviewed.8

In April 2021, Yahoo News reported that the US Postal Service was operating an Internet Covert Operations Program that monitored social media for “inflammatory” content and then shared information across agencies.9 For example, the program monitored Telegram, Parler, Facebook, and other sites ahead of the World Wide Rally for Freedom and Democracy—meant to protest COVID-19 prevention measures—in March 2021. Following the attack on the Capitol on January 6, the DHS announced a new strategy to analyze social media to assess security threats.10

In May 2019, the Department of State enacted a new policy that vastly expanded its collection of social media information.11 It required people applying for a US visa, about 15 million each year, to provide social media details, email addresses, and phone numbers going back five years.12 By the end of the coverage period, the Biden administration had not reversed the policy, though it had initiated a review.13 The administration indicated in May 2021 that it would defend the requirement in response to a lawsuit filed by the Brennan Center, the Knight First Amendment Institute at Columbia University, and the law firm Simpson Thacher & Bartlett LLP on behalf of two nonprofit documentary filmmaker organizations.14 However, in April 2021, the White House’s Office of Information and Regulatory Affairs rejected a DHS request to further expand its social media monitoring of people entering the country, arguing that the department had not proved the information to be helpful.15

The government’s search and seizure powers are generally limited by the constitution’s Fourth Amendment, but federal authorities claim to have much greater leeway to conduct searches without a warrant within “border zones”—defined as up to 100 miles from any border, an area encompassing about 200 million residents. During the 2020 fiscal year, US Customs and Border Protection (CBP) reported 32,038 electronic device searches.16 By the end of March 2021, CBP had reported 17,855 device searches, a 21 percent decrease from the same period a year earlier that may be attributable to COVID-19 travel restrictions. Directive No. 3340-049A, issued in 2018, provides CBP with broad powers to conduct device searches and requires travelers to provide their device passwords to CBP agents.17 CBP is known to have technology from the Israeli company Cellebrite that allows agents to extract information stored on a device or online within seconds.18 This information can then be stored in interagency databases that aggregate data from other monitoring programs.19

Courts remain split on the legality of warrantless device searches at the border.20 In February 2021, a federal appellate court in Boston found the practice constitutional,21 reversing a 2019 district court decision that reasonable suspicion was necessary to justify a search.22 In June 2021, the Supreme Court denied a petition from the American Civil Liberties Union and the Electronic Frontier Foundation to review the appeals court’s decision.23 A federal appeals court in San Francisco had significantly narrowed CBP’s ability to conduct warrantless searches in 2019, limiting it to cases that relate to digital contraband, but the ruling is only binding within that court’s jurisdiction, which includes California and eight other states.24

In January 2021, an immigration lawyer in Texas reported that CBP had confiscated and searched his phone without a warrant when he returned from a trip abroad.25 The Financial Times reported in September 2020 that several Chinese students were pressured to hand over their electronic devices to CBP agents when leaving the United States.26

The legal framework for foreign intelligence surveillance has in practice permitted the collection of data on US citizens and residents. Such surveillance is governed in part by the USA PATRIOT Act, which was passed following the terrorist attacks of September 11, 2001, and expanded official surveillance and investigative powers.27 In 2015, then president Obama signed the USA FREEDOM Act, which extended expiring provisions of the PATRIOT Act, including broad authority for intelligence officials to obtain warrants for roving wiretaps of unnamed “John Doe” targets and surveillance of lone individuals with no evident connection to terrorist groups or foreign powers.28 At the same time, the new legislation was meant to end the government’s bulk collection of domestic call detail records (CDRs)—the metadata associated with telephone interactions—under Section 215 of the 2001 law. The bulk collection program, detailed in documents leaked by Edward Snowden in 2013,29 was ruled illegal by the US Second Circuit Court of Appeals in 2015.30

The USA FREEDOM Act replaced the domestic bulk collection program with a system that allows the NSA to access US call records held by phone companies after obtaining an order from the Foreign Intelligence Surveillance Court, also called the FISA Court in reference to the 1978 Foreign Intelligence Surveillance Act.31 Requests for such access require use of a “specific selection term” (SST) representing an “individual, account, or personal device,”32 a mechanism intended to prevent broad requests for records based on a zip code or other imprecise indicators. The definitions of SSTs vary, however, depending on the authority used, and civil liberties advocates have criticized them as excessively broad.33

Another component of the USA FREEDOM Act established a panel of amici curiae with expertise in “privacy and civil liberties, intelligence collection, communications technology, or any other area that may lend legal or technical expertise” to the FISA Court, so that the judges are not forced to rely on the arguments of the government alone in weighing requests. The court must appoint an amicus in any case that “presents a novel or significant interpretation of the law.” However, the court can waive this requirement by issuing “a finding that such appointment is not appropriate.”34 Five people are currently designated to serve as amici curiae.35 In April 2021, the American Civil Liberties Union and several other parties petitioned the Supreme Court to evaluate whether the public has the right to know about the workings of the FISA Court.36

Although reforms to Section 215 of the PATRIOT Act were supposed to end bulk collection of CDRs, official statistics showed that records were still being acquired in massive numbers.37 In April 2019, the NSA recommended that the White House not seek reauthorization of the program because its operational complexities and legal liabilities outweighed the value of the intelligence gained.38 The Trump administration nevertheless asked Congress to permanently reauthorize the CDR program, even as government watchdogs maintained that the authority was “highly invasive,” lacked evidence of efficacy in protecting the country from security threats,39 and was technically dysfunctional.40

Section 215 ultimately expired in March 2020, after the Senate declined to take up a House-passed reauthorization bill.41 However, a “savings clause” allowed officials to continue using the authority for investigations that had begun before the expiration, or for new examinations of incidents that occurred before that date.42

The Senate passed the draft USA FREEDOM Reauthorization Act in May 2020,43 with an amendment to strengthen the role of amici curiae by giving them greater access to information, granting them new authority to bring matters to the FISA Court, and adding to the categories of cases in which there should be a presumption that amici curiae will participate.44 The House, however, canceled a floor vote on the Senate-passed bill,45 and no further developments occurred during the coverage period.

Other components of the US legal framework allow surveillance by intelligence agencies, but often without adequate oversight, specificity, and transparency. Section 702, adopted in 2008 as part of the FISA Amendments Act, authorizes the NSA, acting inside the United States, to collect the communications of any foreigner overseas as long as a significant purpose of the collection is to obtain “foreign intelligence,” a term broadly defined to include any information that “relates to … the conduct of the foreign affairs of the United States.”46 Section 702 surveillance involves both “downstream” (also known as PRISM) collection, in which stored communications data—including content—is obtained from US technology companies, and “upstream” collection, in which the NSA collects users’ communications as they are in transit over the internet backbone.47 Although Section 702 only authorizes the collection of information pertaining to foreign citizens outside the United States, Americans’ communications are inevitably swept up in this process in large amounts, and these too are stored in a searchable database.48

In 2016, the government notified a FISA Court judge of widespread violations of protocols intended to limit NSA analysts’ access to Americans’ communications.49 The report showed that analysts had failed to take steps to ensure that they were not improperly searching the upstream database when conducting certain types of queries. In response, the court delayed reauthorizing the program, and in 2017 the NSA director recommended that the agency halt its collection of communications if they merely mentioned information relating to a surveillance target (referred to as “about” collection), and instead only collect communications to and from the target.50

Section 702 was reauthorized for six years in January 2018 with few changes.51 The renewal legislation did not prohibit “about” collection, meaning the NSA could legally attempt to resume the practice as long as it obtained the FISA Court’s approval and gave Congress advance notice. The final bill did contain a narrow provision requiring a warrant when FBI agents seek to review the content of communications belonging to an American who is already the subject of a criminal investigation.52 The reauthorization also included measures to increase transparency, such as requiring that the attorney general brief members of Congress on how the government uses information collected under Section 702 in official proceedings such as criminal prosecutions.53

In October 2019, the FISA Court released three opinions in which it found that tens of thousands of Americans had been subject to improper searches by the FBI.54 The court also determined that the FBI had violated the law by not reporting the number of times it conducted “US person queries.”55

In April 2021, the intelligence community released its annual Statistical Transparency Report, which details the frequency with which the government uses certain national security powers.56 The number of Section 702 surveillance targets declined slightly from 204,968 in 2019 to 202,723 in 2020.57 The 2020 Statistical Transparency Report contained evidence of six instances in 2018 in which the FBI reviewed the contents of Americans’ communications after conducting a search in a criminal, non–national security case, but failed to obtain a warrant as required by law.58

Under Title I of FISA,59 the DOJ may obtain a court order to conduct surveillance of Americans or foreigners inside the United States if it can show probable cause to suspect that the target is a foreign power or an agent of a foreign power. In March 2020, the department’s inspector general released a memorandum documenting pervasive errors in previous FISA applications, along with a failure to abide by internal procedures meant to ensure their accuracy.60

Originally issued in 1981, Executive Order (EO) 12333 is the primary authority under which US intelligence agencies gather foreign intelligence; essentially, it governs all collection that is not governed by FISA, and it includes most collection that takes place overseas. The extent of current NSA practices that are authorized under EO 12333 is unclear and potentially overlaps with other surveillance authorizations.61 Although EO 12333 cannot be used to target a “particular, known” US person, the very fact that bulk collection is permissible under the order ensures that Americans’ communications will be incidentally collected, and likely in very significant numbers. Moreover, questions linger as to whether the government relies on EO 12333 to conduct any surveillance inside the United States that would not be subject to judicial oversight.62

In criminal probes, law enforcement authorities can monitor the content of internet communications in real time only if they have obtained an order issued by a judge, under a standard that is somewhat higher than the one established under the constitution for searches of physical places. The order must reflect a finding that there is probable cause to believe a crime has been, is being, or is about to be committed.

For law enforcement purposes, as opposed to intelligence gathering, access to metadata generally requires only a subpoena issued by a prosecutor or investigator, without judicial approval.63 Judicial warrants are required only in California, where the California Electronic Communications Privacy Act (CalECPA), in effect since 2016, has often been described as one of the nation’s strongest privacy laws.64

According to one federal court ruling, the government must obtain a judicial warrant to access stored communications.65 However, the 1986 Electronic Communications Privacy Act (ECPA) states that the government can obtain access to email or other documents stored in the cloud with a subpoena, subject to certain conditions.66 Legislative attempts to further protect the privacy of email and other digital communications have so far been unsuccessful.67 In December 2020, a bipartisan group of lawmakers reintroduced the Email Privacy Act, which would require law enforcement bodies to show probable cause in court before accessing a person’s email.68

Several government agencies, including DHS, have purchased mobile-device data extraction technology.69 An October 2020 report from the nonprofit Upturn revealed that more than 2,000 state and local law enforcement agencies had access to such tools.70 School districts in Texas and California also reportedly have access to Cellebrite and other mobile-device forensic tools, which have been used on students’ phones.71

Several law enforcement agencies have access to cell-site simulators or IMSI (international mobile subscriber identity) catchers—commonly known as “stingrays” after a prominent brand name—that mimic mobile network towers and cause nearby phones to send identifying information; the technology enables police to track targeted phones or determine the phone numbers of people in a given area. In a 2016 decision, a Maryland court rejected the argument that individuals using mobile phones are effectively “volunteering” their private information for use by third parties.72 Several courts have affirmed that police must obtain a warrant before using stingray technology.73 As of November 2018, the American Civil Liberties Union had identified 75 agencies across the country that use stingrays.74 In May 2020, the organization revealed that between 2017 and 2019, US Immigration and Customs Enforcement (ICE) had used stingray or similar devices at least 466 times.75 In November 2020, the California activist group Oakland Privacy won a lawsuit against the City of Vallejo and forced it to hold public hearings on a recently approved measure to purchase the devices.76 These hearings resulted in several privacy-enhancing changes, including new oversight mechanisms and limits on the monitoring of First Amendment–related activities and the sharing of data with immigration authorities.77

C6 (0-6 pts)
Does monitoring and collection of user data by service providers and other technology companies infringe on users’ right to privacy? 4 / 6

There are few legal constraints on the collection, storage, and transfer of data by private or public actors in the United States. ISPs and content hosts collect vast amounts of information about users’ online activities, communications, and preferences. This information can be subject to government requests for access, typically through a subpoena, court order, or search warrant.

In general, the country lacks a robust federal data-protection law, though a number of bills have been proposed.1 In March 2021, Representative Suzan DelBene, a Democrat from Washington, proposed the Information Transparency and Personal Data Control Act, which would preempt many current state-level privacy laws and mandate that companies disclose whether personal consumer data are shared.2 In May 2021, senators introduced the draft Social Media Privacy Protection and Consumer Rights Act, which would allow users to opt out of data tracking and collection.3

Most legislative activity on data privacy has occurred at the state or local level.4 In 2020, at least 30 states and Puerto Rico considered privacy proposals, including both revisions to existing law and new policies.5 The California Consumer Privacy Act (CCPA), adopted in 2018,6 allows Californians to obtain information from businesses in the state about how their personal data are collected, used, and shared.7 In November 2020, California passed the California Privacy Rights Act (CPRA), augmenting and expanding the CCPA.8 Among other powers under the CPRA, consumers will be able to request that personal information held by a business be corrected, opt out of automated decision-making technology, and opt out of certain information sharing.9 In March 2021, Virginia became the second state to pass its own data privacy measures by adopting the Consumer Data Protection Act,10 and Colorado followed suit in July 2021 with the Colorado Privacy Act.11

The USA FREEDOM Act of 2015 changed the way private companies publicly report on certain types of government requests for user information. Prior to the law, the DOJ restricted the disclosure of information about national security letters (secret administrative subpoenas used by the FBI to demand certain types of communications and financial records), including within the transparency reports voluntarily published by some internet companies and service providers.12 In 2014, the department had reached a settlement with Facebook, Google, LinkedIn, Microsoft, and Yahoo that permitted the companies to disclose the approximate number of government requests they receive, using aggregated bands of 250 or 1,000 rather than precise figures.13 The USA FREEDOM Act granted companies the option of more granular reporting, though reports containing more detail are still subject to time delays, and their frequency is limited.14 Separately, the government may request that companies store targeted data for up to 180 days under the 1986 Stored Communications Act (SCA).15

In September 2019, a request under the Freedom of Information Act (FOIA) revealed that the FBI had been accessing personal data through national security letters from a much broader group of entities than previously understood.16 Western Union, Bank of America, Equifax, TransUnion, the University of Alabama at Birmingham, Kansas State University, major ISPs, and tech and social media companies had all received such letters.

In June 2018, the Supreme Court ruled narrowly in Carpenter v. United States that the government is required to obtain a warrant in order to access seven days or more of subscriber location records from mobile providers.17 The ruling also diminished, in a limited way, the third-party doctrine—the idea that Fourth Amendment privacy protections do not extend to most types of information that are handed over voluntarily to third parties, such as telecommunications companies.18

The scope of law enforcement access to user data held by companies was previously expanded under the Clarifying Lawful Overseas Use of Data (CLOUD) Act,19 signed into law in March 2018.20 The act stipulated that law enforcement requests sent to US companies for user data under the SCA would apply to records in the company’s possession regardless of storage location, including overseas. Requests before the law had been limited to user data stored within the jurisdiction of the United States. The CLOUD Act also allows certain foreign governments to enter into an executive agreement with the United States and then petition US companies to hand over user data,21 bypassing the “mutual legal assistance treaty” (MLAT) process.22 In 2019, the United States and the United Kingdom signed the first bilateral data access agreement under the act.23 A coalition of civil society groups expressed concern about the deal,24 and the law more broadly.25 In April 2020, the United States entered into talks with Australia regarding a similar pact.26

Private companies may comply with both legal demands and voluntary requests for user data from the government. In March 2019, the DOJ confirmed that a DEA program had collected billions of phone records from AT&T without a court order.27 Information and communication platforms may also monitor the communications of their users for the purpose of identifying unlawful content to share with law enforcement (see B2). In May 2020, during the unrest that followed the police killing of George Floyd, Minneapolis police obtained a warrant compelling Google to hand over account data for anyone within a specified area of the city.28 In August 2020, two judges in separate opinions ruled that such broad location-based “geofence” warrants violate the Fourth Amendment.29

In May and June 2021, new disclosures revealed that the DOJ under President Trump had secretly obtained the phone records of several Washington Post, Cable News Network (CNN), and New York Times reporters as part of investigations into leaks of classified information.30 Also targeted were members of Congress, staff members, and their families,31 and former White House counsel Don McGahn and his wife.32 In response to these disclosures, the DOJ in June announced that it would no longer secretly collect journalists’ records,33 and Senator Ron Wyden, a Democrat from Oregon, introduced the Protect Reporters from Excessive State Suppression (PRESS) Act, which would create new federal protections for reporters’ phone and email records.34

User information is otherwise protected under Section 5 of the Federal Trade Commission Act (FTCA), which has been interpreted to prohibit internet entities from deceiving users about what types of personal information are being collected from them and how they are used. Laws in 47 states and the District of Columbia also require entities that collect personal information to notify consumers—and, usually, consumer protection agencies—when they suffer a security breach that exposes such information.

Government bodies have reportedly purchased phone location data to aid in investigations and law enforcement.35 In 2020, the Internal Revenue Service (IRS),36 ICE,37 and the Secret Service all reportedly engaged in the practice.38 Vice News reported in November 2020 that US military agencies tasked with counterterrorism initiatives had contracted a third-party data broker to provide personal information from a popular Muslim prayer and Quran application.39 In January 2021, the New York Times reported that the Defense Intelligence Agency had also been purchasing commercial databases of user location data.40 In April 2021, 20 senators introduced the Fourth Amendment Is Not For Sale Act to address the issue.41 The bill would prohibit law enforcement and intelligence agencies from buying sensitive personal information like geolocation data from private companies.

Lawmakers and federal agencies scrutinized the data collection practices of major technology platforms during the coverage period. For example, in December 2020, the Federal Trade Commission ordered nine companies, including Facebook, Amazon, YouTube, and Twitter, to disclose information about how they gather and use personal data.42

C7 (0-5 pts)
Are individuals subject to extralegal intimidation or physical violence by state authorities or any other actor in relation to their online activities? 3 / 5

Internet users generally are not subject to extralegal intimidation or violence by state actors. However, online journalists are at times exposed to physical violence or intimidation by police, particularly while covering protests. Women and members of marginalized racial, ethnic, and religious groups are often singled out for threats and harassment by other users online. A 2021 report from the Pew Research Center found that 41 percent of adults in the United States have experienced online harassment, with 33 percent of women under 35 reporting that they have faced sexual harassment online.1

During the 2020 presidential election period, election officials and their families were subject to online harassment and threats in relation to their work.2 A website and associated social media accounts that were created in December 2020 accused US election officials of treason and posted incendiary photographs and their addresses.3 QAnon followers harassed a contractor for Dominion Voting Systems and his family online and alleged that voting data were tampered with.4 The wife and sister of Minnesota secretary of state Steve Simon were identified and harassed on social media because of Simon’s public role as an election administrator.5 Arizona secretary of state Katie Hobbs faced threats of violence and attacks on social media.6 Steve Trout, Oregon’s state election director, also reported being harassed via phone and on social media by people who accused him of fraud.7

Numerous online journalists were physically assaulted by police while covering racial justice protests in 2020, despite making it clear that they were members of the press.8 Others were assaulted by civilians at the protests. In one case in October 2020, independent video journalist Hiram Gilberto Garcia was assaulted and had equipment damaged while he was live-streaming a demonstration in Austin, Texas.9

The online harassment, threats, and at times physical attacks associated with the 2020 protests against racial injustice were not isolated incidents. Researcher Dragana Kaurin interviewed people who had recorded and shared high-profile videos of violent arrests and police killings of Black Americans—including Freddie Gray, Eric Garner, Walter Scott, Philando Castile, and Alton Sterling—over several years. Kaurin documented numerous reports of police retaliation, harassment, physical violence, doxing, and other forms of intimidation aimed at deterring community members from sharing evidence of police brutality.10

Instances of retaliation against the press also occurred during the January 6, 2021, attack on the US Capitol. Journalist Vincent Jolly had his phone stolen and destroyed by a participant in the attack.11 Two Vice News reporters were also assaulted and had equipment damaged while recording the events.12

Former president Trump has directly contributed to online harassment and intimidation,13 and those who spoke out against his administration were often targeted for harassment by his supporters.14 An analysis of Trump’s Twitter account by the US Press Freedom Tracker found nearly 2,000 posts from 2015 to April 2020 that used inflammatory language toward news outlets and individual journalists.15 In May 2020, after Twitter fact-checked his posts about mail-in voting, Trump singled out a company employee in another post. The employee then received a barrage of harassing messages, including from members of the president’s reelection campaign.16 In response to clashes between protesters and police during the racial justice demonstrations in the summer of 2020, Trump issued a series of threatening posts, with one including a warning that “when the looting starts, the shooting starts.”17

In general, online harassment and threats, including doxing, disproportionately affect women and other members of marginalized groups.18 In 2021, the Wilson Center studied gendered and sexualized disinformation aimed at women politicians,19 and concluded that such practices are widespread and often incorporate race-based ad hominem attacks. In a 2019 survey of 115 female and gender-nonconforming journalists in the United States and Canada, the Committee to Protect Journalists found that 90 percent of US respondents cited online harassment as the “biggest threat” to safety associated with their jobs.20 A December 2018 Amnesty International study of abuse targeting female journalists and politicians on Twitter found that Black women were 84 percent more likely to be mentioned in abusive posts than White women.21

Harassment of women journalists offline and in broadcast media has inspired online harassment, doxing, and even death threats.22 For instance, after Fox News host Tucker Carlson disparaged NBC reporter Brandy Zadrozny on his show, Zadrozny received such severe and specific threats online that she required armed security for two weeks. Other women journalists have faced online abuse after sharing their experiences with harassment.23

Online harassment against Asian Americans, in particular, grew more prominent during the coverage period. Researchers at the Network Contagion Research Institute found that anti-Asian terms were used on social media 44 percent more often in January 2021 than in the average month of 2020.24 One scholar also documented how anti-Asian rhetoric spiked online after then president Trump used the hashtag #chinesevirus on Twitter.25 In a March 2021 report, the Anti-Defamation League found that more than a third of Asian Americans had experienced online hate and harassment, constituting “the largest single year-over-year rise in severe online harassment in comparison to other groups.”26

C8 (0-3 pts)
Are websites, governmental and private entities, service providers, or individual users subject to widespread hacking and other forms of cyberattack? 1 / 3

Cyberattacks pose an ongoing threat to the security of networks and databases in the United States. Civil society groups, journalists, and politicians have also been subjected to targeted technical attacks.

Foreign actors have launched cyberattacks aimed at US infrastructure. In one of the largest and most sophisticated attacks in recent years, SolarWinds, a prominent information technology company, was compromised by an extensive infiltration attributed to the Russian government; the breach was first reported publicly in December 2020.1 The attackers used SolarWinds as a vehicle to penetrate federal government agencies, private-sector networks, think tanks, and civil society organizations, as the company’s compromised software updates were installed by more than 18,000 customers.2 The Biden administration responded with sanctions against the Russian government in April 2021.3 In May 2021, hackers suspected of affiliation with DarkSide, a criminal group believed to operate from Russia, carried out a ransomware attack on the Colonial Pipeline, one of the country’s largest conduits for gasoline supplies, disrupting fuel delivery to significant portions of the East Coast.4 In response, President Biden issued an executive order designed to bolster federal cybersecurity networks.5 Also in May, the New York Times reported that a Russian intelligence service had hacked the US Agency for International Development’s email systems.6

Ahead of the US general elections in November 2020, Microsoft announced in September that a hacking unit associated with Russian military intelligence had targeted at least 200 organizations, including national and state political parties and political consultants. Iranian and Chinese hackers also targeted people associated with Trump’s and Biden’s presidential campaigns.7

In December 2020, the US Cybersecurity and Infrastructure Security Agency and the FBI warned US think tanks whose work focused on national security and international affairs that state-sponsored hacking groups were seeking to break into their systems.8 Separately, in March 2021, the federal government revealed information about a cyberattack affecting more than 30,000 public and private entities that was later attributed to the Chinese government.9

In June 2020, the Toronto-based research center Citizen Lab revealed that Dark Basin, a hack-for-hire group, had used phishing and other attacks against US NGOs working on issues related to net neutrality and a climate-change campaign called #ExxonKnew.10 Several journalists from major news outlets similarly faced technical attacks emanating from the group. From late May 2020, after the police killing of George Floyd, through the beginning of June, cyberattacks against advocacy groups increased 1,120-fold, including distributed denial-of-service (DDoS) attacks against projects meant to raise bail funds for jailed protesters.11

Cyberattacks against state and local governments are increasingly common. According to one analysis, between 2017 and August 2020, cyberattacks on state, local, tribal, and territorial governments rose by an average of nearly 50 percent.12 In a March 2021 report, the nonprofit K12 Security Information Exchange concluded that 2020 was a “record-breaking” year for technical attacks on education systems.13

State and federal governments have launched a series of legal and policy initiatives to address the growing threat of cyberattacks. In May 2020, the Senate Commerce Committee approved the Cybersecurity Competitions to Yield Better Efforts to Research the Latest Exceptionally Advanced Problems (CYBER LEAP) Act of 2020. The legislation would establish incentives to develop innovative practices and technology related to the economics of cyberattacks, cyber training, and federal agency resilience to cyberattacks.14 At the end of the coverage period, the draft measure had not yet been reintroduced in the 2021–22 Congress. In 2020, at least 20 states passed cybersecurity bills, including provisions that increased penalties for cybercrimes, created advisory bodies to provide expert guidance on security issues, and offered support for training and education programs.15

On United States

  • Global Freedom Score: 83/100 (Free)
  • Internet Freedom Score: 76/100 (Free)
  • Freedom in the World Status: Free
  • Networks Restricted: No
  • Websites Blocked: No
  • Pro-government Commentators: No
  • Users Arrested: Yes