United Kingdom

Status: Free
Overall Score: 79/100

A Obstacles to Access: 24/25
B Limits on Content: 30/35
C Violations of User Rights: 25/40

Last Year's Score & Status: 79/100, Free

Scores are based on a scale of 0 (least free) to 100 (most free). See the research methodology and report acknowledgements.

Overview

The internet remained free for users in the United Kingdom (UK), with widespread access and few major constraints on content. In line with its European counterparts, the government continued to block two Russia-backed media outlets in an effort to restrict access to their content. Parliament moved towards passage of the Online Safety Bill, a sweeping regulation that would create new obligations for platforms to remove illegal and certain “harmful” content. Policymakers also considered changes to the country’s data protection framework.

The UK—which includes the constituent countries of England, Scotland, and Wales along with the territory of Northern Ireland—is a stable democracy that regularly holds free elections and is home to a vibrant media sector. While the government enforces robust protections for political rights and civil liberties, recent years have seen concerns about increased government surveillance of residents, as well as rising Islamophobia and anti-immigrant sentiment.

Key Developments, June 1, 2022 - May 31, 2023

  • The websites of the Russia-backed outlets Sputnik News and RT both continued to present signs of blocking in the UK (see B1).
  • Lawmakers neared passage of the Online Safety Bill, and it ultimately passed in September 2023, after the coverage period. The law establishes new duties for online services to proactively identify and remove illegal and certain “harmful” content from their platforms, establishing sanctions for noncompliance and generating concerns about protections for anonymity and encryption for UK users (see B3, B6, and C4).
  • The government presented a series of proposals that would modify the country’s data protection framework and depart from the General Data Protection Regulation (GDPR) of the European Union (EU). The most recent, the Data Protection and Digital Information (No. 2) Bill, remained under consideration at the end of the coverage period (see C6).
  • Private businesses—particularly large British companies—remained under threat from cyberattacks, and in 2023 government statistics indicated that almost 70 percent of large businesses had been the victim of an attack within the past year (see C8).

A Obstacles to Access

A1: 6 of 6 points
Do infrastructural limitations restrict access to the internet or the speed and quality of internet connections?

Access to the internet is considered a key element of societal and democratic participation in the UK. Broadband access is almost ubiquitous, and nearly 100 percent of households are within range of asymmetric digital subscriber line (ADSL) connections. All national mobile service providers offer fourth-generation (4G) network technology, and the four largest—EE, O2, Vodafone, and Three—offer fifth-generation (5G) service.

The Digital Economy Act 2017 obliges service providers to offer minimum connection speeds of 10 megabits per second (Mbps).1 In 2022, the proportion of “superfast” home broadband connections, with advertised download speeds of at least 30 Mbps, increased to 91 percent, up from 85 percent the year prior. The proportion of lines with an advertised download speed of 300 Mbps or more increased from 5 percent to 8 percent.2 Fiber-optic coverage was available to 42 percent of UK homes as of December 2022, a 14 percentage point increase since 2021.3

Mobile telephone penetration is extensive. As of December 2022, Ofcom, the primary telecommunications regulator, estimated that outdoor 5G connectivity was available from at least one mobile service provider at 67 to 77 percent of premises.4 The country’s four major service providers report that they provide 4G outdoor coverage to approximately 99 percent of premises and 80 to 87 percent of the country’s landmass.

The government’s UK Wireless Infrastructure Strategy, published in April 2023, set the goal of providing standalone 5G service—which does not rely on existing 4G long-term evolution (LTE) infrastructure—to all populated areas of the country by 2030.5

In July 2020, the government banned the purchase of 5G technology from the Chinese telecommunications company Huawei beginning in 2021 and ordered existing Huawei equipment removed by the end of 2027 due to security concerns (see A4).6 Major service providers, including EE, Three, and Vodafone, all use Huawei equipment in their 5G infrastructure; the cost of removing that equipment was estimated at over £500 million ($600 million) in 2020.7 In October 2022, the government issued legal notices to 35 service providers reiterating these mandates, including the deadline to remove all Huawei technology from 5G public networks. The order sets incremental deadlines, including a requirement that Huawei technology carry no more than 35 percent of the UK’s 5G network traffic by July 31, 2023.8

A2: 3 of 3 points
Is access to the internet prohibitively expensive or beyond the reach of certain segments of the population for geographical, social, or other reasons?

Internet access continues to expand, gradually reducing regional and demographic disparities.

The UK provides a competitive market for internet access, and prices for communications services compare favorably with those in other countries. The Economist’s Inclusive Internet Index 2022 ranks the UK fifth overall out of 100 countries surveyed, and first for affordability, defined by cost of access relative to income and the level of internet-market competition.1

According to analytics company Cable, the average cost of one gigabyte (GB) of mobile data in 2022 was £0.67 ($0.80).2 In December 2022, Ofcom reported that the average monthly price for mobile service (excluding handset cost) had dropped by 12 percent in real terms since 2020, based on the cost of a basket of mobile services with average use. In the third quarter of 2022, mobile prices in the UK were lower than those in the five peer countries that Ofcom analyzed: France, Germany, Italy, Spain, and the United States.3 However, Ofcom also noted that high inflation helped drive sizable price increases in 2022 for some users. Ofcom estimated that 29 percent of UK households were struggling to afford a communications service in April 2023, though when services were assessed individually, only 9 percent of households struggled to afford fixed-line broadband and 7 percent struggled to afford mobile service.4

Meanwhile, the average monthly broadband cost in 2023 was £28.42 ($34.08).5 However, several fixed-line broadband providers offer low-cost packages, including social tariffs that cost between £12 ($14) and £20 ($24) per month.6 Ofcom has warned that eligible customers may not be aware of social tariffs, so take-up of the affordable packages remained low, at 3.2 percent of eligible households as of August 2022.7

Despite a number of positive trends in the UK, 6 percent of households remain offline. In an April 2021 report, Ofcom noted that 11 percent of lower-income households and 10 percent of the most financially vulnerable lacked access.8

In October 2021, the UK government announced the £5 billion ($6 billion) Project Gigabit to bring faster and more reliable high-speed services to 570,000 rural premises.9 The government issued a progress update in November 2022, reporting that gigabit-capable broadband—that is, service at speeds of more than 1000 Mbps—was available at more than 72 percent of premises in the UK, up from 6 percent in early 2019.10 This figure increased to almost 76 percent of premises by May 2023, according to ThinkBroadband.11

According to 2020 data from the Office for National Statistics, the most recent official information available, virtually all residents between the ages of 16 and 44 are internet users, with most accessing the internet through their mobile device.12 There was almost no gender gap in internet use as of 2020: 93 percent of men used the internet, compared with 91 percent of women.13

A3: 6 of 6 points
Does the government exercise technical or legal control over internet infrastructure for the purposes of restricting connectivity?

The government does not exercise control over the internet infrastructure and does not routinely restrict connectivity.

The government does not place limits on the amount of bandwidth internet service providers (ISPs) can supply, and the use of internet infrastructure is not subject to direct government control. ISPs regularly engage in traffic shaping or slowdowns of certain services, such as peer-to-peer (P2P) file sharing and television streaming. Mobile service providers have previously cut back on unlimited access packages for smartphones, reportedly because of network congestion concerns.

A4: 5 of 6 points
Are there legal, regulatory, or economic obstacles that restrict the diversity of service providers?

There are few obstacles to the establishment of service providers in the UK, allowing for a competitive market that benefits users.

Major ISPs, by percentage of household users, include BT (formerly British Telecom) with 24 percent, Sky with 21 percent, Virgin Media with 17 percent, and TalkTalk with 8 percent; other ISPs constitute the remaining 30 percent.1 Ofcom continues to use regulations to promote the unbundling of services so that incumbent owners of infrastructure invest in their networks while also allowing competitors to make use of them.2

ISPs are required to obtain a license from Ofcom only for use of the radioelectric spectrum, such as for mobile internet.3 ISPs that do not use the spectrum are not subject to licensing, but they must comply with general conditions set by Ofcom, such as having a recognized code of practice and being a member of a recognized alternative dispute-resolution scheme.4

Among mobile service providers, EE, which has been owned by BT since 2016, leads the market with 26 percent of subscribers, followed by O2 with 19 percent, Vodafone with 15 percent, Three with 9 percent, and Tesco with 7 percent.5 Mobile virtual network operators like Tesco provide service using the infrastructure owned by one of the other companies.

The Telecommunications (Security) Act 2021, which received royal assent in November 2021 and amends the Communications Act 2003, places stronger legal obligations on telecommunications service providers to identify and reduce the risk of cybersecurity breaches and prepare for their occurrence.6 The law empowers the government to use secondary legislation to regulate and issue codes of practice for service providers in pursuit of these goals. Providers that do not comply could face sanctions of up to 10 percent of their global turnover. Consultation on draft regulations and an associated draft code of practice closed in May 2022.7 Following the consultation process, the Electronic Communications (Security Measures) Regulations 2022 came into force in October 2022, and the accompanying Telecommunications Security Code of Practice was issued in December 2022, providing guidance for complying with the regulations.8 The earliest implementation date for the most straightforward and least resource-intensive measures is March 31, 2024, meaning that the full consequences of the regulations were not observed during the coverage period. The government previously extended this deadline by one year in response to concerns from service providers that implementing the security measures on a short timeframe would be onerous.9

The Telecommunications (Security) Act 2021 also allows the government to issue Designated Vendor Directions (DVDs) regarding high-risk vendors that are deemed threats to national security. The government produced a DVD for Huawei, for instance, when it banned the purchase of Huawei equipment and mandated its eventual removal by service providers (see A1). The act legally enshrines these measures and has been criticized by some legal scholars for potentially limiting market diversity.10

A5: 4 of 4 points
Do national regulatory bodies that oversee service providers and digital technology fail to operate in a free, fair, and independent manner?

The various entities responsible for regulating internet service and content in the UK generally operate impartially and transparently.

Ofcom, the primary telecommunications regulator, is an independent statutory body. It has broadly defined responsibility for the needs of “citizens” and “consumers” regarding “communications matters” under the Communications Act 2003.1 It is overseen by Parliament and also regulates the broadcasting and postal sectors.2 Ofcom has some authority to regulate content with implications for the internet, such as video-on-demand content.3 Ofcom will also enforce the Online Safety Bill, which remained pending in Parliament at the end of the coverage period (see B3, B6, and C4).4 The appointment of Michael Grade as the Ofcom chair sparked controversy during the previous coverage period.5 Grade, a former British Broadcasting Corporation (BBC) board chairman, was confirmed as Ofcom’s chairman in April 2022 and began his four-year term in May. Politicians and civil society members have questioned his independence from the government and ruling Conservative Party, as well as his expertise.6

Nominet, a nonprofit company operating in the public interest, manages access to the .uk, .wales, and .cymru country domains.

Other groups regulate services and content through voluntary ethical codes or coregulatory rules under independent oversight. In 2012, major ISPs published a Voluntary Code of Practice in Support of the Open Internet, which commits ISPs to transparency and confirms that traffic management practices will not be used to target and degrade competitors’ services.7 Amendments to the code clarify that signatories could deploy content filtering for public Wi-Fi access.8 Ofcom also maintains voluntary codes of practice related to internet speed provision, dispute resolution, and the sale and promotion of internet services.9

Criminal online content is addressed by the Internet Watch Foundation (IWF), an independent self-regulatory body funded by Nominet and industry associations (see B3).10 The Advertising Standards Authority and the Independent Press Standards Organisation regulate newspaper websites.

The Digital Regulation Cooperation Forum (DRCF), formed in July 2020 by the Competition and Markets Authority (CMA), the Information Commissioner’s Office (ICO), and Ofcom, and later joined by the Financial Conduct Authority, was created to promote greater cooperation between entities on online regulatory matters.11

B Limits on Content

B1: 4 of 6 points
Does the state block or filter, or compel service providers to block or filter, internet content, particularly material that is protected by international human rights standards?

Blocking generally does not affect political and journalistic content or other internationally protected forms of online expression. Service providers block and filter some content that falls into one of three categories: copyright infringement, promotion of terrorism, and depiction of child sexual abuse. Optional filtering can be applied to additional content, particularly material that is considered unsuitable for children.

According to measurements conducted by the Open Observatory of Network Interference (OONI), Sputnik and RT both showed signs of blocking around March 2022 in several European countries, including the UK. Both sites continued to present signs of blocking in the UK through the end of the current coverage period.1
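OONI’s actual tests are more involved than a simple fetch: they compare DNS answers, TCP connections, TLS handshakes, and HTTP responses against control measurements taken from unfiltered networks. The Python sketch below is only a simplified illustration of probing a URL and classifying coarse failure modes that can signal interference; the target URLs are the outlets named above, and everything else is illustrative.

    # Illustrative probe for coarse signs of blocking; not OONI's methodology.
    # OONI additionally compares DNS answers, TCP connects, and TLS handshakes
    # against control measurements taken from unfiltered networks.
    import requests

    TARGETS = ["https://www.rt.com/", "https://sputniknews.com/"]

    def probe(url: str, timeout: float = 10.0) -> str:
        """Fetch a URL and describe the outcome in coarse terms."""
        try:
            response = requests.get(url, timeout=timeout)
        except requests.exceptions.ConnectionError:
            return "connection failed (could indicate DNS or TCP interference)"
        except requests.exceptions.Timeout:
            return "timed out (could indicate a silent packet drop)"
        return f"reachable: HTTP {response.status_code}, {len(response.content)} bytes"

    for target in TARGETS:
        print(target, "->", probe(target))

A single failed request is not proof of blocking, since a timeout can also reflect ordinary network failure; that is why real measurement depends on control comparisons from unblocked vantage points.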

In October 2019, the government dropped plans for automated age verification for online pornography, which had been set to enter into force in April 2019,2 after deeming the system technically infeasible.3 The Digital Economy Act 2017 includes provisions that allow blocking of “extreme” pornographic material, setting standards that critics said were poorly defined and could be unevenly applied.4 In February 2022, the government announced that the legal duty requiring age-verification controls for online pornography, with the threat of blocking in cases of noncompliance, would be introduced to the draft Online Safety Bill (see B3, B6, and C4).5 These duties remained in place in the draft bill under consideration at the end of the coverage period.6

ISPs are required to block domains and URLs found to be hosting material that infringes copyright when ordered to do so by the High Court (see B3).7

Overseas-based URLs hosting content that police have reported for violating the Terrorism Act 2006, which prohibits the glorification or promotion of terrorism, are included in the optional child filters supplied by many ISPs.8 “Public estates” like schools and libraries also block such URLs.9 The content can still be accessed on private devices.10

ISPs block URLs containing photographic or computer-generated depictions of child sexual abuse or criminally obscene adult content in accordance with the Internet Services Providers’ Association’s voluntary code of practice (see A5).11 Mobile service providers also block URLs identified by the IWF as containing such content.

All mobile service providers and some ISPs that provide home service filter legal content that is considered unsuitable for children.12 Mobile service providers enable these filters by default, requiring customers to prove that they are over the age of 18 to access the unfiltered internet.13 Content considered suitable only for adults includes the promotion of illegal drugs, sex education, and discriminatory language. Website owners can check whether their sites are filtered under one or more categories, or report overblocking, by emailing the industry-backed nonprofit group Internet Matters,14 though the process and timeframe for correcting mistakes vary by provider.

These optional filters can affect a range of legitimate content pertaining to public health, LGBT+ topics, drug awareness, and even information published by civil society groups and political parties (see B3).15 A 2014 Ofcom report found that ISPs include “proxy sites, whose primary purpose is to bypass filters or increase user anonymity, as part of their standard blocking lists.”16 For instance, the proxy website anonymouse.org was blocked on certain networks during the coverage period.17

Blocked, a site operated by the Open Rights Group, allows users to test the accessibility of websites and report excessive blocking and optional filtering by both home broadband and mobile internet providers.18 As of March 2021, more than 775,000 sites were reported blocked or filtered, more than 21,000 of which were suspected to be blocked inadvertently.19 They included sites related to advice for abuse victims, addiction counseling, LGBT+ subjects, and school websites.20

B2: 3 of 4 points
Do state or nonstate actors employ legal, administrative, or other means to force publishers, content hosts, or digital platforms to delete content, particularly material that is protected by international human rights standards?

Political, social, and cultural content is generally not subject to forced removal, though excessive enforcement of rules against illegal content can affect protected speech (see B1). The government continues to develop regulations that would compel platforms to restrict content that is deemed harmful to children, but not necessarily illegal (see B3, B6, and C4).

In March 2022, after the EU banned broadcasts, sharing of social media content, and app downloads from RT and Sputnik, then culture secretary Nadine Dorries requested that Facebook, Twitter, and TikTok block access to the outlets’ content in the UK as well (see B1).1 Meta reported that it would restrict access to both outlets across the UK.2 Ofcom revoked RT’s broadcasting license later that month.3

The Terrorism Act calls for the removal of online material hosted in the UK if it “glorifies or praises” terrorism, could be useful to terrorists, or incites people to carry out or support terrorism. As of April 2019, the police’s Counter-Terrorism Internet Referral Unit (CTIRU), which compiles lists of such content, reported that 310,000 pieces of material had been taken down since 2010.4

When child sexual abuse images or criminally obscene adult materials are hosted on servers in the UK, the IWF coordinates with police and local hosting companies to have them taken down. When content is hosted on servers overseas, the IWF coordinates with international hotlines and police to have the offending content taken down in the host country.5 Similar processes exist under the oversight of True Vision, a site that is managed by the National Police Chiefs’ Council (NPCC), for the investigation of online materials that incite hatred.6

In 2019, the European Court of Justice (ECJ) ruled that search engines do not have to apply the right to be forgotten—the removal of links from search results at the request of individuals if the stories in question are deemed to be inadequate or irrelevant—for all global users after receiving an appropriate request to do so in Europe.7 In April 2018, the UK’s High Court ordered Google to delist search results about a businessman’s past criminal conviction in its first decision on the right to be forgotten. In another case, the court rejected a similar claim made by a businessman who was sentenced for a more serious crime.8

Despite ending membership in the EU, the British government and data protection regulator, the Information Commissioner’s Office (ICO), committed to implementing the EU’s GDPR,9 which came into force in May 2018 (see C6). The right to be forgotten, along with other rights enshrined in the GDPR, will continue to apply in the UK under the UK GDPR.10

B3: 3 of 4 points
Do restrictions on the internet and digital content lack transparency, proportionality to the stated aims, or an independent appeals process?

The regulatory framework and procedures for managing online content are largely proportional, transparent, and open to correction. However, the optional filtering systems operated by ISPs and mobile service providers—particularly those meant to block material that is unsuitable for children—have been criticized for a lack of transparency, inconsistency between providers, and excessive application that affects legitimate content.

In May 2021, the government published the Online Safety Bill (see B6 and C4), which proposes a new regulatory framework to compel search engines and online platforms to address and remove illegal and certain harmful content under the statutory duties of care, defined as an obligation “to moderate user-generated content in a way that prevents users from being exposed to illegal and harmful content online.”1 A revised draft was introduced in the House of Commons in March 2022 after a period of parliamentary scrutiny.2 The House of Commons completed its third reading of the bill in January 2023, and the House of Lords began scrutinizing it the same month and continued through the end of the coverage period.3 Parliament passed the Online Safety Bill in September 2023, after the coverage period.4

The bill, as considered during the coverage period, would apply to illegal content and “content that is harmful to children.” Provisions that would have mandated the removal of content that is “legal but harmful” to adults were dropped from the draft bill in November 2022.5 Illegal content includes child sexual exploitation and abuse (CSEA), terrorist content, and additional content specified in Schedule 7 of the bill. The bill broadly defines “content that is harmful to children” as content that presents a “material risk of significant harm to an appreciable number of [users] in the United Kingdom.”6 While the bill does not expressly require the use of automated content-moderation tools, Ofcom may order online services to use “accredited technology” to remove content related to terrorism or CSEA, and to “swiftly take down content.” The revised bill grants Ofcom the power to issue relevant notices “when necessary and proportionate.”7 Such provisions have the potential to undermine end-to-end encryption (see C4).

The proposed legislation targets services including search engines and “user-to-user” services, defined as internet services that host user-generated content or facilitate public or private interaction between at least two people.8 After provisions related to “legal but harmful” content were dropped from the bill, certain platforms designated as “Category 1” services, based on their function and number of users, would instead be required to provide optional “user empowerment” tools; these tools would allow adult users to filter certain harmful content, including content that promotes self-harm or incites hatred.9 Those services would also be required to protect “journalistic content,” which seemingly includes content created by independent journalists, and “content of democratic importance,” broadly defined as content that “appears to be specifically intended to contribute to democratic political debate in the United Kingdom or a part or area of the United Kingdom.”10 Protections for “journalistic content” would include an expedited content-removal appeals process for journalists.11 A proposed change to the bill announced in July 2022 would bar platforms from removing news content until an appeals process concludes.12

For noncompliant services, Ofcom would be empowered to issue notices and fines of up to £18 million ($22 million) or 10 percent of global turnover, whichever is higher. In a December 2020 publication, the government noted that these fines would make it “less commercially viable” for services to operate in the UK, thus forcing companies to comply.13 Additionally, Ofcom would have the ability to request that a court order an interim or permanent suspension of service for noncompliant platforms.14 If the suspension orders are deemed ineffective, Ofcom would then be empowered to petition a court for interim and long-term access-restriction orders.15 Ofcom would also be responsible for drafting relevant codes of practice for compliance with duties established in the bill.16 In January 2023, the government introduced a new criminal offense for senior tech executives and managers who fail to comply with Ofcom’s requirements in relation to child safety duties.17

In June 2023, after the coverage period, several amendments to the Online Safety Bill remained pending while the bill was under consideration in the House of Lords.18

Under the Digital Economy Act 2017, ISPs are legally empowered to use both blocking and filtering methods, if allowed by their terms and conditions of use.19 Civil society groups have criticized the default filters used by ISPs and mobile service providers to review content deemed unsuitable for children, arguing that they lack transparency and affect too much legitimate content, which makes it difficult for consumers to make informed choices and for content owners to appeal wrongful blocking (see B1).

ISPs block URLs using content-filtering technology known as Cleanfeed, which was developed by BT in 2004. The process involves deep packet inspection (DPI), a granular method of monitoring traffic that enables blocking of individual URLs rather than entire domains.
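Cleanfeed’s design is not public, so the sketch below is only a conceptual illustration of the distinction the paragraph draws: URL-level matching, which DPI makes possible, can block a single page while the rest of its domain stays reachable, whereas domain-level blocking drops everything on a host. All blocklist entries are hypothetical.

    # Conceptual contrast between domain-level and URL-level (DPI-style)
    # blocking; all blocklist entries are hypothetical.
    from urllib.parse import urlparse

    DOMAIN_BLOCKLIST = {"blocked-host.example"}          # blocks an entire site
    URL_BLOCKLIST = {"http://example.com/one-bad-page"}  # blocks a single page

    def is_blocked(url: str) -> bool:
        host = urlparse(url).hostname
        if host in DOMAIN_BLOCKLIST:
            return True  # coarse: every page on the host is unreachable
        return url in URL_BLOCKLIST  # granular: only the listed URL is affected

    assert is_blocked("http://example.com/one-bad-page")        # URL match
    assert not is_blocked("http://example.com/another-page")    # same host, allowed
    assert is_blocked("http://blocked-host.example/any-page")   # domain match

URL-level precision is what requires inspecting full request paths rather than just IP addresses or hostnames, which is why this filtering approach depends on DPI.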

ISPs are notified about websites hosting content that has been determined to violate or potentially violate UK law under at least three different procedures. The IWF compiles and distributes a list of specific URLs containing photographic or computer-generated depictions of CSEA or criminally obscene adult content to ISPs;20 the CTIRU compiles an unpublished list of URLs hosted overseas that contain material considered to glorify or incite terrorism under the Terrorism Act 2006, which are filtered on public-sector networks;21 and the High Court can order ISPs to block websites found to be infringing copyright under the Copyright, Designs, and Patents Act 1988.22 Copyright-related blocking has been criticized for its inefficiency and opacity.23

In some cases, mobile service providers’ filtering activity may be outsourced to third-party contractors, further limiting transparency.24 Child-protection filters are enabled by default in mobile internet browsers, though users can disable them by verifying that they are over the age of 18. Mobile virtual network operators are believed to be capable of using their parent service’s filtering infrastructure.25 O2 allows its users to check how a particular site has been classified.26 The filtering is based on a classification framework for mobile content published by the British Board of Film Classification (BBFC), the designated age-verification regulator.27 The BBFC adjudicates appeals from content owners and publishes the results quarterly.28

Website owners and companies that knowingly host illicit material and fail to remove it may be held liable, even if the content was created by users—an intermediary liability regime the British government has continued to uphold from EU Directive 2000/31/EC (the E-Commerce Directive).29 Updates to the Defamation Act, effective since 2014, limit companies’ liability for user-generated content that is considered defamatory. However, the Defamation Act only offers protection from private libel suits based on third-party postings if the plaintiff is able to identify the user responsible for the allegedly defamatory content.30 The act does not specify what sort of information the website operator must provide to plaintiffs, but it raised concerns that websites would register users and restrict anonymity in order to avoid civil liability.31

B4: 3 of 4 points
Do online journalists, commentators, and ordinary users practice self-censorship?

Self-censorship, though difficult to assess, is not understood to be a widespread problem in the UK. However, due to factors including the government’s extensive surveillance practices, it appears likely that some users censor themselves when discussing sensitive topics to avoid potential government intervention or other repercussions (see C5).1

In March 2023, BBC soccer broadcaster Gary Lineker was temporarily suspended from his broadcasting duties after he criticized the government’s asylum policies in a Twitter post.2 Some considered the BBC’s decision to suspend Lineker—which cited impartiality guidelines—to be an attack on free expression.3

B5: 4 of 4 points
Are online sources of information controlled or manipulated by the government or other powerful actors to advance a particular political interest?

Concerns about content manipulation increased between 2016 and 2020 amid Brexit and general elections in 2017 and 2019, with foreign, partisan, and extremist groups allegedly using automated “bot” accounts, fabricated news, and altered images to shape discussions on social networks. Though concerns have somewhat abated since 2020, coordinated inauthentic behavior has continued to occur in recent years.

According to Meta, the UK was targeted with the third-highest number of coordinated inauthentic behavior networks in the world, after the United States and Ukraine, between 2017 and 2022.1 In November 2021, Meta reported removing a network of 524 Facebook accounts, 20 pages, 4 groups, and 86 Instagram accounts that originated in China and targeted English-speaking audiences in the UK and the United States. Meta found links to employees of a Chinese information-security firm and individuals associated with Chinese state-owned infrastructure firms.2 In December 2021, Meta reported removing an Iran-based network of 8 Facebook accounts and 126 Instagram accounts that primarily targeted Scotland and the UK as a whole. The accounts, which employed popular hashtags promoting Scottish independence, posed as English or Scottish citizens and produced and amplified political content, including criticism of the UK government. They also tried to contact policymakers.3

During the most recent general election in December 2019, both the governing Conservative Party and the opposition Labour Party spread misleading content and disinformation on social media, including doctored videos shared by the Conservative Party.4 Google banned eight Conservative ads for “violating Google’s advertising policy,” six of which were related to a website created by Conservative Party officials that imitated the Labour Party’s election manifesto.5 In July 2020, the UK government said that the Russian government had tried to influence the 2019 election by illicitly acquiring sensitive US-UK trade documents and distributing them on the social media platform Reddit.6 The online environment was also allegedly manipulated surrounding the 2016 Brexit referendum and the June 2017 elections, adding to the polarization of online political discourse.7

The government runs a counter-disinformation campaign called SHARE—previously known as Don’t Feed the Beast—that provides users with a checklist of features to note before sharing posts and media online.8 Additionally, in 2021, the government published the RESIST 2 Toolkit for civil servants and other stakeholders to help them protect their audiences and defend their organizations against the threat of mis- and disinformation.9 The Online Safety Bill would empower Ofcom to create a disinformation and misinformation advisory committee to provide guidance to online services (see B3 and B6).10

During the coverage period, the digital rights group Big Brother Watch raised concerns that several government units meant to combat online disinformation had monitored social media, seemingly tracking posts critical of the government (see C5).

In March 2023, leaked emails and WhatsApp messages from 2020–22 appeared to show that the Conservative government had pressured the BBC over its use of the word “lockdown” during the early COVID-19 pandemic and had asked journalists to be more critical of the opposition Labour Party. One source at the BBC alleged that the government had directly influenced headlines published on the BBC’s website “on a very regular basis.”11

B6: 3 of 3 points
Are there economic or regulatory constraints that negatively affect users’ ability to publish content online?

Online media outlets face economic constraints that negatively impact their financial sustainability, but these are the result of market forces, not political intervention.

Publishers have struggled to find a profitable model for their digital platforms, though more than half of the population reportedly consumes news online. In 2022, a survey conducted for Ofcom found that 66 percent of adults used the internet to access news, with social media being the most popular online source.1

Ofcom is responsible for enforcing the EU’s 2015 Open Internet Regulation, which includes an obligation for ISPs to ensure net neutrality—the principle that internet traffic should not be throttled, blocked, or otherwise disadvantaged on the basis of content. This regulation was revised slightly but largely preserved in UK law after the country's exit from the EU was finalized in 2020.2

The Online Safety Bill (see B3 and C4), which remained pending at the end of the coverage period, would empower Ofcom to fine online services the greater of £18 million ($22 million) or 10 percent of their global turnover if they do not comply with the bill’s provisions, which could impact their ability to operate in the UK.

In March 2023, the government published a draft media bill and presented it to Parliament. Among its provisions, the draft bill would empower Ofcom to regulate and sanction video-on-demand services, such as Netflix and Disney+, in line with traditional broadcasters.3

B7: 4 of 4 points
Does the online information landscape lack diversity and reliability?

The online information landscape is diverse and lively. Users have access to the online content of virtually all national and international news organizations. While there are a range of sources that present diverse views and appeal to various audiences and communities, the ownership of leading news outlets is relatively concentrated,1 and particular media groups have been accused of political bias.

The publicly funded BBC, which maintains an extensive online presence, has an explicit diversity and inclusion strategy, which aims to increase the representation of women and LGBT+ people, as well as different age ranges and ethnic and religious groups.2 Similar models have been adopted by other national broadcasters.3

In recent years, the CMA has endeavored to boost competition among digital platforms. In June 2022, the CMA vowed to examine the dominance of Apple and Google’s mobile browsers, citing their “effective duopoly” on the mobile environment.4

B8: 6 of 6 points
Do conditions impede users’ ability to mobilize, form communities, and campaign, particularly on political and social issues?

Online mobilization tools are freely available and commonly used to organize offline,1 and collective action continues to grow in terms of both numbers of participants and numbers of campaigns. Some groups use digital tools to document and combat bigotry, including Tell MAMA (Measuring Anti-Muslim Attacks), which tracks reports of attacks or abuse submitted by British Muslims online.2 Petition and advocacy platforms such as 38 Degrees and Avaaz have emerged, and nongovernmental organizations (NGOs) view online communication as an indispensable part of any campaign strategy.

Prominent recent campaigns include Open Rights Group’s “Don’t Scan Me!” campaign against Clause 110 of the Online Safety Bill and its potential to weaken encryption (see C4),3 as well as Big Brother Watch’s crowdfunding and user mobilization campaign for legal action against certain government units, ostensibly meant to combat online disinformation, that have allegedly been used in recent years to monitor political dissent (see C5).4

C Violations of User Rights

C1: 5 of 6 points
Do the constitution or other laws fail to protect rights such as freedom of expression, access to information, and press freedom, including on the internet, and are they enforced by a judiciary that lacks independence?

The UK does not have a written constitution or similarly comprehensive legislation that defines the scope of governmental power and its relation to individual rights. Instead, constitutional powers and individual rights are addressed in common law as well as various statutes and conventions. The provisions of the European Convention on Human Rights were adopted into law through the Human Rights Act 1998.

In December 2021, the government launched a consultation on reforming the Human Rights Act.1 In June 2022, the government published the Bill of Rights Bill, which would repeal and replace the Human Rights Act.2 It includes significant changes to the UK’s human rights framework, requiring claimants to prove that they have suffered “significant disadvantage” and giving Parliament, rather than the courts, primacy in decision-making when competing rights and interests are at stake. The bill maintains that courts must give “great weight” to the importance of freedom of speech, but also establishes exemptions in some areas, including criminal proceedings and matters relating to immigration, citizenship, and national security.3 In January 2023, Parliament’s Joint Committee on Human Rights recommended that the government make significant changes to the bill or withdraw it entirely, saying that it would significantly weaken the protections offered by the Human Rights Act.4 The government officially scrapped the bill in June 2023, after the coverage period.5

C2: 2 of 4 points
Are there laws that assign criminal penalties or civil liability for online activities, particularly those that are protected under international human rights standards?

Political expression and other forms of online speech or activity are generally protected, but there are legal restrictions on hate speech, online harassment, and copyright infringement. Some measures—including a 2019 counterterrorism law—could be applied in ways that violate international human rights standards.

The Counter-Terrorism and Border Security Act, which received royal assent in February 2019, included several provisions related to online activity (see C5).1 Individuals can face up to 15 years in prison for viewing or accessing material that is useful or likely to be useful in preparing or committing a terrorist act, even if there is no demonstrated intent to commit such acts. The law includes exceptions for journalists or academic researchers who access such materials in the course of their work, but it does not address other possible circumstances in which access might be legitimate.2 “Reckless” expressions of support for banned organizations are also criminalized under the law. A number of NGOs argued that the legislation was dangerously broad and that its unclear definitions could be abused.3 In April 2021, the Counter-Terrorism and Sentencing Act, which establishes prison sentences of up to 14 years for anyone who “supports a proscribed terrorist organization,” received royal assent.4

Stringent bans on hate speech are included in a number of laws, and some rights groups have said they are too vaguely worded.5 Defining what constitutes an offense has been made more difficult by the development of new communications platforms. One ban included in Section 127 of the Communications Act 2003 punishes “grossly offensive” communications sent through the internet.6 The maximum penalty is an unlimited fine and six months in prison.

The Online Safety Bill, as it was considered during the coverage period, would amend these provisions. The draft bill also designated new offenses for knowingly false, persistent, and threatening communications, as well as “cyberflashing,” in which an individual sends an unsolicited intimate image to another.7 Such offenses can be punished by a fine, imprisonment, or both.8 Commentators have criticized these portions of the bill for their vagueness, which may impact speech.9

The Crown Prosecution Service (CPS) publishes specific guidelines for the prosecution of crimes committed through social media and other online communications platforms.10 Updates in 2014 placed digital harassment offenses committed with the intent to coerce victims into sexual activity under the Sexual Offences Act 2003, which carries a maximum of 14 years in prison.11 Revised guidelines issued in March 2016 identified four categories of communications that are subject to possible prosecution: credible threats; abusive communications targeting specific individuals; breaches of court orders; and grossly offensive, false, obscene, or indecent communications.12 In October 2016, the CPS updated its guidelines again to cover more abusive online behaviors, including organized harassment campaigns or “mobbing,” and doxing, the deliberate and unauthorized publication of personal information online to facilitate harassment.13

The Copyright, Designs, and Patents Act 1988 carries a maximum two-year prison sentence for offenses committed online. In 2015, the government held a public consultation regarding a proposal to increase the sentence to 10 years, which was ultimately incorporated into the Digital Economy Act 2017.

In March 2021, the Scottish Parliament passed the Hate Crime and Public Order (Scotland) Bill, through which lawmakers aimed to extend and modernize existing hate crime legislation; it became law in April 2021. The law creates criminal offenses for speech and acts intentionally “stirring up hatred” against groups based on protected characteristics, including age, disability, race, religion, sexual orientation, and transgender identity.14 Violators of the law face up to 12 months’ imprisonment and a fine for summary conviction, and up to 7 years for a conviction by jury trial. Civil society groups, including the Open Rights Group, have raised concerns that the law has a wide scope and a low threshold for prosecution,15 particularly noting that the criteria for “insult” are not clearly defined, which could make sharing offensive material online a crime.16

C3: 5 of 6 points
Are individuals penalized for online activities, particularly those that are protected under international human rights standards?

Police have arrested internet users for promoting terrorism, issuing threats, or engaging in racist abuse or other hate speech. In some past cases, the authorities have been accused of overreach in their enforcement efforts.1 Prison sentences for political, social, and cultural speech remain rare.2

In February 2023, a man received a 16-week prison sentence, suspended for 18 months, after he sent a threatening email to MP Jeremy Hunt in October 2022. The email stated that Hunt’s “house will be on fire this winter.”3 In March 2022, during the previous coverage period, Twitter user Joseph Kelly was sentenced to 150 hours of community service and 18 months of supervision under Section 127 of the Communications Act 2003 for posting a “grossly offensive” tweet about a British Army officer. Kelly’s February 2021 post, published the day after the officer’s death, said that “the only good Brit soldier is a [dead] one.”4

Local police departments have the discretion to pursue criminal complaints in cases that would be treated as civil offenses in many democracies. The NPCC operates True Vision, an online portal to facilitate the reporting of hate crimes to law enforcement.5

Cases of offensive humor have been prosecuted, including during the coverage period. In June 2022, a former officer for the West Mercia Police was convicted on charges of “sending an offensive, indecent, obscene or menacing image via a public electronic communications network,” after he posted 10 racist memes, some of which mocked the death of George Floyd, in a WhatsApp group chat in 2020. The man, who was an officer at the time the messages were sent, was sentenced to 20 weeks in jail.6

C4: 2 of 4 points
Does the government place restrictions on anonymous communication or encryption?

Users are not required to register to obtain a SIM card, allowing for the anonymous use of mobile devices.1 However, some laws provide authorities with the means to undermine encryption, and pending legislation could facilitate additional restrictions.

There are several laws that could allow authorities to compel decryption or require a user to disclose passwords, including the Regulation of Investigatory Powers Act 2000 (RIPA), the Terrorism Act 2000, and the Investigatory Powers Act 2016 (IP Act) (see C5 and C6).2 Although such powers are seldom invoked in practice, some users have faced detention for failing to provide passwords.3

In October 2019, then home secretary Priti Patel and her counterparts in the United States and Australia wrote to Facebook opposing the company’s plans to implement end-to-end encryption across its messaging platforms.4 The letter followed communiques in July and October 2019 from the Five Country Ministerial, a Five Eyes consortium of which the UK is a member, criticizing technology companies that provide encrypted products that preserve anonymity and preclude law enforcement access to content.5 In October 2020, the Five Eyes along with the Indian and Japanese governments issued a statement requesting backdoor access to encrypted messages.6 In April 2021, Patel gave a speech at the National Society for the Prevention of Cruelty to Children in which she urged Facebook and other platforms to consider encryption’s impact on “public safety” and provide mechanisms for law enforcement to access encrypted conversations.7 In January 2022, the No Place to Hide campaign, backed by the UK government, was launched to raise awareness about the alleged danger that encrypted messaging purportedly poses to children and prevent Facebook from expanding its use of end-to-end encryption.8

In November 2021, the UK government announced five winning projects of the Safety Tech Challenge Fund, which aims to combat the sexual abuse and exploitation of children online in encrypted environments without impacting people’s rights to privacy and data protection.9

The Online Safety Bill, which would require age verification for access to online pornography, has ignited civil society concerns over its potential to compromise anonymity and encryption.10 Under the bill, Ofcom can mandate that online services employ government-approved software to find images depicting CSEA (see B3).11 These orders, which can be issued to services that use end-to-end encryption and consequently cannot technically inspect user messages, have been criticized as an attempt to compel companies to abandon or compromise their encryption systems.12 In response, leading messaging apps—including Signal, Element, WhatsApp, and Viber—issued an open letter in April 2023 warning against this attempt to weaken end-to-end encryption and threatening not to offer their services to users in the UK if the final bill included such provisions.13 A number of amendments addressing the matter were pending in the House of Lords at the end of the coverage period.
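To illustrate why such services say they cannot inspect message content, consider a minimal public-key messaging sketch using the PyNaCl library. This is an illustration of end-to-end encryption in general, not the protocol of any app named above: only the endpoints hold private keys, so a relaying server sees nothing but ciphertext.

    # Minimal end-to-end encryption sketch using PyNaCl (pip install pynacl).
    # Illustrative only; it is not the protocol of any service named above.
    from nacl.public import PrivateKey, Box

    alice_key = PrivateKey.generate()
    bob_key = PrivateKey.generate()

    # Alice encrypts directly to Bob's public key; the relaying server is
    # never given a decryption key, so it can store or forward only ciphertext.
    ciphertext = Box(alice_key, bob_key.public_key).encrypt(b"hello, Bob")

    # Only Bob, holding the matching private key, can recover the plaintext.
    plaintext = Box(bob_key, alice_key.public_key).decrypt(ciphertext)
    assert plaintext == b"hello, Bob"

Under this model, any scanning mandate would have to operate on users’ devices, before encryption or after decryption, which is the change critics describe as abandoning or compromising end-to-end encryption.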

C5: 2 of 6 points
Does state surveillance of internet activities infringe on users’ right to privacy?

UK authorities are known to engage in surveillance of digital communications, including mass surveillance, for intelligence, law enforcement, and counterterrorism purposes. A 2016 law introduced some oversight mechanisms to prevent abuses, but it also authorized bulk collection of communications data and other problematic practices. A 2019 counterterrorism law empowered border officials to search travelers’ devices, undermining the privacy of their online activity.

The Counter-Terrorism and Border Security Act (see C2) gives border agents the ability to search electronic devices at border crossings and ports of entry with the aim of detecting “hostile activity”—a broad category including actions that threaten national security, threaten the economic well-being of the country in a way that touches on security, or are serious crimes.1 Those stopped are required to provide information when requested by border officers, including device passwords.2 In April 2023, French book publisher Ernest Moret was arrested by British border officials after he refused to provide the passwords to his phone and computer; Moret had been stopped over his role in antigovernment protests in France.3 In July, after the coverage period, an independent review found that British authorities did not have reasonable cause to detain Moret or demand his passwords.4

The IP Act codified law enforcement and intelligence agencies’ surveillance powers, which had previously existed in multiple statutes and authorities, in a single omnibus law.5 It covers interception, equipment interference, and data retention, among other topics.6 The IP Act has been criticized by industry associations, civil rights groups, and the wider public, particularly for the range of powers it authorizes and its legalization of bulk data collection.7

The IP Act specifically enables the bulk interception and acquisition of communications data sent or received by individuals outside the UK, as well as bulk equipment interference involving “overseas-related” communications and information. When both the sender and receiver of a communication are in the UK, targeted warrants are required, though several individuals, groups, or organizations may be covered under a single warrant in connection with a single investigation.8 Part 7 of the IP Act introduced warrant requirements for intelligence agencies to retain or examine “personal data relating to a number of individuals” who are “unlikely to become of interest to the intelligence service in the exercise of its functions.”9

The IP Act established a new commissioner appointed by the prime minister to oversee investigatory powers under Section 227.10 The law includes other safeguards, such as “double-lock” interception warrants. These require approval from both the relevant secretary of state and an independent judge, though the secretary alone can approve urgent warrants.11 The act allows authorities to prohibit telecommunications providers from disclosing the existence of a warrant. Intercepting authorities that may apply for targeted warrants include police commissioners, intelligence service heads, and revenue and customs commissioners.12 Applications for bulk interception, bulk equipment interference, and bulk personal dataset warrants can only be made to the secretary of state “on behalf of the head of an intelligence service by a person holding office under the Crown” and must be reviewed by a judge.

Bulk surveillance is an especially contentious issue in the UK because intelligence agencies developed secret programs under older laws that bypassed oversight mechanisms and possible means of redress for affected individuals. These programs affected an untold number of people within the UK, even if they were meant to have only foreign targets. Tempora, a secret surveillance project documented in the Snowden leaks, is one example. A number of other legislative measures authorized surveillance,13 including RIPA.14 RIPA was not repealed by the IP Act, though many of its competences were transferred to the newer legislation. A clause within Part I of RIPA allowed the foreign or home secretary to sign off on bulk surveillance of communications data arriving from or departing to foreign soil, providing the legal basis for Tempora.15 Since the UK’s fiber-optic network often routes domestic traffic through international cables, this provision legitimized mass surveillance of UK residents.16 Working with telecommunications companies, the Government Communications Headquarters (GCHQ) installed interception probes at the British landing points of undersea fiber-optic cables, giving it direct access to data carried by hundreds of cables, including private calls and messages.17

In May 2021, the High Court ruled that security agencies cannot use “general warrants,” outlined in Section 5 of the 1994 Intelligence Services Act, to order the hacking of computers or mobile devices. For example, under a “general warrant,” a security agency could request information from “all mobile phones used by members of a criminal network” to justify the hacking of these devices without having to obtain a specific warrant for each individual in the network. The ruling came after Privacy International, a UK-based NGO, challenged a 2016 decision from the Investigatory Powers Tribunal that held that the government could use these warrants to hack computers or mobile devices.18

UK authorities have been known to monitor social media platforms.19 The Online Hate Speech Dashboard, a joint project led by the National Online Hate Crime Hub of the NPCC and Cardiff University, received £1 million ($1.2 million) in 2018 to use artificial intelligence for real-time monitoring of social media platforms meant to identify hate speech and “preempt hate crimes.”20

Reporting from October 2021 detailed the expansion of the Metropolitan Police Service’s social media monitoring operations between September 2020 and July 2021. A database used by the Project Alpha Team, which was created in 2019 and uses covert methods to monitor social media platforms, compiles information gathered from both public and private social media accounts; the number of categories of data being gathered more than doubled, from 16 to 34, during that time. While authorities claim that Project Alpha’s goal is to combat online gang-related content, civil society groups warned of potential privacy violations and online racial profiling.21 Similar concerns arose in June 2022, when reports emerged that the Metropolitan Police Service was gathering “children’s personal data” from social media, specifically targeting men and boys aged 15 to 21, as part of a broader profiling project.22

In February 2023, then defense secretary Ben Wallace announced a probe into a whistleblower’s claims that the 77th Brigade—an information operations unit of the British Army—had secretly monitored social media posts about COVID-19 in the UK. Wallace admitted that the unit uses content on social media “to assess UK disinformation trends.”23 A January 2023 investigation by the digital rights group Big Brother Watch documented four additional government “disinformation units” that had reportedly been used to monitor the social media activities of users in the UK—including those who criticized the government—raising concerns about the government’s surveillance capabilities and its transparency surrounding such practices.24

C6 (0-6 pts)
Does monitoring and collection of user data by service providers and other technology companies infringe on users’ right to privacy? 3 / 6

Companies are required to capture and retain user data under certain circumstances, though the government issued regulatory changes in 2018 to address flaws in the existing rules. While the government has legal authority to require companies to assist in the decryption of communications, how often this power is used and how effective it is in practice remain unclear.

The UK has incorporated the GDPR into domestic law through the Data Protection Act 2018.1 This incorporated framework was envisioned as the basis for regulating data protection within the UK after the country’s exit from the EU. In September 2021, however, the government published the results of a consultation that envisioned a significant departure from the GDPR.2 The Data Protection and Digital Information Bill was introduced in the House of Commons in July 2022 and ultimately withdrawn in March 2023.3 That same month, however, the government introduced the Data Protection and Digital Information (No. 2) Bill. Compared to the GDPR, the bill would loosen requirements for companies to complete data protection impact assessments, focusing instead on actions to address “high risk” processing; modify some data subject rights; and establish new standards for cross-border personal data transfers, meant to ensure that a recipient country’s data protection standards are “not materially lower” than the UK’s.4 In May 2023, the information commissioner praised the second iteration of the bill as a clear improvement over the first, noting that it modified provisions in the first version that could have undermined the independence of the data protection authority, among other changes.5

Data retention provisions under the IP Act allow the secretary of state to issue notices requiring telecommunications providers to capture information about user activity, including browser history, and retain it for up to 12 months. The Data Retention and Investigatory Powers Act 2014 (DRIPA), the older law on which the IP Act requirement was modeled, was ruled unlawful in the UK and the EU in 2015.6 In January 2018, the Court of Appeal described DRIPA as being inconsistent with European law, since the data collected and retained were not limited to the purpose of fighting serious crime.7 In April 2018, the High Court ruled that part of the IP Act’s data retention provisions similarly violated EU law, and that the government should amend the legislation by November 2018.8

In response, the government issued the Data Retention and Acquiring Regulations 2018, which entered into force in October 2018. The regulations limited the scope of the government’s collection and retention of data and enhanced the transparency of the process.9 In addition, a newly created Office for Communications Data Authorisations was tasked with overseeing data requests and ensuring that official powers are used in accordance with the law.

According to a March 2021 report, the government issued orders under the IP Act requiring two service providers to install surveillance technology that would record users’ web history, creating internet connection records (ICRs).
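
No full technical specification of ICRs has been published; they are generally described as event-level metadata about a connection (which customer connected to which service, and when) rather than the content of pages viewed. The sketch below illustrates one such hypothetical record as a data structure; every field name is an assumption made for this illustration, not a published standard.

    # A hypothetical illustration of the kind of metadata an internet
    # connection record (ICR) is described as capturing. All field
    # names are assumptions, not a published specification.
    from dataclasses import dataclass
    from datetime import datetime, timezone

    @dataclass
    class ConnectionRecord:
        customer_account: str   # subscriber identifier held by the provider
        source_ip: str          # customer-side IP address for the session
        destination_ip: str     # server the customer connected to
        destination_port: int   # e.g., 443 for HTTPS
        start_time: datetime    # when the connection began

    record = ConnectionRecord(
        customer_account="subscriber-0001",
        source_ip="198.51.100.23",     # documentation-range example address
        destination_ip="203.0.113.7",  # documentation-range example address
        destination_port=443,
        start_time=datetime(2021, 3, 1, 12, 0, tzinfo=timezone.utc),
    )
    # A provider served with a retention notice could be required to
    # retain records like this for up to 12 months.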

Another problematic provision of the IP Act enables the government to order companies to decrypt content, though companies have recently indicated that they would be unwilling or unable to comply with orders that weaken encryption (see C4).10 Under Section 253, technical capability notices can be used to impose obligations on telecommunications operators both inside and outside the country “relating to the removal … of electronic protection applied by or on behalf of that operator to any communications or data,” among other requirements. The approval process for issuing a technical capability notice is similar to that for an interception warrant.11 In March 2018, after consultations with industry and civil society groups,12 the government issued the Investigatory Powers (Technical Capability) Regulations 2018, which govern how the notices are issued and implemented.13 The regulations specify companies’ responsibilities for ensuring that they are able to comply with lawful warrants for communications data.
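
The technical tension behind such notices can be seen in a short, hedged sketch. Assuming the widely used third-party Python cryptography package, the example below shows symmetric encryption in which the key exists only on users’ devices: an operator relaying the ciphertext has nothing it can decrypt, which is why providers argue that complying with a notice would mean weakening the protection itself rather than unlocking a single message. This is illustrative only, not any operator’s actual protocol.

    # Illustrative only; not any operator's actual protocol. Requires
    # the third-party "cryptography" package (pip install cryptography).
    from cryptography.fernet import Fernet

    key = Fernet.generate_key()   # generated and held on user devices only
    cipher = Fernet(key)

    ciphertext = cipher.encrypt(b"a private message")

    # An operator that relays only `ciphertext` cannot read it without
    # `key`. A notice requiring the "removal of electronic protection"
    # would therefore mean changing how keys are generated or stored,
    # weakening the scheme for all users.
    print(cipher.decrypt(ciphertext))  # possible only with the key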

C7 (0-5 pts)
Are individuals subject to extralegal intimidation or physical violence by state authorities or any other actor in relation to their online activities? 4 / 5

There were no reported instances of violence against internet users in reprisal for their online activities during the coverage period, though cyberbullying and harassment against women are widespread.1 According to a UK study conducted in February 2023, more than 10 percent of the 4,000 women and girls surveyed reported that they had experienced online violence—including threats, abusive messages, and the nonconsensual sharing of intimate images.2 Online harassment of Muslims and other minorities is also a significant problem.3

Women public officials continue to face harassment and abuse online. A June 2018 study found that one in three female members of Parliament had experienced online abuse, harassment, or threats; research from September 2021 confirmed that women and minority members of Parliament were at particular risk of receiving social media messages containing stereotypes about their identity or questioning their role as politicians.4

Online harassment worsened during the COVID-19 pandemic, particularly for women and people of Chinese descent. Support services recorded a surge in reports of cyberstalking and online harassment.5 Racist incidents involving people of Chinese or other Asian descent were reported throughout the UK, including several cases involving social media.6

A June 2021 press report, meanwhile, revealed that English soccer players were targeted with racist messages during the Euro 2020 tournament.7 More than 2,000 abusive messages, 44 of which used explicitly racist language, were directed at the team during its three group-stage matches.8

C8 (0-3 pts)
Are websites, governmental and private entities, service providers, or individual users subject to widespread hacking and other forms of cyberattack? 2 / 3

NGOs, media outlets, and activists are generally not targeted for technical attacks by government or nonstate actors, though such attacks sometimes occur. Financially motivated fraud and hacking continue to present a challenge to authorities and the private sector. In the government’s 2023 cybercrime survey, 32 percent of all businesses, and 69 percent of large businesses, reported experiencing a cyberattack in the previous year. Of the businesses that experienced a cyberattack, 79 percent reported phishing attempts, compared with just 7 percent that reported denial-of-service (DoS) attacks.1

In January 2023, the British postal service Royal Mail was the victim of a ransomware attack by LockBit, a ransomware strain linked to Russia.2 Royal Mail refused to pay the $80 million ransom the attackers demanded and, as a result of the attack, was forced to halt all international deliveries at post offices for almost six weeks.3

Also in January, the Guardian reported that it was the victim of a ransomware attack in December 2022. The newspaper said that UK- and US-based employee data was accessed in the attack. Though the Guardian was able to continue publishing online and in print, it had to close its offices for several months.4 Executives indicated they did not believe the newspaper was intentionally targeted because it is a media outlet.5

In February 2022, during the previous coverage period, Reuters reported that the Foreign, Commonwealth, and Development Office (FCDO) was the target of a “serious cybersecurity incident” earlier in the year, citing government tender documents. The government did not disclose details.6

In October 2021, an industry body reported that coordinated distributed denial-of-service (DDoS) attacks had been launched against several UK-based voice over internet protocol (VoIP) providers. Officials reported that the attacks, which occurred over four weeks, appeared to be extortion attempts.7

According to Microsoft’s 2022 Digital Defense Report, the UK was the second-most targeted country in the world for Russian and Chinese state and state-affiliated threat actors between July 2021 and June 2022, trailing only the United States.8

In May 2022, the FCDO, together with the United States, the EU, and other actors, accused the Russian government of orchestrating a cyberattack “with Europe-wide impact” immediately before launching its invasion of Ukraine.9 Cyberattacks against privately owned critical infrastructure had increased by 72 percent as of May 2022.10

In July 2020, a report to Parliament stated that Moscow-affiliated actors had hacked into UK national infrastructure and launched phishing attacks against various government departments.11 The government responded that while Moscow’s capabilities represented a threat, there was no evidence of Russian interference in the 2019 election.12

In January 2023, the National Crime Agency (NCA) reported that it had collaborated with US and German law enforcement to shut down the HIVE ransomware operation. Since June 2021, HIVE had extracted more than $100 million in ransom payments from more than 1,300 victims internationally, including approximately 50 corporate victims in the UK.13
