France

Status: Free
Overall Score: 78 / 100
A. Obstacles to Access: 23 / 25
B. Limits on Content: 30 / 35
C. Violations of User Rights: 25 / 40
Last Year's Score & Status: 77 / 100, Free
Scores are based on a scale of 0 (least free) to 100 (most free). See the research methodology and report acknowledgements.

Overview

France registered a minor improvement in internet freedom during the coverage period. The government attempted to ban publication of images of on-duty police officers, though the Constitutional Council ruled the provision unconstitutional for violating privacy and free expression rights. The Council also voided the majority of the provisions in the restrictive Avia Law, which would have established a notice and takedown system for online content. Although the internet remains accessible for most of the population, website blocks and content removals are not given sufficient judicial and administrative oversight. Electronic surveillance has continued to increase in both scope and frequency.

The French political system features vibrant democratic processes and generally strong protections for civil liberties and political rights. However, due to a number of deadly terrorist attacks in recent years, successive governments have been willing to curtail constitutional protections and empower law enforcement to act in ways that impinge on personal freedoms.

Key Developments, June 1, 2020 - May 31, 2021

  • In June 2020, the Constitutional Council held that many provisions in the Avia Law violated freedom of expression, drastically limiting its scope. The law required online platforms to remove user-reported hate speech content within a day, as well as content that law enforcement officials deemed terrorist-related or exploitative of children within an hour. However, after the coverage period, the European Union (EU) enacted the regulation on preventing the spread of terrorist content online, which introduces similar measures (see B2).
  • After the coverage period, in July 2021, Parliament passed the Guaranteeing the Respect of Republican Principles Law, which allows the government to block mirror websites and criminalizes releasing an individual’s personally identifiable information (see B3).
  • The Law on Global Security, which controversially outlawed the publication of images of on-duty police officers, passed in April 2021. However, the Constitutional Council struck down that provision a month later (see B4, B8, and C1).
  • In December 2020, Facebook removed a network of fake accounts, which the company claimed involved individuals associated with the French military who promoted France’s regional policy goals in several Francophone countries in Africa (see B5).
  • In October 2020, the Court of Justice of the European Union (CJEU) ruled that French regulations requiring service providers to store telecommunications metadata for up to a year were illegal. In April 2021, the Council of State declared that the CJEU did not have the authority to rule on national security issues, including data retention, and suggested some minor changes to the practice (see C5 and C6).

A Obstacles to Access

A1: 0-6 pts
Do infrastructural limitations restrict access to the internet or the speed and quality of internet connections? Score: 6 / 6

Infrastructural limitations generally do not restrict access to the internet in France. According to Organization for Economic Co-operation and Development (OECD) data from June 2020, France has a fixed-broadband internet penetration rate of 44.6 percent, a 0.91 percentage point increase since 2019, and a mobile penetration rate of 93.1 percent, a 2.3 percentage point increase from 2019.1 According to the International Telecommunication Union (ITU), as of 2020, internet penetration stood at 83.3 percent.2 Despite increased reliance on internet infrastructure due to the COVID-19 crisis, there were no major issues with network capacity during the coverage period.3

Committed to providing widespread access to high-speed broadband with connection speeds of at least 30 megabits per second (Mbps), the government has been implementing an ambitious national plan to deploy fiber-optic cables, very high-speed digital subscriber line (VDSL), terrestrial, and satellite networks throughout the country by 2022, mobilizing public and private investments totaling €20 billion ($23.9 billion) over 10 years.4 In 2020, very high-speed broadband coverage accounted for 56 percent of high-speed broadband connections (16.9 million out of 30.5 million), according to a March 2021 report by the Regulatory Authority for Electronic Communications and Post (ARCEP), the telecommunications regulator.5

Reforms approved in 2015, known as the “Loi Macron,” sought to improve mobile broadband coverage by requiring mobile service providers to deploy second-generation (2G) technology for mobile networks in underserved municipalities by 2016 and ensure coverage with third- and fourth-generation (3G and 4G) technology networks by 2017.6 In January 2018, ARCEP and the government conceived a mobile “New Deal,” enacted in July 2018, to develop 4G networks by 2022. According to a December 2019 ARCEP report, between 85 and 87 percent of rural areas had 4G coverage, with a goal of 90 percent coverage by January 2022.7 The 4G networks of three of the four mobile service providers cover the vast majority of the metropolitan French population (99 percent), while the fourth major provider’s 4G network covers 98 percent of the population.8 Networks began piloting fifth-generation (5G) technology across France in 2020. The rollout, which is proceeding slowly, is concentrated in the largest cities, including Paris, Lyon, and Nice.9

France is ranked among the top countries in the world for fixed-line broadband connection speed, with an average download speed of 199 Mbps, according to Ookla’s speed test.10 According to a June 2021 report from ARCEP, the average mobile download speed was 64 Mbps in dense areas, 59 Mbps in areas with medium density, and 31 Mbps in rural areas.11

While many cities benefit from a high broadband connection speed, the promised “universal telecommunications service” has not ensured such access for rural areas, and stakeholders have criticized the failure of Orange, the market-dominating service provider in which the government is the largest shareholder, to increase access, citing its monopolistic market position.12

A2: 0-3 pts
Is access to the internet prohibitively expensive or beyond the reach of certain segments of the population for geographical, social, or other reasons? Score: 3 / 3

Internet connections are relatively affordable. In 2021, the Economist Intelligence Unit’s Inclusive Internet Index ranked France 9th of 100 countries for affordability of internet connections.1 In December 2020, a provision was added to the Post and Electronic Communications Code to ensure a “universal electronic communications service” at a reasonable price.2 According to 2020 International Telecommunication Union (ITU) data, a monthly entry-level fixed-line broadband subscription cost 1.2 percent of gross national income (GNI) per capita, while a monthly mobile-data plan cost 0.49 percent of GNI per capita.3 Both figures increased from the previous year but remain lower than those in neighboring countries like Italy and the United Kingdom.

There are a number of Internet Exchange Points (IXPs) in France,4 which contribute to improved access and lower consumer prices.5

However, demographic disparities in internet usage persist. A map produced by ARCEP illustrates some of the regional disparities in mobile penetration, showing patchy 4G coverage in rural areas and overseas territories.6 Most at-home users have access to broadband connections, while the remaining households, usually in rural areas, must rely on dial-up or satellite services.7 The mobile “New Deal” aims to reduce these disparities, and between July 2018 and April 2020, ARCEP deployed 1,374 4G antennas in targeted areas out of a planned 5,000 antennas.8 There are no significant digital divides in terms of gender or income.

In May 2020, Copie France, the organization that collects France’s private copying levy, sought to apply the levy to second-hand smartphones, raising concerns that it could become more expensive for low-income individuals to buy smartphones and access the internet.9

A3: 0-6 pts
Does the government exercise technical or legal control over internet infrastructure for the purposes of restricting connectivity? Score: 6 / 6

There were no restrictions on connectivity reported during the coverage period. There is no central internet backbone, and ISPs are not required to lease bandwidth from a monopoly holder, as is the case in other countries. Instead, the backbone consists of several interconnected networks run by ISPs and shared through peering or transit agreements. The government does not have the legal authority to restrict the domestic internet during emergencies.

However, in March 2020, at the outset of the COVID-19 pandemic, the Minister for Digital Economy asked service providers and the general public to adapt their services and usage.1 Service providers were asked to take measures to limit bandwidth consumption, including by limiting speeds and reducing the quality of video content.2

A4: 0-6 pts
Are there legal, regulatory, or economic obstacles that restrict the diversity of service providers? Score: 4 / 6

There are no significant business hurdles to providing access to digital technologies in France. Service providers do not need to obtain operating licenses.1 However, the use of frequencies (for mobile networks) is subject to strict licensing by ARCEP.2 Only four providers are licensed in this regard: Orange, Free, Bouygues Telecom, and Société française du radiotéléphone (SFR).3 Others, such as NRJ Mobile, make use of these providers’ networks, reselling internet and mobile services.4

Orange, Free, Bouygues Telecom, and SFR dominate both the fixed and mobile markets. Competition between these four providers is fierce, but there is little room for other players to compete.

In 2017, ARCEP announced that it would impose certain constraints on market leader Orange in an effort to open up competition for high-speed fiber-optic services among small- and medium-sized companies.5 In February 2020, ARCEP reported a 40 percent rise in investment in the broadband market in four years and welcomed the healthier competition for high-speed fiber-optic services and traditional broadband services.6 In particular, ARCEP focused on reinforcing competition in the wholesale market.7 In July 2020, Sébastien Soriano, the former president of ARCEP, expressed dissatisfaction with the state of competition in the business-to-business telecommunications market, criticizing Orange and denouncing its dominant position.8 Discussions between ARCEP, Orange, and other telecommunications providers around the cost of network maintenance continued in 2020 and 2021.9

A5: 0-4 pts
Do national regulatory bodies that oversee service providers and digital technology fail to operate in a free, fair, and independent manner? Score: 4 / 4

The telecommunications industry is regulated by ARCEP,1 while competition is regulated by the Competition Authority and, more broadly, the European Commission (EC).2 ARCEP is an independent and impartial body.

ARCEP is governed by a seven-member panel. Three members are appointed by the president, while the National Assembly and Senate appoint two each.3 All serve six-year terms. In January 2021, member of Parliament Laure de la Raudière was nominated as the agency’s president.4 As a member state of the European Union (EU), France must ensure the independence of its telecommunications regulator. Given that the government is the main shareholder in Orange, the leading telecommunications company, the EC stated in 2011 that it would closely monitor the situation in France to ensure that European regulations were met.5

The Digital Republic Act enacted in 2016 broadened ARCEP’s mandate, granting the body investigatory and sanctioning powers to ensure compliance with the law’s principle of net neutrality.6 In July 2019, ARCEP reiterated its commitment to promote net neutrality, digital transformation, and technological innovation in France.7

B Limits on Content

B1: 0-6 pts
Does the state block or filter, or compel service providers to block or filter, internet content, particularly material that is protected by international human rights standards? Score: 5 / 6

The government does not generally block web content in a politically motivated manner. All major social media platforms are available.

However, France is one of the few countries to have blocked Sci-Hub and LibGen, two well-known piracy websites that offer free access to millions of paywalled academic books, journals, and papers. Following a complaint from academic publishers Elsevier and Springer Nature, a court ordered the four major ISPs to block the two websites in April 2019.1

Since the 2015 terrorist attacks in Paris, terrorist-related content and incitements to hatred have been subject to blocking. In November 2018, a Paris court ordered nine French ISPs to block Participatory Democracy, a racist, antisemitic, and anti-LGBT+ French-language website hosted in Japan that was found to be inciting hatred. The website is affiliated with French far-right and extremist communities.2 As of April 2021, the website was accessible at a different URL.3

A decree issued in 2015 outlined administrative measures to block websites containing materials that incite or condone terrorism, as well as sites that display child sexual abuse imagery.4 Shortly after the decree was promulgated, five websites were blocked, with no judicial or public oversight, for containing terrorism-related information.5 In the ensuing years, many more websites have been blocked in France. According to a June 2021 annual report from the National Commission on Informatics and Liberty (CNIL), France’s data protection agency, the Central Office for the Fight against Crime Related to Information and Communication Technology (OCLCTIC) issued 519 blocking orders to ISPs between January 2020 and December 2020, compared to 420 between February 2019 and December 2019. Among the orders, 28 targeted sites hosting terrorism-related information; the remaining 491 targeted sites displaying child sexual abuse imagery.6 The reports did not offer details on the content of the blocked websites, but they disclosed OCLCTIC decisions that had been disputed in the past. During the coverage period, CNIL disputed none of the OCLCTIC’s decisions but made seven recommendations.7

CNIL’s May 2020 report suggested that the agency may no longer oversee the blocking process, indicating that the task may be transferred to another independent administrative authority, due to provisions in the now-voided Avia Law (see B2).8 Although the provision was ultimately declared void by the Constitutional Council,9 establishing an independent administrative authority to oversee blocking was still under discussion when legislators were drafting the Guaranteeing the Respect of Republican Principles Law, but it was not included in the version of the law passed by Parliament in July 2021 (see B3 and C2).

In early 2021, the Audiovisual Council (CSA) reported that it was considering blocking some pornographic websites.10

B2: 0-4 pts
Do state or nonstate actors employ legal, administrative, or other means to force publishers, content hosts, or digital platforms to delete content, particularly material that is protected by international human rights standards? Score: 2 / 4

The French government continues to actively legislate the online environment.

In May 2020, Parliament adopted the law against hateful content on the internet, known as the Avia Law.1 The law required major online platforms to remove hateful content reported by users as manifestly illegal within 24 hours. It also required all websites to remove terrorist or child sexual abuse content within one hour of notification by the OCLCTIC. In cases of noncompliance, platforms could be fined up to €20 million ($23.9 million), or up to 4 percent of global turnover in special cases.2 The Avia Law was modeled, at least in part, on Germany’s Network Enforcement Act (NetzDG).

The scope of the Avia Law was drastically reduced by the Constitutional Council in June 2020, following an appeal from a group of senators. The Council found that most of the law’s provisions, particularly the timed removal obligations, violated freedom of expression.3 The remaining provisions simplify systems for the notification of disputed content, strengthen the prosecution of online hate speech, and create an “online hate observatory.”4

While most of the provisions never went into effect, the French government and parliamentarian Laetitia Avia have actively sought to reintroduce some of them. Recent EU regulations will also facilitate the removal of online content. In June 2021, the EU enacted the regulation on preventing the dissemination of terrorist content online, often referred to as the “terrorist regulation,” which requires flagged content to be removed within one hour, though it had yet to be implemented in French law.5 The proposed Digital Services Act, which the EC submitted to the European Parliament in December 2020, also contains content removal provisions.

The government sometimes orders online platforms to delete or deindex content. For example, in December 2018, after a year of heated debate, a French court ordered Google to deindex search engine results related to seven illegal streaming websites for a year.6 In June 2019, the French government asked Google to delete a Google+ picture depicting two French officials as dictators; Google did not comply with the request. According to Google’s transparency report, the government issued 657 requests to remove content in 2020, citing national security, privacy and security, or copyright violations in a majority of cases.7

Between July and December 2020, Facebook restricted access to 81 pieces of content,8 though the company did not disclose how many content removal requests it received in total.9 Between July and December 2020, Twitter received 395 removal requests, complying with 51 percent of them.10 From January to December 2020, Microsoft received 63 content removal requests from the government, complying with 92 percent of them.11

A government decree issued in 2015 allows for the deletion or deindexing of online content related to child sexual abuse and terrorism using an administrative procedure supervised by the CNIL.12 According to the CNIL, between January and December 2020, the OCLCTIC issued 50,448 removal requests targeting such content, more than a fourfold increase from the previous year’s 11,874, as well as 4,138 deindexing requests (compared to 5,883 the previous year).13 Content was deleted in response to 36,710 removal requests (73 percent of the total number issued), 13,079 of which related to child sexual abuse and 2,986 of which related to terrorism. The CNIL did not dispute any decisions from the OCLCTIC in 2020.14

The right to be forgotten (RTBF) was recognized in a 2014 ruling from the Court of Justice15 and was later institutionalized throughout Europe with the implementation of the General Data Protection Regulation (GDPR) in May 2018.16 Between May 31, 2020, and May 30, 2021, Google deindexed some 278,904 URLs in France under the RTBF.17 Between January and December 2020, Microsoft deindexed 2,004 URLs under the RTBF.18 Both companies deindexed only about half of the URLs requested by users and other entities in France.

In February 2021, Facebook suspended the accounts of two right-wing politicians, Jordan Bardella and Marion Maréchal Le Pen. The suspensions only lasted a few hours, but the two politicians claimed it was censorship.19

In January 2021, Twitter was widely criticized for suspending and blocking an account that posed the question “how can we stop men from raping others?” Twitter attributed the suspension to an “error” in its moderation algorithm.20

B3: 0-4 pts
Do restrictions on the internet and digital content lack transparency, proportionality to the stated aims, or an independent appeals process? Score: 3 / 4

Historically, authorities have been fairly transparent about what content is prohibited and the reasons behind specific content removal requests. Incitement to hatred, racism, Holocaust denial, child abuse and child sexual abuse imagery, copyright infringement, and defamation1 are illegal and may be grounds for blocking or takedowns. Article R645-1 of the criminal code outlaws the display of the emblems, uniforms, or badges of criminal organizations under penalty of a fine and can justify the blocking or removal of such symbols when they appear online.2

In July 2021, after the coverage period, Parliament passed the Guaranteeing the Respect of Republican Principles Law (see C2), often referred to as the Islamic separatism law, following an agreement between the National Assembly and the Senate.3 The law, which was initially approved by the National Assembly in February 2021, not only places broad constraints on religious freedom but also enables an administrative authority to block mirror websites, including websites that contain “substantially the same” content, without review from a magistrate.4 The law includes measures similar to provisions of the Avia Law that were voided by the Constitutional Council in June 2020 and anticipates some of the content removal measures included in the EU’s proposed Digital Services Act (see B2).

Notably, in December 2018, Parliament passed a law first proposed by President Emmanuel Macron that aims to combat disinformation around elections by empowering judges to order the removal of “fake news” within three months of an election.5 The proposal was rejected twice by the Senate before it was passed. The law places a significant strain on judges, who will have 48 hours to decide whether a website is spreading false news, following a referral by a public prosecutor, political party, or interested individual. Under the law, social media platforms are also required to disclose who is paying for sponsored advertisements during electoral campaigns.6 Commentators have expressed concern that the law could be used as a political tool.7

A set of decrees issued in 2015 outlined administrative measures to block websites containing materials that incite or condone terrorism, as well as sites that display child sexual abuse images (see B1). The decree implemented Article 6-1 of the 2004 Law on Confidence in the Digital Economy (LCEN), as well as Article 12 of the 2014 antiterrorism law.8

The OCLCTIC is responsible for maintaining a denylist of sites that contain prohibited content and must review the list every four months to ensure that such sites continue to contravene French law. The OCLCTIC can ask editors or hosts to remove the offending content, and after a 24-hour period, it can order ISPs to block sites that do not comply.9 Users attempting to access sites on a denylist are redirected to a website from the Ministry of the Interior providing avenues for appeal. Another decree also allows for the deletion or deindexing of online content from search results using an administrative procedure supervised by the CNIL (see B2).10 Under this decree, the OCLCTIC submits requests to search engines, which then have 48 hours to comply.11 The OCLCTIC is responsible for reevaluating deindexed websites every four months and requesting the reindexing of websites when the incriminating content has been removed.

The lack of judicial oversight in the blocking of websites that allegedly incite or condone terrorism remains a concern. The procedures outlined above are supervised by the CNIL. Like any other administrative authority, the CNIL can refer requests to the administrative court system should it object to or dispute any action taken or order given by the OCLCTIC. In May 2019, a CNIL official asserted that the body lacks the technical means and human resources to efficiently supervise the OCLCTIC, which remains a concern.12 Some commentators have lamented that, while the CNIL was founded to protect internet freedom, it now oversees restrictions of the online space.13

Legal debates over the RTBF have also escalated in recent years. The CNIL has battled with Google to enforce the RTBF ruling worldwide, including on all extensions of the company’s search engine (for example, Google.com and Google.ca).14 Google raised concerns that the move would set a dangerous precedent for authoritarian governments, which could also request that Google apply national laws extraterritorially.15 In 2016, Google was fined $112,000 by the CNIL for not complying with demands to remove results across its global domains.16 Google appealed to France’s Council of State, which in 2017 decided to refer the matter to the Court of Justice of the European Union (CJEU).17 The Council of State rescinded the 2016 penalty in March 2020, following a September 2019 CJEU judgment that Google was not required to scrub search results worldwide.18 However, the RTBF does not require any transparency from search engines, allowing Qwant, a France-based search engine, to conceal its delistings.

A 2016 ruling by a Paris court established that Facebook could be sued in France for removing the account of a French user who posted an image of a Gustave Courbet painting of a naked woman. Facebook had argued that cases concerning its terms and conditions could only be heard by a court in the United States. The case was finally decided in March 2018, when a French court dismissed the user’s suit. The user appealed the decision in April 2018 and withdrew his appeal in August 2019 after reaching a settlement with Facebook.19

B4: 0-4 pts
Do online journalists, commentators, and ordinary users practice self-censorship? Score: 4 / 4

Online self-censorship is minimal. However, a law aimed at countering online hate speech could increase government oversight of internet users, raising concerns that it may encourage greater self-censorship (see B2). In January 2019, President Macron said, “We should move progressively toward the end of anonymity” online.1 This sentiment reenters the public debate every few months. For instance, after the murder of Samuel Paty, a middle-school teacher, in October 2020, a group of politicians once again demanded the end of anonymity online.2

Article 24 of the Law on Global Security, which was adopted by Parliament in April 2021, would have criminalized the publication of images of on-duty police officers; violators would have faced up to five years in prison or a €75,000 ($90,000) fine.3 The United Nations (UN) Human Rights Council raised concerns regarding the content of the bill.4 However, in May 2021, the Constitutional Council voided a number of the law’s provisions, including the controversial ban on publishing images of on-duty police officers.5

B5: 0-4 pts
Are online sources of information controlled or manipulated by the government or other powerful actors to advance a particular political interest? Score: 3 / 4

During the coverage period, individuals linked to the French military proactively manipulated content in a number of African countries.1

In December 2020, Facebook reported detecting a network of fake accounts that posed as residents of Francophone countries in Africa, including Algeria, Burkina Faso, Chad, Cote d’Ivoire, the Central African Republic, Mali, and Niger, to spread messages that aligned with France’s regional policies. The leaders of the campaign, which Facebook linked to individuals associated with the French military, posted in both Arabic and French and were active on both Facebook and Instagram. Facebook noted that the French-linked fake accounts interacted with fake accounts linked to the Russian government that were active in the same countries. In one case, the accounts linked to the French disinformation campaign suggested that Russia had meddled in the election in the Central African Republic.2 French officials “raised doubt” about the coordinated inauthentic behavior report, though the government did not deny the findings.3

Content manipulation remains a problem outside of politics. During the COVID-19 pandemic, false reports and misinformation about the virus spread online,4 as did conspiracy theories propagated by far-right and extremist political parties.5 In March 2019, false reports of child abductions by Romani people spread on social networks, notably Facebook and Snapchat, triggering real-world violence against Romani people living in the suburbs of Paris.6

B6: 0-3 pts
Are there economic or regulatory constraints that negatively affect users’ ability to publish content online? Score: 3 / 3

France has a long history of antipiracy laws and regulatory constraints on online content publication. However, users face few obstacles to publishing online.

An antipiracy law administered by the High Authority for the Dissemination of Works and the Protection of Rights on the Internet (HADOPI) was originally passed in 20091 and was supplemented by a second law passed later that year.2 HADOPI employs a graduated response against copyright infringers, starting with an email warning for the first offense, followed by a registered letter if a second offense occurs within six months. If a third offense occurs within a year of the registered letter, the case can be referred to a court, and the offender may receive a fine.3 In 2019, HADOPI filed more than 1,748 referrals to prosecutors (compared to 1,045 in 2018). Most fines ranged from €50 to €1,500 ($60 to $1,790).4

In December 2020, Parliament empowered the government to adopt a new copyright measure that implements the newly passed EU Copyright Directive (see B3) and expands HADOPI’s powers; it was enacted in May 2021.5 The November 2019 draft proposal would, inter alia, ban websites that host pirated content and promote the use of measures similar to YouTube’s Content ID on other social media platforms in order to automatically detect and remove copyright violations.6 Previous drafts were criticized by freedom of speech activists who feared that some measures, like promoting the use of Content ID, would limit the ability of content creators to benefit from the fair use of copyrighted materials.7 The May 2021 law contains an ad hoc liability system for platforms hosting copyrighted content.8

The principle of net neutrality is enshrined in the law. In November 2018, a joint study published by ARCEP and Northeastern University indicated that net neutrality was better respected in France than in the rest of the EU.9

In July 2021, after the coverage period, the Autorité de la concurrence, France’s competition authority, fined Google €500 million ($598 million) for failing to negotiate licensing fees “in good faith” with French news outlets, which it had been ordered to do in April 2020.10 Google had reached a preliminary agreement with a group of media outlets in January 2021, but it reportedly refused to engage in the negotiations. The ruling also required Google to present an offer to the media outlets within two months.11

B7: 0-4 pts
Does the online information landscape lack diversity and reliability? Score: 4 / 4

France is home to a highly diverse online media environment. There are no restrictions on access to independent online media. There is no censorship of platforms providing content produced by different ethnic, religious, or social groups, including LGBT+ people. However, commentators have observed increased online harassment of LGBT+ users (see C7).1

B8: 0-6 pts
Do conditions impede users’ ability to mobilize, form communities, and campaign, particularly on political and social issues? Score: 6 / 6

There are no restrictions on digital mobilization in France. The state and other actors do not block online organizing tools and collaboration websites.

A number of digital rights and advocacy groups, such as La Quadrature du Net (“Squaring the Net”), are active and play a significant role in protesting the government’s recent moves to expand censorship and surveillance measures without judicial oversight.1

The COVID-19 pandemic temporarily changed the landscape of activism in France. Various strike movements were diminished by the legislative and administrative measures related to COVID-19 starting in March 2020. A few protests moved online, including a protest on May 1, 2020, International Workers’ Day, which demanded improved rights for workers.2 In July 2020, the French Council of State suspended a decree forbidding protests of more than 10 people, to ensure the right to freedom of assembly.3

In November 2020, an estimated 500,000 people came together to protest Article 24 of the Global Security Bill, which would have prevented people from posting images of on-duty police officers online.4 A number of measures in the law, including the ban on publishing images of on-duty police officers, were eventually voided by the Constitutional Council (see B4 and C2).

In April 2021, young protesters organized on social media and congregated in front of the National Assembly to demand the passage of the Climate Bill, an ambitious environmental law, defying an administrative order forbidding them to protest in front of the building. They contested the order in court, and the Paris administrative court suspended it.5

C Violations of User Rights

C1: 0-6 pts
Do the constitution or other laws fail to protect rights such as freedom of expression, access to information, and press freedom, including on the internet, and are they enforced by a judiciary that lacks independence? Score: 5 / 6

Score Change: The score improved from 4 to 5 because the environment for freedom of expression has gradually improved since the 2015 terror attacks, and the Constitutional Council voided provisions in two laws that unduly restricted free expression and access to information.

The French constitution expressly protects press freedom and access to information and guarantees freedom of speech and the protection of journalists.1

France has an independent judiciary, and the rule of law generally prevails in court proceedings. The Constitutional Council has also taken decisions that protect free expression and access to information in practice (see B2 and C2).

However, the government’s response to the 2015 terror attacks has curtailed human rights online in practice. The European Convention on Human Rights, to which France is a signatory, provides for freedom of expression, subject to certain restrictions considered “necessary in a democratic society.”2 Since the Charlie Hebdo attack and the November 2015 terrorist attacks in Paris, the government has adopted various laws, decrees, and administrative provisions limiting fundamental rights, justifying them on public safety grounds.3

Broad new powers under the state of emergency proclaimed in 2015 raised concerns among human rights and digital rights activists.4 While then prime minister Manuel Valls declared that it was a “short term response,”5 the state of emergency was subsequently extended six times until November 2017.6 The counterterrorism law that came into effect in 2017 has also raised concerns among civil rights campaigners for giving prefects and security forces wide-ranging powers with limited judicial oversight. It also introduced a new legal framework for surveillance of wireless communications (see C5).7

C2: 0-4 pts
Are there laws that assign criminal penalties or civil liability for online activities, particularly those that are protected under international human rights standards? Score: 2 / 4

Several laws assign criminal or civil penalties for potentially legitimate online activities. In particular, myriad counterterrorism laws threaten to punish users for such activities. Measures to address terrorism were already in place prior to the 2015–17 state of emergency. The counterterrorism law passed in 2014 penalizes online speech deemed to sympathize with terrorist groups or acts with up to seven years in prison and a €100,000 ($120,000) fine. Speech that incites terrorism is also penalized. Penalties for online offenses are harsher than those for offline offenses, which are punishable by up to five years in prison and a €75,000 ($90,000) fine.1

Another counterterrorism and organized crime law enacted in 2016 imposes up to two years in prison or a €30,000 ($36,000) fine for frequently visiting sites that glorify or incite terrorist acts, unless these visits are in “good faith,” such as for conducting research.2 The Constitutional Council rejected this law in 2017, arguing that the notion of “good faith” was unclear and that the law was not “necessary, appropriate, and proportionate.”3 An amended version was reintroduced as part of a public security law—imposing prison sentences on users who also “manifest adherence” to the ideology expressed at the visited sites4 —but was once again struck down by the Constitutional Council in December 2017.5

Defamation can be a criminal offense in France, punishable by fines or, in circumstances such as “defamation directed against a class of people based on their race, ethnicity, religion, sex, sexual orientation or handicap,” prison time.6

The Guaranteeing the Respect of Republican Principles Law, which was passed in July 2021, after the coverage period, criminalizes the practice of doxing, with higher penalties when these offenses are committed against public officials. Violators can face up to five years in prison or a €75,000 ($90,000) fine, which raises concerns that this law could be used to suppress legitimate criticism of public officials.7

Additionally, the law on global security was passed in April 2021,8 but the provision that criminalized the publication of images of on-duty police officers was ultimately declared unconstitutional the next month (see B4 and B8).9

C3: 0-6 pts
Are individuals penalized for online activities, particularly those that are protected under international human rights standards? Score: 5 / 6

While no citizens faced politically motivated arrests or prosecutions in retaliation for online activities, users have been convicted of inciting or sympathizing with terrorism online. The law’s broad use of the terms “inciting” and “glorifying” terrorism risks targeting speech that has tenuous connections to terrorist acts.

In February 2020, a court convicted an elected member of the Brittany regional legislature of sympathizing with terrorist acts. The official, who had previously been expelled from the far-right National Front party, posted an Islamophobic message on Twitter following a far-right extremist’s attack on two mosques in Christchurch, New Zealand. She received a one-year suspended prison sentence and a three-year ban on running for office.1

In June 2019, Marine Le Pen, leader of the far-right National Rally party, was ordered to stand trial by a correctional court for sharing on Twitter videos of Islamic State (IS) militants beheading a journalist.2

A growing number of individuals, including minors,3 have been investigated, fined, and given prison sentences for “glorifying” terrorism.4

Penalties for threatening state officials are applied to online activities. In May 2019, a man was fined €500 ($600) for sending President Macron a death threat on Facebook.5 In January 2017, a court sentenced a 42-year-old homeless man to three months in jail for a Twitter post that threatened parliamentarian Eric Ciotti.6

C4: 0-4 pts
Does the government place restrictions on anonymous communication or encryption? Score: 2 / 4

Users are not prohibited from using encryption services to protect their communications, including tools such as Tor, although mobile users must provide identification when purchasing a SIM card, potentially reducing anonymity for mobile communications.1 There are no laws requiring providers of encryption services to install backdoors, but providers are required to turn over decryption keys to investigating authorities.2 In October 2020, the Court of Cassation ruled that any person who is asked, even by a police officer, to turn over decryption keys must comply with the request or face criminal charges, overturning a June 2019 ruling.3 The case concerned a drug dealer who used encryption services, refused to unlock his phone during his arrest, and was charged for this refusal.4

In October 2020, following the murder of middle school teacher Samuel Paty, a group of politicians called for restrictions on online anonymity (see B4).5

C5: 0-6 pts
Does state surveillance of internet activities infringe on users’ right to privacy? Score: 2 / 6

Surveillance has escalated in recent years, including through the enactment of a new surveillance law in 2015, which was passed in the wake of the attack on Charlie Hebdo.

The 2015 Intelligence Law allows intelligence agencies to conduct electronic surveillance without a court order.1 An amendment passed in 2016 authorized real-time collection of metadata not only from individuals “identified as a terrorist threat,” but also those “likely to be related” to a terrorist threat and those who belong to the “entourage” of the individuals concerned.2

The Constitutional Council declared three of the law’s provisions unconstitutional in 2015, including one that would have allowed the interception of all international electronic communications. However, an amendment enabling surveillance of electronic communications sent to or received from abroad was adopted later in 2015, shortly after the Paris attacks, for the purposes of “defending and promoting the fundamental interests of the country.”3 In 2016, the Constitutional Council struck down part of the Intelligence Law related to the monitoring of wireless (hertzian) communications, ruling it “disproportionate.”4 Article 15 of the 2017 counterterrorism law reintroduced a legal regime for monitoring wireless communications but limited surveillance to certain devices, such as walkie-talkies, and does not encompass Wi-Fi networks.5

Following an October 2020 CJEU decision confirming the ban on indiscriminate metadata collection and retention,6 the French government asked the Council of State to ignore the four EU rulings on the issue, invoking France’s national sovereignty.7 In April 2021, the Council of State ruled that the current data retention regime was justified due to threats to national security, stipulating that the government should regularly reevaluate whether the security situation justified the continued retention of metadata (see C6).8

The COVID-19 pandemic and ensuing national lockdown raised the specter of monitoring confined and sick people without their consent. In March 2020, Orange shared statistics on mobile users’ travel out of the Paris region in response to a government request, and the telecommunications industry called for legislation to regulate such data sharing (see C6).9 In March 2021, a decree allowed the use of closed-circuit television (CCTV) cameras to generate statistics on the number of people wearing masks on Paris’s public transportation.10

In April 2020, the government announced the development of a Bluetooth contact-tracing app that deploys pseudonymized identifiers and relies on centralized data storage.11 The app, named StopCovid, was originally intended to launch alongside the easing of lockdown measures on May 11, but was released on June 2, after one month of development and parliamentary approval on May 27.12 The CNIL released opinions on the principles of the app on April 2613 and May 26,14 ultimately approving the app while noting that its concerns had been addressed.15 Critics in civil society and Parliament raised concerns about anonymity, the effectiveness of the tool, its potentially discriminatory effects, and basic interoperability issues; as of March 2021, it lacked any form of interoperability with apps in neighboring countries.16 As of March 2021, 13 million French citizens had downloaded the app.17 18 In April 2021, the government added a feature to store health information (the “Green Pass”), including proof of vaccination that users can present to enter restaurants and bars under measures introduced after the coverage period. The new features were criticized for potentially exposing private health information, including vaccination status.19

The state of emergency imposed between 2015 and 2017 included provisions on electronic searches20 and empowered the minister of the interior to take “any measure to ensure the interruption of any online public communication service that incites the commission of terrorist acts or glorifies them.”21 In June 2021, after the coverage period, the National Assembly passed the bill on the prevention of acts of terrorism and on intelligence, which would allow the government to expand the use of algorithms to identify individuals who visit extremist websites.22

In 2019, an amendment that was passed as part of a routine military spending bill (the Military Planning Law, or LPM) extended the state’s surveillance capabilities. To be implemented from 2019 to 2025, the amendment expands access to data collected outside France’s borders by providing domestic antiterrorism investigators with information obtained by the General Directorate for External Security, France’s foreign intelligence agency.23 According to Article 37 of the LPM, it will be possible to perform, within intercepted communications, “data spot checks for the sole purpose of detecting a threat to the fundamental interests of the nation” on any individual or entity that can be traced to French territory.24 Digital rights groups have criticized this expansion of surveillance, which previously affected only French citizens living abroad.25

The LPM covering 2014–19 extended administrative access to user data by enabling designated officials to request such data from ISPs for “national security” reasons, to protect France’s “scientific and economic potential,” and to prevent “terrorism” or “criminality.”26 The office of the prime minister authorizes surveillance, and the National Commission for Security Interception (CNCIS, later renamed the National Intelligence Control Commission, or CNCTR) must be informed within 48 hours in order to approve it.27 Early critics pointed out that the CNCIS lacked appropriate control mechanisms and independence from potential political interference, given that the body comprised only three politicians in 2014.28 While the government argued that the law provided an improved legal framework for practices that had been in place for years,29 it responded to these criticisms at the end of 2015 by enlarging the commission from three members to nine, making room for judges.30

A law related to the fight against organized crime and terrorism, enacted in 2016, also elicited strong reactions from the public.31 The law notably expanded the range of special investigation methods available to prosecutors and investigating judges, which were previously reserved for intelligence services. These include bugging private locations, using phone eavesdropping devices such as international mobile subscriber identity catchers (IMSI catchers), and conducting nighttime searches.32 Relatedly, Article 23 of the Law on Guidelines and Programming for the Performance of Internal Security (LOPPSI 2), adopted in 2011, granted the police the authority to install malware—such as keystroke logging software and Trojan horses—on suspects’ computers in the course of counterterrorism investigations, although a court order must first be obtained.33

Since the GDPR came into force in 2018, individuals have stronger rights to control the use of their personal data. Companies face hefty fines if they fail to comply (see C6).34

The Global Security Law (see B4, B8, and C2), initially passed in April 2021, contained provisions allowing the police to use drones to film protests.35 However, the Constitutional Council declared this provision unconstitutional as well, for undermining privacy protections.36

C6: 0-6 pts
Does monitoring and collection of user data by service providers and other technology companies infringe on users’ right to privacy? Score: 3 / 6

Service providers are required to aid the government in monitoring their users’ communications under certain circumstances. For instance, they must retain user metadata for criminal investigations.1 Although the CJEU ruled against this practice, in April 2021, the Council of State considered that the retention practices were justified (see C5).2

The 2015 Intelligence Law requires ISPs to install so-called “black boxes,” algorithms that analyze users’ metadata for “suspicious” behavior in real time.3 The first black box was deployed in 20174 and two more were added in 2018.5 Reflecting this increase in surveillance capabilities, 12,574 “security interceptions” were undertaken in 2019, a 19 percent increase from 2018. Real-time geolocation tracking in the context of individual surveillance, ostensibly for national security purposes, increased by 46.4 percent from 2018 to 2019 (from 5,191 to 7,601). The number of individuals subject to this surveillance increased slightly, from 22,038 to 22,210.6

In March 2020, Orange shared public statistics on mobile users’ travel out of the Paris region in response to a government request, in order to aid efforts to trace the contacts of people with symptoms of COVID-19 (see C5).7 The telecommunications industry then invited the government to adopt legislation in case more advanced measures were needed. The government created a consultation committee in March 2020 to assess the use of geolocation data to monitor the spread of the COVID-19 pandemic,8 raising concerns among privacy activists that the movements of every patient or confined person would be mapped without their consent.9

In June 2019, the Ministry of the Interior proposed an intelligence law in order to extend the use of black boxes, with the aim of improving automation, prolonging data collection, and taking into account new technologies, such as 5G networks.10

In 2020 and 2021, the government regularly enforced the data protections enshrined in the EU’s GDPR and e-privacy directive, and began enforcing competition measures that concern people’s personal data. In November 2020, the CNIL fined supermarket chain Carrefour €3 million ($3.6 million) for failing to provide adequate information on its loyalty reward and credit card programs, among other issues.11 In December 2020, the CNIL fined Google €100 million ($120 million) and Amazon €35 million ($41.9 million) for violating the e-privacy directive.12 In June 2021, after the coverage period, the Competition Authority fined Google €220 million ($263 million) for abusing its dominant position in the ad-tech market; the decision also compelled Google to adopt new interoperability measures.13

C7: 0-5 pts
Are individuals subject to extralegal intimidation or physical violence by state authorities or any other actor in relation to their online activities? Score: 4 / 5

There were no reported physical attacks against journalists or ordinary users during the coverage period.

In February 2020, Benjamin Griveaux, a candidate running for mayor of Paris, withdrew from the race after a video of him engaging in a sex act, self-recorded for a lover who was not his spouse, was leaked online.1

In February 2019, a group of mostly male journalists were accused of online harassment against women, obese people, and LGBT+ people. Though they carried out harassment campaigns primarily on Twitter, they coordinated their activities in a private Facebook group called the “League of LOL.”2

In April 2019, journalists from the investigative online outlet Disclose were summoned to the General Directorate for Internal Security (DGSI), France’s domestic intelligence agency, after publishing confidential documents about the export of weapons later used by Saudi Arabia and the United Arab Emirates (UAE) in the war in Yemen.3

Online harassment of LGBT+ people increased during the coverage period. The nongovernmental organization (NGO) SOS Homophobie highlighted in its 2020 report an increase in anti-LGBT+ content on social networks, from 383 reported cases in 2018 to 596 in 2019.4 In January 2019, two associations defending LGBT+ rights filed 213 complaints related to insults, incitement to hatred, and calls to murder LGBT+ users on social networks.5 Also in January 2019, YouTuber and LGBT+ advocate Bilal Hassani filed a lawsuit asserting that he was the victim of a large-scale cyberbullying campaign.6

In February 2020, a teenage girl received rape and death threats after she posted a video calling Islam a “religion of hate.”7 In March 2021, Twitter suspended her account but later claimed it was an error, and she was able to regain access.8

C8: 0-3 pts
Are websites, governmental and private entities, service providers, or individual users subject to widespread hacking and other forms of cyberattack? Score: 2 / 3

Several government-affiliated websites experienced cyberattacks during the coverage period. Businesses routinely experience hacking attempts.

Medical institutions, including hospitals, routinely face ransomware attacks1 and breaches of patients’ personal data, particularly amid the COVID-19 pandemic. For instance, in February 2021, Zataz, a cybersecurity blog, and Liberation, a newspaper, revealed a data breach that affected 30 medical laboratories in northwestern France and exposed the data of 500,000 patients.2 Additionally, in March 2020, l’Assistance publique-Hôpitaux de Paris, which manages 39 hospitals in Paris and the surrounding region, experienced a distributed denial-of-service (DDoS) attack, leading the hospital network to temporarily shut down its internet access for a day.3

In June 2020, the national France Télévision group experienced a malware attack, though it had no effect on broadcasting.4

In June 2019, the government’s tax collection website went down on the last day for filing tax declarations. The National Cybersecurity Agency (ANSSI) investigated the case, stating that the attack likely originated from abroad.5 Reports noted that 2,000 tax declarations were altered by hackers.6

In 2020, the ANSSI received 2,287 reports of cyberattack events, including 7 major incidents.7

According to the Global State of Information Security Survey 2018, French business losses related to cyberattacks grew by 50 percent in 2017, with companies losing an average of €2 million ($2.4 million); more than 4,550 cybersecurity incidents were recorded by French companies during that year.8 Companies and institutions also frequently experience ransomware attacks, which are at times targeted, with cybercriminals manually gaining entry into a network to encrypt data; the petroleum company Picoty SA suffered such an attack in May 2019.9 Automated viruses using ransomware from the black market have also been deployed via phishing schemes. A public hospital’s network was affected in this manner in May 2019.10

During the 2017 presidential campaign, Macron’s campaign team announced that it was the “victim of a massive and coordinated hacking attack” after thousands of leaked emails and documents were dumped on the internet in a last-minute effort to destabilize the race.11 Macron had previously confirmed being the target of phishing operations by a group of hackers and denounced the “interference.”12 Later, a Le Monde investigation found that the cyberattack was directed by a US-based neo-Nazi group.13 Observers noted that there was no real police investigation into the leaks.14 After Macron was elected, the government did not follow up on the investigation into the cyberattack’s origins.
