“Orwellian”: EU seeks to mass-scan private messages on WhatsApp and Signal

The European Union is considering controversial proposals to mass scan private communications on encrypted messaging apps for child pornography.

Under the proposed legislation, photos, videos and URLs sent on popular apps such as WhatsApp and Signal would be scanned by an artificial intelligence-based algorithm and checked against a government database of known abusive material.
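To illustrate the mechanism under debate, here is a deliberately simplified Python sketch of on-device matching against a database of known material. The actual proposals envisage AI classifiers and perceptual hashing rather than the exact-match lookup shown here, and every name in the sketch is hypothetical.

```python
# Simplified illustration only: real systems would use perceptual hashing
# and AI classifiers, not exact SHA-256 matching, and these names are invented.
import hashlib

# Hypothetical database of hashes of known abusive material.
KNOWN_HASHES = {
    "3a7bd3e2360a3d29eea436fcfb7e44c735d117c42d1c1835420b6b9942dd4f1b",
}

def matches_database(attachment: bytes) -> bool:
    """Return True if the outgoing attachment matches a known hash."""
    return hashlib.sha256(attachment).hexdigest() in KNOWN_HASHES

# The check would run on the user's device before the message is encrypted and sent.
outgoing_photo = b"...raw image bytes..."
if matches_database(outgoing_photo):
    print("Match found: upload blocked and reported.")
else:
    print("No match: message sent as normal.")
```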

The EU Council, one of the bloc’s two legislative bodies, is due to vote on the legislation, known as Chat Control 2.0, on Thursday.

If adopted by the Council, which represents the governments of the bloc’s 27 member states, the proposals will move on to the next legislative phase and negotiations on the exact terms of the law.

While European officials have claimed that Chat Control 2.0 would help prevent child sexual exploitation, encrypted messaging platforms and privacy advocates have fiercely opposed the proposals, comparing them to the mass surveillance of George Orwell’s 1984.

Why are EU plans so controversial?

Critics claim that Chat Control 2.0 is incompatible with end-to-end encryption, which ensures that messages can only be read by the sender and intended recipient.
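As a minimal sketch of what end-to-end encryption guarantees (using the PyNaCl library; keys and messages here are purely illustrative), a server relaying the message holds neither private key and therefore sees only ciphertext:

```python
# Minimal end-to-end encryption sketch using PyNaCl (pip install pynacl).
# Keys and messages are illustrative; real messengers use protocols such as Signal's.
from nacl.public import PrivateKey, Box

sender_key = PrivateKey.generate()      # stays on the sender's device
recipient_key = PrivateKey.generate()   # stays on the recipient's device

# The sender encrypts with their private key and the recipient's public key.
ciphertext = Box(sender_key, recipient_key.public_key).encrypt(b"hello")

# Only the recipient, holding the matching private key, can decrypt.
plaintext = Box(recipient_key, sender_key.public_key).decrypt(ciphertext)
assert plaintext == b"hello"

# A relaying server sees only `ciphertext`, which is unreadable without a private key.
```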

While the proposed “upload moderation” regime would analyze messages before they are sent, critics have called the measure a “backdoor” by another name that would leave everyone’s communications vulnerable to potential hacking or third-party interference.

“We can call it a backdoor, a front door, or ‘upload moderation’. But whatever we call it, each of these approaches creates a vulnerability that can be exploited by hackers and hostile nation states, removing the protection of unbreakable mathematics and putting a high-value vulnerability in its place,” Meredith Whittaker, president of Signal, said in a statement this week.

Opponents also say the proposals would give enormous power to private companies, many of which are based in the United States, to engage in mass surveillance of European citizens.

Once a backdoor exists, it could be used to search for much more than child pornography, according to Matthew Green, an expert in applied cryptography at Johns Hopkins University.

“People think Chat Control is about specific crimes. No, that’s not what’s at stake. What’s at stake is an architectural decision about how private messaging systems operate: if adopted, these systems will by law be hardwired for mass surveillance. This can be used for any purpose,” Green said in a post on X.

MEP Patrick Breyer of Germany’s Pirate Party likened the proposals to adding government spyware to all EU devices.

“We are on the brink of a surveillance regime as extreme as anything seen anywhere in the free world. Not even Russia and China have managed to plant bugs in our pockets the way the EU intends to,” Breyer said in a statement.

Who supports the law?

Proposals to mass scan private communications for child pornography were first tabled in 2022 by European Home Affairs Commissioner Ylva Johansson, who is Swedish.

Belgium, which currently holds the Council’s rotating presidency, proposed the latest version of the legislation as a compromise after more invasive proposals were rebuffed by the European Parliament.

In the latest iteration, scanning would be limited to photos, videos and URLs, and users would have to consent to it.

Users who withhold consent would not be able to send or share photos and videos.

Supporters say the proposals are necessary to combat the scourge of child exploitation, which officials say is facilitated by encrypted platforms and the emergence of AI-based image-generating software.

In 2022, the US National Center for Missing and Exploited Children said that 68% of the record 32 million cases of child exploitation material reported by service providers came from “chat, messaging or electronic mail services” within the EU.

The UK-based Internet Watch Foundation reported similar findings, identifying the EU as the source of two-thirds of abusive content.

Law enforcement and intelligence agencies have often expressed concern that criminals are using encrypted messaging apps to avoid detection.

Both Telegram and Signal have been used by armed groups ranging from ISIL (ISIS) to the Oath Keepers.

Intelligence agencies, the military, police and some EU ministries would be exempt from the measures, according to leaked documents obtained by French media organization Contexte.

Who opposes the law?

Among EU member states, only Germany, Luxembourg, the Netherlands, Austria and Poland have taken a clear stance against the proposals, according to Breyer, while Italy, Finland, Sweden, Greece and Portugal, among others, have not yet clearly expressed their position.

MEPs from countries including Germany, Luxembourg, the Netherlands and Austria have also expressed concerns, with some of them saying surveillance should only be directed at specific individuals based on probable cause determined by a judge.

In November, the European Parliament, which must approve most EU laws, voted against indiscriminate “chat control” in favor of targeted surveillance.

Tech companies and digital rights groups opposed to the proposals include Mozilla, Signal, Proton, the Electronic Frontier Foundation, European Digital Rights, the Internet Freedom Foundation and the Irish Council for Civil Liberties.

US National Security Agency (NSA) whistleblower Edward Snowden on Wednesday called the proposals a “terrifying measure of mass surveillance”.

How would Chat Control 2.0 work in practice?

Even if Chat Control 2.0 advances, experts say the current Belgian-backed version of the law would be very difficult, if not impossible, to enforce with end-to-end encryption.

In the United Kingdom, which passed a similar online safety law, the government admitted that the technology to analyze encrypted messages without compromising overall security does not yet exist.

Tech platforms such as Signal and WhatsApp, which had threatened to pull out of the UK, saw this as a partial victory.

Critics also say targeting messaging apps will be ineffective in stopping child pornography given the existence of private networks and the dark web.

AI-based algorithms have also proven prone to errors, increasing the risk of innocent people being reported to law enforcement.

The New York Times reported in 2022 that Google’s AI tool for detecting abusive content falsely flagged a stay-at-home father in San Francisco after he sent a photo of his son’s penis to a doctor, leading to a police investigation and the termination of his Google accounts.