Edition 1: End-to-End Encryption and Government Demands
When it comes to end-to-end encryption, what is it that governments actually want?
Reporting on end-to-end encryption often feels like an unending Groundhog Day. It is a regurgitation of similar government demands, similar draft laws across the world, similar reasons — all of which are then contested by a similar mix of social media companies, civil society organisations, and privacy and cybersecurity experts.
But when you set out to trace (see what I did there?) the progression (or regression, if you will) over a few years, it appears that governments seem to be increasingly getting their way and are passing anti-E2EE laws, at times defying science, logic and maths.
Last week, on September 19, the contentious Online Safety Bill cleared the UK’s House of Lords and now just awaits royal assent before it becomes law. In 2022, the EU proposed its Child Sexual Abuse Regulation (derisively called “Chat Control” by its critics) to tackle CSAM online. The US’s proposed EARN IT Act and STOP CSAM Act have been ebbing and flowing since their respective introductions in 2020 and April 2023. Australia has the controversial Assistance and Access Act, 2018.
And obviously, who can forget my favourite (/s) law from India — Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021?
Pretty much every online service that offers E2EE has decried such moves. This includes the likes of WhatsApp and its parent company Meta (formerly Facebook), Signal, Element, Tutanota, Proton, and many others. WhatsApp and Meta have even sued the Indian government over the traceability requirement.
But why and how are governments going after E2E encryption?
There are two facets to this answer — the reasons that are publicly stated and the reasons that are gleaned from how governments use this data to undermine fundamental rights and enable mass surveillance.
The publicly stated reasons are usually given in courts and via open letters to private companies. Seven of the most powerful countries in the world — the US, the UK, Canada, Australia, New Zealand, India and Japan — (in)famously released an international statement in 2020 asking companies to build backdoors to encrypted communications. This statement was subsequently also signed by Singapore, Georgia, Ecuador and Jordan.
The US, the UK and Australia have also repeatedly “requested” Facebook to not implement E2EE across its messaging platforms, a move that Mark Zuckerberg had announced in March 2019 which has still not been implemented. When the first of such letters was sent (and subsequently soundly rejected by Facebook), I had remarked on the irony of three members of the Five Eyes intelligence coalition, arguably the most powerful such coalition in the world, sending an open letter to a private company:
It is to be noted that an open letter has historically been a tool for private individuals and groups to make their concerns publicly heard and are usually addressed to the government and/or editors of major publications. The fact that the governments of USA, UK and Australia, with their elaborate state machinery and communication platforms have signed an open letter to the CEO of a private company is ironic, to say the least.
Such statements, which call for backdoors for law enforcement agencies, always prop up the good ol’ spectre of terrorism, violent extremism and the need to maintain law and order. On terrorism at least, the tide seems to have turned a bit, especially since the Snowden revelations. The goalpost has now shifted to protecting children online from paedophiles and taking down child sexual abuse material (CSAM), a reason that evokes immediate and unconditional public sympathy.
The clamour from the governments has only grown louder.
Now there is a slew of notified and proposed laws across the world that effectively break E2EE. Not all of these laws demand the same thing, but all lead to the same result — breaking E2EE, technically and/or in spirit.
So what are the different asks?
The legal requirements across the world would require messaging services to implement technical solutions at different points in the communication cycle. I have tried to enumerate those according to when they would need to be implemented:
1. Before sending/encrypting the message or client-side scanning:
Laws like the UK’s Online Safety Bill seek to identify, take down, and “prevent” terrorism and CSAM from being communicated on platforms that receive notices from the regulator. Services like WhatsApp and Signal would have to prevent such content from being communicated in the first place. To do that, they would have to implement a client-side scanning mechanism to scan content on users’ devices before it is even encrypted.
This means implementing a mechanism that compares any entered content against a database of hashes of existing and known problematic content. This can be done entirely on the device, without uploading the content to the internet, but then the database of hashes would have to reside on each and every individual device for comparison. Alternatively, the content would be compared against a database online and allowed to be sent only after being vetted against it.
Apple had to dump its plans to introduce client-side scanning for iCloud photos after concluding that such a plan “opens the door for bulk surveillance”. The plan had proposed that when Apple users attempted to upload photos to iCloud, the photos would first be scanned (on their devices) for CSAM by comparing hashes against a database of known CSAM, and only after receiving a “safety voucher” would they be uploaded to iCloud.
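To make the mechanics concrete, here is a minimal Python sketch of hash-based matching as it might run on a device before encryption. Everything in it is invented for illustration: real deployments use perceptual hashes (e.g. PhotoDNA or Apple’s NeuralHash) rather than cryptographic hashes like SHA-256, so that resized or re-encoded images still match.

```python
import hashlib

# Hypothetical database of fingerprints of known prohibited content.
# In a real system this would be distributed to every device (on-device
# scanning) or held by the service (online vetting).
KNOWN_HASHES = {hashlib.sha256(b"known-prohibited-sample").hexdigest()}

def allowed_to_send(payload: bytes) -> bool:
    """Runs on the sender's device, before the message is encrypted."""
    return hashlib.sha256(payload).hexdigest() not in KNOWN_HASHES

# The client either passes the message on to be encrypted and sent,
# or blocks it (and, under some proposals, reports it):
print(allowed_to_send(b"an innocent message"))      # True
print(allowed_to_send(b"known-prohibited-sample"))  # False
```

Note that the check necessarily runs on *every* message — the scanner cannot know in advance which content is problematic — which is why critics describe client-side scanning as scanning everything.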
2. Message in transit:
For messages in transit, there are two different kinds of asks:
A. Identification/Scanning
Practically every regulation dealing with online content requires the platforms to “identify” or “scan” for problematic content, usually CSAM. Usually, proposals for such regulation are followed by media statements by governments about how they are not interested in all the content, but only in the problematic content. Two assumptions are made in such drafting: first, that there are no circumstances in which innocent content can be “mistaken” for CSAM and thus the entire process can be automated. Second, that scanning only for CSAM is possible.
The first assumption can be debunked by a simple statement: algorithms are stupid and do not understand context. This was visible when, in 2021, Google automatically blocked a San Francisco father’s account and reported him to the local police. Why? Because he had shot videos of his toddler son’s infection in intimate areas to share with his son’s doctor during the pandemic.
When it comes to the second assumption, scanning, by the very nature of the activity, means that all content will need to be scanned. If it is scanned before encryption, it is client-side scanning. If it is scanned after encryption, it paves the way for traceability and for building large databases of hashes (point 3).
Although the UK’s OSB does not mandate removal of E2E encryption, it would de facto mean breaking it, because to comply with notices from Ofcom (the regulator), messaging apps such as WhatsApp and Signal would have to scan all messages sent on their platforms to identify and take down terrorist and CSEA (child sexual exploitation and abuse) content.
The EU’s proposed Child Sexual Abuse Regulation requires all “hosting services and interpersonal communication service providers” served with a “detection order” to scan for dissemination of any known and new CSAM, and for solicitation of children. “Dissemination” here means the message in transit. On detection, the service provider is mandated to submit a report that must contain all content data (text, images, video) and other data related to the potential child sexual abuse, whether it is known or new CSAM, and location data related to the potential abuse. This means that service providers will have to scan all sent messages for CSAM.
B. Decryption/Interception
I have combined decryption and interception under the umbrella of “message in transit” because when you have access to a decrypted/unlocked end device, the message is already decrypted after receipt.
Here, the government ask is to have the ability to monitor conversations, usually through a backdoor to the service providers’ servers. Following interception, they also want the ability to decrypt those messages on the servers using a decryption key. Decryption also ties in with the demand to be able to decrypt not just current messages, but also past and future messages (real-time interception and decryption à la BlackBerry).
India’s proposed Telecommunications Bill 2022 combines these two demands in one go. It seeks to allow the government or a specially authorised officer to order the “proscription, interception, detainment or disclosure” of any messages for national security, and law and order purposes.
Given the instantaneous nature of E2EE communication today, there are very few circumstances under which messages/data (not metadata) are stored on the service’s server. For instance, it is only when a message is undelivered on WhatsApp that it is stored on the WhatsApp server, for 30 days, before being deleted forever. Even if law enforcement agencies got backdoor access to this server, they would not be able to decrypt the undelivered message or force WhatsApp to do it, because the decryption key is not issued by a central or certifying authority but uniquely generated for every message session between a unique pair of sender and recipient. This decryption key is available only on the devices of the sender and the receiver and becomes invalid after every message session.
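A toy Diffie-Hellman key agreement illustrates why server access alone yields nothing decryptable. This is a deliberately simplified, insecure sketch — the Signal protocol that WhatsApp uses is far more elaborate (ratcheting, authenticated key exchange) — but the core property is the same: the session key is derived independently on the two devices and never crosses the wire.

```python
import secrets

# Illustrative parameters only; real deployments use much larger,
# carefully chosen groups or elliptic curves.
P = 2**127 - 1  # a Mersenne prime
G = 5

def fresh_keypair():
    private = secrets.randbelow(P - 2) + 1  # never leaves the device
    public = pow(G, private, P)             # this is all a server ever sees
    return private, public

# A new session: both devices generate fresh keys.
alice_priv, alice_pub = fresh_keypair()
bob_priv, bob_pub = fresh_keypair()

# Each side combines its own private key with the other's public key,
# arriving at the same session key without ever transmitting it.
alice_session_key = pow(bob_pub, alice_priv, P)
bob_session_key = pow(alice_pub, bob_priv, P)
assert alice_session_key == bob_session_key

# An agency with backdoor access to the server observes only alice_pub
# and bob_pub; recovering the session key from those is the discrete
# logarithm problem, computationally infeasible at real-world key sizes.
```

This is why a server-side backdoor cannot decrypt E2EE traffic: there is simply no key material on the server to seize.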
3. After receipt/decryption of the message by the intended recipient, or traceability — getting to the first originator:
The idea is that on the receipt of a message, especially a message that has been forwarded multiple times, after it has been decrypted (which it has to be, else how will the recipient read it?), law enforcement agencies should have the ability to immediately know who the first originator was, that is, the first person to send the message. The Indian government notified this into law in the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021. Even before this was codified in law, the Indian government had been asking for traceability to counter mis/disinformation over WhatsApp, especially because it had led to public lynchings and murders.
The Indian government had proposed two potential solutions: tagging each message with the originator’s information, and comparing hash values of problematic messages with what WhatsApp/intermediary has. Both have been repeatedly debunked by cryptographers, WhatsApp, Signal, and the civil society.
WhatsApp had called the entire idea “ineffective and highly susceptible to abuse”, one that would effectively mandate “a new form of mass surveillance”, explaining that to trace even one message, WhatsApp would have to trace every message and would thus end up with “giant databases” with “permanent identity stamp[s]” for every message sent on its platform.
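WhatsApp’s objection is easy to see in a sketch. Here is a hypothetical (not WhatsApp’s actual design) hash-comparison scheme: to answer “who first sent this message?”, the service must record a fingerprint of *every* message at send time, because it cannot know in advance which message will later be deemed problematic.

```python
import hashlib

# Illustrative only: a mapping from message fingerprint to first sender.
originator_db = {}

def record_send(sender, plaintext):
    """Must run for EVERY message, not just 'problematic' ones."""
    digest = hashlib.sha256(plaintext).hexdigest()
    originator_db.setdefault(digest, sender)  # only the first sender sticks

def trace_first_originator(plaintext):
    """Given a problematic message, look up who sent it first."""
    return originator_db.get(hashlib.sha256(plaintext).hexdigest())

record_send("alice", b"viral rumour")
record_send("bob", b"viral rumour")  # a forward; alice remains the originator
print(trace_first_originator(b"viral rumour"))  # alice

# The catch: originator_db grows with every message ever sent (the
# "giant databases" WhatsApp warned about), and under E2EE the server
# cannot compute these hashes itself, so clients would have to report
# them, breaking E2EE in spirit.
```

It also shows the scheme’s fragility: change a single character of the message and the hash no longer matches, so the “originator” trail resets.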
I had written about this in much detail here and here.
4. Good ol’ backdoor
This is a demand repeatedly voiced by law enforcement agencies from the Five Eyes, India and Japan, amongst other countries — like the infamous and rights-eroding PRISM programme, law enforcement agencies should have backdoor access to the services’ servers so that, at any time, they can decrypt any message/call and get all the metadata about it. In terms of implementation, this means that the communication will no longer be end-to-end encrypted but rather end-to-LEA-to-end encrypted; all communications will flow through the law enforcement agencies.