The past few years have seen some amazing progress in the deployment of encryption protocols. In less than a decade, encryption protocols like TLS have gone from a novelty to the “table stakes” for running a secure website. Smartphone manufacturers have deployed default device encryption to billions of phones, and end-to-end encrypted messaging and phone calls are now available to more than two billion users.
This progress hasn’t come without a price. In the U.S. and around the world, law enforcement agencies have begun to express concerns about potential loss of access to criminal devices. Some (not entirely well-thought-through) laws have been proposed overseas, and a few have been proposed over here as well. The Department of Justice has recently taken up the call, asking companies to deploy what they call “responsible encryption”.
What is “responsible encryption”? Well, that’s the problem. Nobody on the government side of the debate has been willing to say. In a recent speech, U.S. Deputy Attorney General Rod Rosenstein implored cryptographers to figure it out.
In the midst of this debate, a recent article by GCHQ’s Ian Levy and Crispin Robinson is a breath of fresh air. Unlike their U.S. colleagues, the folks at GCHQ — essentially, the U.K.’s equivalent of NSA — seem much more open, and willing to make technical proposals. Indeed, they make one proposal in the article above: a new solution designed to surveil both messaging and encrypted phone calls.
In the rest of this post I’m going to talk about their ideas as fairly as I can — given that I only have a high-level understanding — and discuss what I think could go wrong.
A brief, illustrated primer on E2E
To give some intuition on the GCHQ proposal, I first need to give a very brief explanation of how end-to-end (E2E) encryption systems actually work.
The bottom-line idea in E2E communication systems is to encrypt messages (or audio/video data) directly to the recipient’s device. In principle this bypasses the need to rely on trustworthy infrastructure run by your provider, including vulnerable servers that can be compromised by attackers. If you’ll forgive my silly illustrations, the standard picture looks something like this:
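To make the picture a little more concrete, here is a toy sketch of the core idea: two parties derive a shared key from each other's public keys and encrypt directly to one another, with no server in the loop. This is deliberately not real cryptography (a real system would use something like X25519 and an authenticated cipher); the small Diffie-Hellman group and XOR "cipher" below are illustrations only.

```python
import hashlib

# Toy Diffie-Hellman over a small Mersenne prime (NOT real cryptography;
# actual E2E systems use X25519 or similar, plus an authenticated cipher).
P = 2**127 - 1
G = 3

def keypair(secret: int):
    return secret, pow(G, secret, P)              # (private, public)

def shared_key(my_priv: int, their_pub: int) -> bytes:
    s = pow(their_pub, my_priv, P)                # both sides compute G^(ab)
    return hashlib.sha256(s.to_bytes(16, "big")).digest()

def xor_cipher(key: bytes, data: bytes) -> bytes:
    # Hash-derived keystream XORed with the data (toy cipher; the same
    # function both encrypts and decrypts).
    stream = b""
    counter = 0
    while len(stream) < len(data):
        stream += hashlib.sha256(key + counter.to_bytes(4, "big")).digest()
        counter += 1
    return bytes(a ^ b for a, b in zip(data, stream))

# Each party encrypts directly to the other's key: no server can read it.
alice_priv, alice_pub = keypair(0x1234567890ABCDEF)
bob_priv, bob_pub = keypair(0xFEDCBA0987654321)

assert shared_key(alice_priv, bob_pub) == shared_key(bob_priv, alice_pub)

ciphertext = xor_cipher(shared_key(alice_priv, bob_pub), b"meet at noon")
plaintext = xor_cipher(shared_key(bob_priv, alice_pub), ciphertext)
```

The important property is visible even in this toy: the provider relays `ciphertext` but never holds the key material needed to read it.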
In the group chat/call setting, the picture changes slightly. But it’s still the same idea. Each participant encrypts his or her data so that only the other(s) can read the messages. The government and service provider are locked out.
The problem here is that these simplified pictures miss all of the technical details. In practice, one of the most challenging problems in any encrypted messaging system is proper key distribution. Before I can encrypt to you, I need to get your keys. So in reality, systems like Apple iMessage, WhatsApp and Facebook Messenger actually look more like this:
The Apple at the top of the picture above stands in for Apple’s “identity service”, which is a cluster of machines running inside of Apple’s datacenters. These servers do many things, but most notably: they act as a directory for looking up the encryption key of the person you’re talking to. If that service misfires and gives the wrong key, the best ciphers in the world won’t help you. You’ll be encrypting to the wrong person.
In some group messaging systems like WhatsApp and iMessage, the centralized portions of the system also handle membership of group chats. In these poorly designed systems, the server can add and remove users from a group, even if none of the participants want this. Speaking figuratively, it’s as though you’re having a conversation in a private room — but the door is unlocked and the building manager controls who can come in and join you.
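The unlocked-door problem can be captured in a few lines. This is a hypothetical model, not any real provider's code: the point is simply that when the membership list lives on the server, nothing in the design requires a participant's consent to change it.

```python
# Toy model of provider-managed group membership (hypothetical names).
class ProviderGroup:
    """A group chat whose membership list lives on the provider's server."""
    def __init__(self, members):
        self.members = set(members)

    def server_add(self, user):
        # Nothing here asks any participant for consent: in this design
        # the server alone decides who is in the room.
        self.members.add(user)

chat = ProviderGroup({"alice", "bob"})
chat.server_add("ghost")            # unilateral change by the server
assert sorted(chat.members) == ["alice", "bob", "ghost"]
```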
(A technical note: in many cases, group and individual messaging are basically the same thing. For example, systems that support multiple devices connected to a single account, like Apple’s iMessage, don’t make a distinction between group chat and individual chat. In those systems, you can think of every iMessage conversation as a group conversation, even if technically the communication is just between two user accounts. That’s because each device attached to the two accounts can be thought of as a “participant” in this group conversation.)
Most E2E systems have basic countermeasures against the flaws described above. For example, each of the above systems is supposed to inform you when someone joins your group chat, or adds a new device to your account. Both WhatsApp and Signal expose “safety numbers” that allow participants to verify that they received the same cryptographic key, which offers a check against dishonest providers.
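The safety-number idea is simple enough to sketch. The function below is loosely modeled on how such fingerprints work (a deterministic digest over both parties' public keys, rendered as digit groups); it is not Signal's actual construction, and the key values are made up.

```python
import hashlib

def safety_number(key_a: bytes, key_b: bytes) -> str:
    # Order-independent fingerprint over both public keys, shown as
    # groups of digits (loosely modeled on Signal's safety numbers;
    # NOT the real algorithm).
    material = b"".join(sorted([key_a, key_b]))
    digest = hashlib.sha512(material).hexdigest()
    digits = str(int(digest, 16))[:30]
    return " ".join(digits[i:i + 5] for i in range(0, 30, 5))

alice_key = b"alice-public-key"      # hypothetical key material
bob_key = b"bob-public-key"

# Both users compute the number locally and compare out of band. A match
# means both hold the same keys, so a dishonest identity service could
# not have quietly substituted its own.
assert safety_number(alice_key, bob_key) == safety_number(bob_key, alice_key)
assert safety_number(alice_key, b"mallory-key") != safety_number(alice_key, bob_key)
```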
All of this brings me to the GCHQ proposal.
What GCHQ wants
The Lawfare article I cited above does not present GCHQ’s proposal in any great detail. Fortunately, both Levy and Robinson have spent most of the summer on the road, giving several public talks about their ideas. I had the privilege of speaking to both earlier this summer when they visited Johns Hopkins, so I think I have a rough handle on what they’re thinking.
In its outlines, the idea they propose is extremely simple: the goal is to take advantage of the weaknesses in existing identity management systems, by adding a “ghost user” (or in some cases, a “ghost device”) to existing group chat and calling sessions. In systems where group membership is updated by the provider infrastructure, this can mostly be done via changes to the server-side components of the system.
I say mostly, because there’s a wrinkle. Even if you modify the server side to add unauthorized users to a conversation, most of the existing E2E systems will notify your targets that a new user has joined the conversation. Generally speaking, having a stranger appear in someone’s group chats is a pretty solid way to tip them off. While the GCHQ proposal doesn’t go into very much detail about this, it seems to follow that the proposal would require providers to suppress those warning messages at the target’s device. So there will be changes to the client application as well as to the server-side infrastructure.
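To see why the client must change too, consider this hypothetical model of the notification path. An unmodified client warns its user whenever anyone joins; the "ghost user" only stays invisible if the client is patched to drop that warning.

```python
# Toy model of the client-side warning path (all names hypothetical).
class Client:
    def __init__(self, name, suppress=()):
        self.name = name
        self.notifications = []
        self.suppress = set(suppress)   # users a modified client hides

    def on_member_added(self, user):
        if user in self.suppress:
            return                      # warning silently dropped
        self.notifications.append(f"{user} joined the chat")

# An unmodified client warns its user about the new participant...
honest = Client("alice")
honest.on_member_added("ghost")
assert honest.notifications == ["ghost joined the chat"]

# ...while a client patched per the proposal stays silent.
patched = Client("alice", suppress={"ghost"})
patched.on_member_added("ghost")
assert patched.notifications == []
```

Since the `suppress` behavior has to ship in the client software itself, it ends up on every user's device, not just the target's.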
(Certain designs like Signal are already hardened against these changes, because group chat setup is handled in an end-to-end encrypted/authenticated fashion by clients, and the server can’t just insert users. But both WhatsApp and iMessage currently seem vulnerable to GCHQ’s proposed approach.)
Hence, the GCHQ proposal represents a very significant change to the design of messaging systems. Presumably the necessary client-side code changes would have to be deployed to all users, since you can’t do targeted software updates just against criminals. (Or rather, if you could rely on targeted software updates, you would just do that instead of the thing that GCHQ is proposing.)
Which brings us to the last piece: how do you get providers to do all of this?
While optimism is nice, it seems unlikely that communication providers are going to voluntarily insert such a powerful eavesdropping capability into their encrypted services, if only because it’s risky. This presumably means that the UK government will have to compel them to do so. One possible means is to use Technical Capability Notices, which are part of the UK’s Investigatory Powers Act. Those notices mandate that a provider offer real-time decryption for sets ranging from one to 10,000 users, and moreover, that providers must design their systems to ensure this capability is available.
The last part is a bit of a problem.
Providers are already closing this loophole
The real problem with the GCHQ proposal is that it targets a weakness in messaging/calling systems that is well known to providers, and moreover, a weakness that providers have been working to close — perhaps because they’re worried that someone just like GCHQ (or much worse) might try to exploit it. GCHQ making this proposal virtually guarantees that those providers will move much, much faster.
And they have quite a few options at their disposal. Over the past several years there have been several proposed designs that offer transparency to users regarding which keys they’re obtaining from a provider’s identity service. These systems operate by having the identity service commit to the keys that are associated with individual users, such that it’s very hard for the provider to change a user’s keys (or to add a device) without everyone noticing.
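At its core, a key-transparency design replaces "trust the directory" with "check a published commitment to the directory". The sketch below compresses that idea into a single hash commitment; real systems (CONIKS-style logs, for example) use Merkle trees and auditors, but the detection property is the same.

```python
import hashlib
import json

def commit(directory: dict) -> str:
    # Hash commitment over the entire key directory. A transparency log
    # would publish this value so every client can verify it saw the
    # same directory as everyone else (simplified: real designs use
    # Merkle trees so individual entries can be checked efficiently).
    blob = json.dumps(sorted(directory.items())).encode()
    return hashlib.sha256(blob).hexdigest()

directory = {"alice": "key-A", "bob": "key-B"}
published = commit(directory)       # what honest clients have observed

# A provider that quietly swaps Bob's key can no longer produce a
# directory consistent with the published commitment.
directory["bob"] = "key-EVIL"
assert commit(directory) != published
```

The result is that swapping a user's key (or adding a device) becomes a publicly detectable act rather than a silent one, which is exactly the property the ghost-user trick depends on not having.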
Similarly, advanced messengers like Signal have “submerged” the group chat management into the encrypted communications, so that the server cannot add new users without the approval of one of the participants. This design, if implemented in more popular services, would seem to kill the GCHQ proposal dead.
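One way to see why this closes the hole: if membership changes must be authenticated under a secret only group members hold, the server simply cannot forge a valid "add user" message. The HMAC sketch below is a loose illustration of that principle, not Signal's actual group protocol.

```python
import hashlib
import hmac

# Hypothetical sketch: membership changes are only accepted if they are
# authenticated under a key that only group members hold (loosely
# modeling the idea of moving group management inside the encrypted
# channel; NOT Signal's real construction).
GROUP_KEY = b"secret shared only by the members"

def make_update(action: str, user: str, key: bytes):
    msg = f"{action}:{user}".encode()
    return msg, hmac.new(key, msg, hashlib.sha256).digest()

def client_accepts(msg: bytes, tag: bytes) -> bool:
    expected = hmac.new(GROUP_KEY, msg, hashlib.sha256).digest()
    return hmac.compare_digest(tag, expected)

# A member can add someone; the server, lacking the group key, cannot.
member_msg, member_tag = make_update("add", "carol", GROUP_KEY)
server_msg, server_tag = make_update("add", "ghost", b"server has no key")

assert client_accepts(member_msg, member_tag)
assert not client_accepts(server_msg, server_tag)
```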
Of course, these solutions highlight the tricky nature of GCHQ’s proposal. Note that in order to take advantage of existing vulnerabilities, GCHQ is going to have to require that providers make changes to their system. But once you’ve opened the door to forcing providers to change their system, where do you stop? What stops the UK government from, say, taking things a step farther, and using the force of law to compel providers not to harden their systems against attacks?
Which brings me to my real problem with the GCHQ proposal. As far as I can see, there are two likely outcomes. In the first, providers harden their system and kill off the vulnerabilities that make GCHQ’s proposal viable. The more interest that governments express towards the proposal, the faster this will happen. In the second outcome, the UK government (and perhaps other governments) force the providers not to lock them out. This second outcome is what I worry about.
More specifically, today’s systems may include existing flaws that are easy to exploit. But once law enforcement begins to rely on those exploits, the systems can never change. The agencies are going to rely on those flaws, possibly forever. Over time what seems like a “modest proposal” could lead us to a world where GCHQ becomes the ultimate architect of Apple and Facebook’s communication systems.
This is not a good outcome, and it’s one that will likely slow down progress for years to come.