Elon's Encrypted DMs with Matthew Garrett

Are Twitter’s new encrypted DMs unreadable even if you put a gun to Elon’s head? We invited Matthew Garrett on to do a deep decompiled dive into what kind of cryptography actually shipped.

This rough transcript has not been edited and may have errors.

Deirdre: Hello. Welcome to Security Cryptography Whatever. I’m Deirdre.

David: I am David.

Thomas: I’m Thomas.

Deirdre: And we have a special guest today: Matthew Garrett. Hi Matthew.

Matthew: Good evening.

Thomas: We’re so grumpy.

Deirdre: Matthew's our special guest today because he's been doing some reverse analysis and reverse engineering on your Twitter dot APKs, or whatever the extension is for iOS apps, just browsing around to see what ye olde Elon Musks are trying to deploy in your DMs.

We just thought we would have a little bit of a conversation with him to see what he's been finding with the newly announced Twitter encrypted DMs. Note that I did not say end-to-end encrypted. Matthew, do you just, like, decompile applications for fun and profit?

Matthew: I'll say that it's mostly not been profitable. The amount of fun varies a great deal, but most times I'm not actually making money out of this, unsurprisingly. But yeah, it started off mostly as, um, part of analyzing IoT devices and trying to write open source implementations to allow them to be integrated with things like Home Assistant.

Because it turns out that if people don't document the protocols they speak, the, uh, closest you're going to get to a source of truth is whatever the apps do.

Deirdre: Yeah. Yep. Uh, cool. So that continued over to: you're already in the flow of decompiling some apps that talk to Apple, or that might be running on your Google device, and you're just ready to open up whatever is landing on a device near you?

Matthew: Yeah. So, um, once you’ve got into the hang of figuring out what the, uh, most straightforward approaches are to doing this, it’s pretty easy to apply it to, um, apps in other kind of spaces as well.

Deirdre: Mm-hmm. So you were already digging around in the Twitter apps. I'm on Twitter, I think you have been on Twitter as well, and you just happened to notice when they started… I don't know, showing tokens in the decompilation results about… end-to-end encryption? Signal?

Matthew: Yeah. So, people had actually noticed last year that the Twitter app for Android contained various references to Signal.

Um, there were various… hints, you know, references to terminology that's generally used as part of the Signal protocol, bits of code that clearly were intended to integrate with libsignal, the, uh, reference protocol implementation.

Deirdre: Mm-hmm.

Matthew: But then Elon suddenly announced that encrypted DMs would be coming soon. Um, so the sort of natural assumption was, okay, well they’ve probably finished integrating that Signal code. We should see more evidence of that. Cuz the main problem with mobile apps, if you want to launch a feature on a specific day, it’s not like you can just release a new version on that day, typically. Getting things through the review process can take some time, and making sure that everything is lined up is not straightforward. So usually you are going to end up shipping the code early and then having a feature flag to actually enable it. And that way you are also able to actually get testing of it.

Uh, you can turn the feature on for a subset of users, everyone’s carrying around that code, but they’re not necessarily able to make use of it yet.

Deirdre: Mm-hmm.

Matthew: So at the point where it's sort of, "oh, we're expecting to ship this within the next few weeks", it's a good indication that the code is probably out there already.

So I went in to look to see where this Signal stuff was going, and I was kind of surprised to find that there was, if anything, less in the way of references to Signal than there had been a few months earlier.

David: Did the original libsignal code show up pre-acquisition, or post-acquisition but pre-launch of this?

Matthew: I honestly, um, I don't know. There is an amount of code in there, uh, from very close to the acquisition, but the only versions I looked at were post-acquisition. On the other hand, from pretty much everything I've heard, most if not all of the people who had actually worked on that Signal integration were gone pretty early in the Musk regime.

Deirdre: Yeah. Although there were people who had left, and they poignantly posted on the internet when this got, uh, when part of your analysis got released: "I left some design documents. I left some notes behind. Please read them."

But, yeah, so there was mention of Signal in the past, in these sort of teardowns, and multiple reports that a prototype proof-of-concept implementation using libsignal in Twitter DMs had been done, at least pre-Musk acquisition. But what did you find with the thing that actually shipped this week?

Matthew: Yeah. Um, so I'll start by saying that they did not invent new cryptography, uh, the way we normally think of that. So, you know, that's a good start. We're talking about a case where the code is making use of fairly well-tested cryptographic primitives. It's using not necessarily best-in-class things, but, you know… I kind of cosplay someone who knows something about cryptography. You are all much more qualified than me to make representations about this, and I've had to enlist the aid of actual cryptography people to feel even vaguely confident about anything I'm saying here. So if anything I'm saying here makes no sense, that's a me problem, not a you problem. But there's two parts to this. Um, there's the… symmetric encryption key that's used for the actual encryption of the messages, and that's straightforward AES encryption in GCM mode.

Deirdre: Cool. All right.

Matthew: I'm told that's completely fine. Now, the one thing that is not handled there is that the same key is used for the lifetime of the conversation.

And the way the implementation identifies a conversation at the moment is: the conversation ID is the letter E followed by the initial sender's user ID, a hyphen, and then the recipient's user ID, which means…

Deirdre: uh,

Matthew: the key will remain the same for as long as your user IDs do.

Thomas: That's been, that's been, like, part of DMs for a long time, right? That they're identified by that string?

Matthew: Yeah. All that's actually changed here is that they're prefixed with the letter E, I assume to indicate "encrypted", and to differentiate them from existing conversations between the same two people. So, you know, if you're able to somehow generate enough traffic with that key, you are potentially, eventually, going to have nonce reuse, and I'm told that's bad?
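The scheme as described can be sketched in a few lines; the exact string format is inferred from the discussion, and the function name is invented for illustration:

```python
# Sketch of the conversation-ID scheme as described: the letter "E",
# the initial sender's user ID, a hyphen, then the recipient's user ID.
# (Function name and types are illustrative, not from the actual app.)
def encrypted_conversation_id(sender_id: int, recipient_id: int) -> str:
    return f"E{sender_id}-{recipient_id}"

# Because the ID is derived only from the two user IDs, it is stable for
# the life of both accounts, and so is the conversation key tied to it.
print(encrypted_conversation_id(12345, 67890))  # E12345-67890
```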

David: Yeah. Earlier it sounded like a nonce got Thomas’s tongue and that joke fell flat cuz we lost Thomas.

Deirdre: Yeah. Uh, some sort of harrumphing around the fragility of GCM, blah, blah, blah. I generally kind of, uh, am in the same wheelhouse, but we've got bigger problems. We've got bigger fish to fry here than worrying about Elon Musk's hand-chosen people hand-rolling AES-GCM.

I’m not, I don’t think they’re doing that right now. So anyway.

Matthew: Right, so the actual crypto implementation appears to just be BouncyCastle, so a fairly well-used Java cryptography implementation. I’m going to pass zero judgments on how well implemented this is. That’s very much not my problem.

So we have this AES stuff. Now, the saving grace, um, that makes it unlikely that you're gonna see nonce reuse (and we'll see whether the mere mention of nonce reuse is enough for Thomas to fall offline again) is that, at the moment, there's no support for attachments.

Deirdre: Yeah.

Matthew: no pictures, no videos, so you’re only able to send plain text. The risk of you actually sending enough traffic to hit any source of, um, reuse is… not super high.

Deirdre: Mm-hmm. But also, we don't have to worry about correctly handling encrypted attachments, because you can't attach anything. So

Matthew: Well, the problem with the DM format is that this is very, very much layered on top of the existing format in the simplest way possible. And in the existing format, uh, when you send an attachment, the attachment is uploaded to Twitter's CDN, and then a reference to that attachment is included in the message.

So if you just naively enabled encryption, you'd end up with an encrypted reference to an unencrypted attachment.

Deirdre: Yeah, yeah, yeah, yeah, yeah, yeah. You have to handle that as well. And so they’re like, Nope, not doing it. So no attachments

Thomas: Signal's attachments, they're not unencrypted, they're encrypted with, like, an out-of-band secret that's derived from your current session or whatever, right? But it's not like you'd be sending attachments in Signal under the raw GCM key of, you know, whatever your session was.

I don't think they're using GCM either, but, like,

Deirdre: No. They're using AES-CBC with HMAC, so it's authenticated encryption, but it's not GCM.

Matthew: So, okay, you’ve got this AES key, and both ends have a copy of this key and you just encrypt a message with that and then you decrypt at the other end. And that’s super simple. But the obvious question is, how do both ends end up with the same key?

Deirdre: Uhhuh.

Matthew: And, well, this is, like, a key exchange problem we have. When I say "we", I mean I have no fucking idea how any of this works, but apparently Diffie-Hellman is this sort of magical thing where you say it into a mirror three times and then both parties end up with a shared secret without ever transmitting it, as long as you have an asymmetric keypair to begin with.
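The "mirror" trick can be illustrated with classic finite-field Diffie-Hellman. The implementation under discussion uses elliptic-curve Diffie-Hellman over P-256 instead, but the shape is the same; the parameters below are toy values for illustration, not secure ones:

```python
import secrets

# Toy finite-field Diffie-Hellman. A real deployment would use a large
# standardized group (or elliptic curves, as Twitter does with P-256);
# this 64-bit prime is for illustration only -- NOT secure parameters.
p = 0xFFFFFFFFFFFFFFC5  # 2**64 - 59, the largest prime below 2**64
g = 2

def keypair() -> tuple[int, int]:
    priv = secrets.randbelow(p - 2) + 1
    return priv, pow(g, priv, p)

a_priv, a_pub = keypair()  # Alice
b_priv, b_pub = keypair()  # Bob

# Each side combines its own private key with the other's public key...
shared_a = pow(b_pub, a_priv, p)
shared_b = pow(a_pub, b_priv, p)
assert shared_a == shared_b  # ...and both arrive at the same secret
```

Note that nothing here authenticates who is on the other side of the exchange, which is exactly the gap the conversation turns to next.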

Deirdre: Mm-hmm.

Of a certain variety. And not to be confused with Triple Diffie-Hellman, which is a different saying-Diffie-Hellman-into-a-mirror-three-times thing, which this implementation is not using.

Matthew: This implementation is extremely not using Triple Diffie-Hellman. Um, like, if I say this implementation is kind of what you might get if you asked Stack Exchange to write you an encrypted messenger protocol… I'd be super happy if my students…

Thomas: So,

Matthew: Um,

Thomas: So, Triple Diffie-Hellman is what you would use if you wanted to do an authenticated key exchange, right? So you have the two problems of the key exchange, right? You want both sides to share a secret and be able to encrypt to each other. And then also, ideally, if I'm talking to user X with key Y, you'd like to be able to prove to yourself that you've actually established a key with user X, right?

So, Triple Diffie-Hellman would matter if there was a notion of cryptographic identity that you were trying to work with, right? If you're just doing, like, a vanilla elliptic curve Diffie-Hellman, then, like, a man in the middle can intercept and, you know, just proxy Diffie-Hellman to both sides and make it look like it's working, right? But I assume there's some kind of, like, out-of-band key lookup thing going on here, where you know what key you're supposed to be talking to; you're not being presented on the fly with new keys. The weird thing here is, like, my understanding of this from the write-ups, and from what Steve Weiss wrote up and all that, right, was that the sender is generating an ephemeral key?

Like, they're just coming up with a key on the fly and they're using that to encrypt to the public key of the receiver, which is, like, a totally normal way to build, you know, an elliptic-curve-Diffie-Hellman-based "I wanna send a file to this user" scheme. That's how you would do it, right?

Matthew: I think there is a reason for that, which we’ll get to when we start talking about multiple device support. At least that’s my assumption about why it’s done this way.

So each device, and when I say device, that includes a browser. Now, my initial assumption was that there wasn't going to be support in the browser for this, because there were hints in the iOS clients that you could only have one active encrypted messaging device at once, and that seemed to preclude any sort of web management.

Turns out I was wrong there. That code’s never called. Um, and we’ll get back to that later,

Deirdre: Okay.

Matthew: but… So every device generates a keypair using the NIST P-256 curve. Um, like, those are not state of the art…

Thomas: They’re fine.

Matthew: To the best of my understanding. I, I don't think there's…

Deirdre: They’re

David: good curves, Brent.

Matthew: Um, but

Deirdre: all, but those ones are fine now. Fine, whatever.

Matthew: The main reason people tend to use these curves is that they are what's supported by hardware implementations, if you want to store your cryptographic keys in hardware such that the private key can never be extracted, can't be copied if your device is rooted, that sort of thing. Uh, like, TPMs support P-256; the Secure Enclave on iOS devices and Macs supports P-256.

Android devices with hardware key stores tend to support P-256, and support for the 25519 curves is generally much less common. So it's kind of weird to then discover that while they're using P-256, they're just generating these keys in software and storing them in the local filesystem. Uh, in the Android case, they're actually storing them as preferences, which, as far as I can tell, means that if you, uh, buy a new Android device and plug it into your existing one, then the keys will actually be copied from the old device to the new device, which sounds like a nice user convenience, but I'm really not certain that this is the thought process that resulted in that.

It does also mean that if someone's able to compromise the, um, device transfer protocol, you are going to have a bad time in terms of your device identity being duplicated. Signal, for instance, does not make use of the platform functionality for this; it has its own transfer protocol built on top of that.

Deirdre: That’s sesame?

Matthew: Uh, I don’t know if it’s part of Sesame or if it’s an additional thing, but basically, um, you need to provide a passphrase on both sides or something. Um, there’s some validation that you are the legitimate user on both ends before that can occur.

Deirdre: It does a sort of daisy-chaining of auth from a prior device to a new device and stuff. Quite a nice flow that's evolved over the years. At least for the Android app backup system, that is supposed to be "end-to-end encrypted", I'm using air quotes here. The exact properties are not well known.

And this is, like, I'm gonna chalk this up to sort of marketing. Versus, like, iCloud is end-to-end encrypted, and they've got more and better end-to-end encrypted stuff than they used to in iCloud storage, versus Android and Google and all that stuff.

Matthew: Something I do want to make clear here is that this is purely for the, uh, USB-cable physical device-to-device copying. The keystore, at least on the Android side, is excluded from the cloud backup. So, um,

Deirdre: Okay. All right.

David: I mean, we're still talking about what's effectively an XML file in the, like, operating system, right? Any idiot with adb shell can also just cat that out. I mean, granted, you probably know who's doing that on your device, right? But

Matthew: Well, it's constrained by the standard permissions model, so, uh, adb is not enough. You're not able to read these files as an unprivileged adb user. You would need to have a rooted phone to be able to get at that, and, you know, don't do encryption on a rooted phone. That's, like, a bad idea.

So each device generates a keypair using the P-256 curves, and then the public half is uploaded to Twitter.

There's a new API endpoint, "key manager", something like that, and it just registers a key. Right now that just appends the key to a list of keys. There's another endpoint you hit, and you just get back some JSON which lists all the public keys associated with a user ID. And that's an unauthenticated endpoint, or rather, you may need to demonstrate that you have a valid current Twitter cookie, but you can hit it with any user ID and get the list of their public keys.
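A hypothetical sketch of what such a lookup response might look like; the field names and structure here are guesses for illustration, not the actual Twitter API:

```python
import json

# Hypothetical shape of the key-lookup response described above.
# Endpoint behavior, field names, and values are all invented for
# illustration; only the "flat list of uncertified keys" idea is real.
response_body = json.dumps({
    "user_id": "12345",
    "device_keys": [
        {"device_id": "a1b2c3", "public_key": "<base64 P-256 point>"},
        {"device_id": "d4e5f6", "public_key": "<base64 P-256 point>"},
    ],
})

keys = json.loads(response_body)["device_keys"]
# Nothing certifies these entries: the server could substitute any key
# here and the querying client would have no way to tell.
for entry in keys:
    print(entry["device_id"], entry["public_key"])
```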

Deirdre: Oh,

Matthew: Which tells you how many devices they have. So all that you see is, like, a public key and an opaque device ID. The device ID is randomly generated, and device IDs are per user, so there's no risk of collisions here: two users could have the same device ID and that's not a problem, it's only used in reference to a specific user.

So that seems okay.

But that's it. The keys are not certified in any way; there's no additional metadata around them. And right now, what happens if you want to message someone is: your client just looks up the user ID for the user that you want to DM,

Deirdre: mm-hmm.

Matthew: hits this endpoint, gets back a list of public keys, and then goes through the ephemeral key generation dance.

And uses that to do a key exchange for each of those keys, with the same AES key. So basically, the first client generates this AES key, and that's just from a secure random source (again, that seems okay); encrypts the message with it; and then does this key exchange dance to generate X encrypted conversation keys. So if the recipient has four devices, you're going to generate four encrypted conversation keys, one corresponding to each of the public keys. And then those encrypted things are just included in the message: not in the message content, but in the, uh, overall protocol message.
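Structurally, the fan-out described here looks something like the following sketch. The XOR-with-hash "wrap" is a deliberately simplified stand-in (the real scheme presumably wraps the key under an ephemeral-static ECDH secret, which we can't see); only the one-wrapped-key-per-device shape is the point:

```python
import hashlib
import secrets

# Illustrative "wrap": XOR the conversation key with a hash of a shared
# secret. This is a stand-in for the real key-wrapping construction,
# NOT a real or recommended one.
def wrap(conversation_key: bytes, shared_secret: bytes) -> bytes:
    pad = hashlib.sha256(shared_secret).digest()
    return bytes(a ^ b for a, b in zip(conversation_key, pad))

unwrap = wrap  # XOR is its own inverse

# One conversation key from a secure RNG, as described in the episode.
conversation_key = secrets.token_bytes(32)

# One shared secret per enrolled recipient device (in reality each would
# come from an ephemeral-static ECDH with that device's public key).
device_secrets = {"dev1": b"secret-1", "dev2": b"secret-2"}
wrapped = {dev: wrap(conversation_key, s) for dev, s in device_secrets.items()}

# A recipient device finds the entry matching its device ID and
# recovers the single shared conversation key:
assert unwrap(wrapped["dev2"], b"secret-2") == conversation_key
```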

So that's then uploaded to Twitter's servers. The recipient then connects their device to Twitter, downloads their DMs, and sees that there's a new conversation. It's prepended with an E, so it knows it's an encrypted conversation, and it's displayed separately: if you've had a previous unencrypted conversation with the same person, it'll be listed as two separate conversations.

You open that, it sees the, um, set of encrypted conversation keys, and each of those has an associated device ID. It finds the one that matches its device ID, does its half of the key exchange dance, and ends up with an unencrypted conversation key. It stores that unencrypted conversation key in a SQL database locally (and will in future look that up rather than doing this again), and then decrypts the message text.
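The cache-then-decrypt flow could be sketched like this; the schema and column names are invented, but note that, as described, the conversation key sits in the local database in plaintext:

```python
import sqlite3

# Sketch of the local conversation-key cache described above. Table and
# column names are invented for illustration. The key is stored
# unencrypted, mirroring the behavior described in the episode.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE conv_keys (conversation_id TEXT PRIMARY KEY, key BLOB)")

def get_or_unwrap(conversation_id: str, unwrap_fn) -> bytes:
    row = db.execute(
        "SELECT key FROM conv_keys WHERE conversation_id = ?",
        (conversation_id,),
    ).fetchone()
    if row:               # cached: skip the key exchange entirely
        return row[0]
    key = unwrap_fn()     # first sight of the conversation: unwrap once
    db.execute("INSERT INTO conv_keys VALUES (?, ?)", (conversation_id, key))
    return key

k1 = get_or_unwrap("E1-2", lambda: b"\x01" * 32)
k2 = get_or_unwrap("E1-2", lambda: b"\x02" * 32)  # not called; cache hit
assert k1 == k2 == b"\x01" * 32
```

The consequence of this design is discussed next: anyone who reads this cached key once can decrypt the whole conversation, past and future.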

Now one weirdness here is the encrypted conversation keys are only sent in the first message in a conversation, which means if a user adds an additional device later,

Deirdre: They’re

Matthew: they’ll never get a copy of the conversation key. You can only read a conversation from the devices that were enrolled at the time that conversation was initiated.

David: That's actually kind of an interesting property.

Matthew: Uh, so it’s kind of interesting in a, this is like, sounds like a, um,

David: Like if you phrase it correctly, it sounds like a security boundary.

Matthew: but

Deirdre: yeah.

Matthew: But one thing, like I said: there's never any rekeying of the AES key, and that's eventually going to have nonce reuse. But this also means that there's no forward secrecy.

If anybody obtains that key at any point in time, they can not only decrypt future messages, they can decrypt the past messages as well.

And those messages are stored on Twitter servers forever. Uh, so once you have a valid conversation key, you can decrypt the entire history.

Deirdre: Everything going backward, and in theory everything going forward, if you just, you know, stay quiet and let them keep talking. One: I, I, I don't like it. Putting it on the record: I don't like it. Two: it occurs to me that there is no contextual commitment in the generating of these keys at all. It's just, you generate it from an RNG and then you just hand it over under this Diffie-Hellman-encrypted first message. Whereas in something like Signal, you do your Diffie-Hellman, and then you use that to seed, like, a KDF, and then you use the output of that KDF, um, which is like hashing and all this other stuff, to generate your AES key, to do your AES-CBC. But also, another nice thing about Signal: you are binding who can generate an AES key to encrypt your messages to the parties who did the Diffie-Hellman. That is not the case in how they're generating this at all.

It's just a random number generator, and they just handed it off. So that's one bad thing. Two: as you said, there's no forward secrecy. It's the same key, indefinitely, forever. Whereas with Signal, they have the Double Ratchet, so that every message you send, you're doing another Diffie-Hellman and you're regenerating a key from that.

And if you haven't gotten another message from the other party in a while, you do the other part of the Double Ratchet, which is the symmetric ratchet, and you just keep hashing and, like, applying changes to the AES key that is derived from the whole context of your session, including who did all these Diffie-Hellmans at the very beginning.

And so going forward, you have a different key per message, it's bound to the people who set up the session in the first place, and if you keep sending messages after a compromise, it will heal, which is post-compromise security. And none of that is present in this protocol, and I don't like it.
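The symmetric half of the ratchet Deirdre describes can be sketched with a plain hash chain; this is a simplified illustration of the idea, not Signal's actual Double Ratchet:

```python
import hashlib

# Minimal symmetric ratchet sketch: each step derives a per-message key
# and the next chain key from the current chain key, with domain
# separation. Old chain keys are discarded. Simplified illustration only.
def ratchet(chain_key: bytes) -> tuple[bytes, bytes]:
    message_key = hashlib.sha256(chain_key + b"msg").digest()
    next_chain = hashlib.sha256(chain_key + b"chain").digest()
    return message_key, next_chain

chain = b"\x00" * 32  # would come from the initial Diffie-Hellman exchange
keys = []
for _ in range(3):
    mk, chain = ratchet(chain)  # advance the chain, forgetting the old key
    keys.append(mk)

# Every message gets a fresh key, and because SHA-256 is one-way,
# stealing today's chain key doesn't reveal yesterday's message keys.
assert len(set(keys)) == 3
```

Signal's real design also mixes in fresh Diffie-Hellman outputs (the other ratchet), which is what provides the post-compromise healing mentioned above; a pure hash chain alone gives forward secrecy but not healing.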

Matthew: Yeah. So one other thing to note here is that the encryption key is bound to the conversation. The first issue, and this is mentioned in their docs: you can only have a maximum of 10 registered devices. If you have more than 10 devices, then you can't register any new ones. And a browser counts as a device.

If you, uh, reinstall the app on your phone, that counts as a new device. So it's relatively easy to accidentally hit 10, and there's no way to remove one of them. That's not ideal. But my understanding is that, in this scenario, the actual conversation key is not bound to the device in any way; it's bound to the conversation. So every device in the conversation is sharing the same conversation key. In Signal, my understanding is that each message is independently encrypted to each recipient device,

Deirdre: Mm-hmm.

Matthew: with, uh, there just being a different way of doing key exchange. And that means that if you revoke a device from your Signal account,

Deirdre: Mm-hmm.

Matthew: nothing will encrypt messages to that device anymore. So the attacker only has whatever was on there. Uh, even if they can intercept future messages, they don’t have the key material they would need to decrypt them. In the Twitter case, if a device you own is compromised, then even if you revoke access to that, the attacker has your conversation keys and can continue decrypting the conversations in future with that device, even though trust in that device was revoked.

Which you can't currently do from the app. You can do it with curl: you can hit the API endpoint and unregister the device, but right now there's no exposed UI. So that's not ideal. But I think the real thing, the thing that is kind of expressly called out in the documentation for this, and one of the things I'm really uncomfortable about here, is that the documentation is, to be fair, pretty open and honest that this is not particularly good.

The app says none of that. There's no indication in the app that any of these restrictions, any of these constraints, apply. Um, unless you go and read this help page, you're not going to see this anywhere.

Deirdre: Yep.

Matthew: But there's no way for you to verify the device keys that you're encrypting the conversation keys to. If someone adds a new device, you don't get notified. If someone's identity changes, you don't get notified. And right now that's not too much of a problem, because of the restriction that a newly added device just won't be sent conversation keys again. So, like, that is maybe not a big deal.

But at the point where you initially register a device, if Twitter were to just lie about the recipient's pubkey and hand you one they control, then you've got the opportunity for a…

David: A very ominous "we don't know". Like, both ominous and a Three Stooges sketch at the same time.

Matthew: So, you have no absolute confidence, and you can't even verify this out of band; this information is just not shown to you. You could hit the endpoint yourself, but you don't know that the version that's going to be presented to the device is the same as the version that's going to be presented when you do this. I don't think anyone at Twitter is going to go to this trouble… maliciously. Uh, I think, if law enforcement were to demand that they do so, that could be done. Um, it would be a bunch of engineering work, and it would be very…

David: So given that, let's say the alternative's presumably, like, all of these DMs are sitting there and Musk can just run, you know, "SELECT * FROM dms" or something like that. Are we better off or not?

Matthew: I think it's very easy to get sort of stuck in the negatives in terms of comparing this to something like Signal. The security guarantees you have are much, much weaker. If your threat model is law enforcement, or anyone who could compel Twitter to hand stuff over, this is awful; you should really not be using this for that. But if your threat model is either Twitter gets compromised and the entire DM database gets stolen by someone, or an insider is just digging through stuff (and, you know, maybe that's Elon, maybe it's someone else), this is enough to prevent that. It looks like the keys are genuinely keys that exist on the endpoints rather than on Twitter's servers, and in the absence of changing the behavior of the service, there's no way for an attacker to gain access. So it would be much harder than just being able to pull stuff from a database, which right now they could do. Right now, all DMs are, with luck, encrypted at rest, in the sense that if a bunch of hard drives fell out of a truck they were being transported on, for instance, you'd probably be fine. But if you had access to Twitter's prod environment, right now you would be able to read basically all DMs.

Deirdre: Yeah. Or you have admin access, and someone from the ruling government in India just, uh, really insists that you have to comply with the new law over DMs. Tick, tick, tick, tick.

Matthew: You know, or alternatively, you know, Saudi Arabia…

Deirdre: That too. You don't even have to go that far. You're employed at Twitter headquarters, but, uh, you just, uh, send some files back home.

David: I think we, as a security community, drastically underrate the value of, if nothing else, moving something from "you trust the server, and there's a database of everything on the server" to "you still trust the server, but now that database is gone". Um, it's like a big improvement in a lot of cases.

Matthew: Yeah, I’ve tried to frame this as this is better than nothing. If you are going to be engaging in Twitter DMs with someone, then yeah, if you are able to flip that encryption on, which right now involves you paying $8 a month unless you’re

Deirdre: God, I forgot about that.

Matthew: Uh, then sure. And as long as you're willing to accept that the current model means that if you only have your phone registered and you lose that phone, then you are going to, at the moment, lose access to all of those existing messages.

If those are things you can live with, then this is absolutely better than nothing. But as other people have suggested, if you're in any way concerned about actually remaining secure against, like, state-level actors, then the best way to use this is to use Twitter encrypted DMs to get someone's Signal number, and possibly their safety number, and then transition to Signal.

Deirdre: Yeah. I think people are more comfortable with ephemerality in texts than some people would like to believe, at least in texts that are associated with some other service. If the service is primarily, texting and chatting like WhatsApp or like Signal. I think there’s a different kind of expectation.

Like, people get sad if, uh, sometimes the photo you sent them disappears after four weeks, which is my default in Signal. But I think Twitter DMs are a different ball of wax. Other DM systems, I think, could tolerate a lot more ephemerality, but that's my personal opinion.

Matthew: So the choice not to implement… well, the documentation asserts that not using forward secrecy was a deliberate choice, and that it was deliberately to ensure that access to messages could be retained. Now, whether this is a… we have not seen any indication of what the design goals were.

There are no design documents we can look at, with a bunch of objectives, to check whether the implementation meets those. There's a promise that a white paper will be released at some point. The lack of forward secrecy could be a "this is hard, let's not do this", and then assert, "well, actually, there's this feature benefit from not doing it". So, cool.

Or it could just be, um, "it's hard, let's not do it, and let's pretend that it was for good reasons." Uh, it's impossible from the outside to make that determination.

David: or it could be, we’ve never heard of forward secrecy.

Matthew: Well, yeah. Um, so I did email the people involved in this project, a week and a half before they launched, and raised several of these concerns, and did not hear anything back. Which, I mean, fair. Uh, if I were working for Elon and suddenly got an email from some people saying, "Hey, this feature, it might actually be bad, maybe you shouldn't launch it", then I would very much be, uh, "I did not receive this email", I suspect.

Deirdre: Mm-hmm. I'm reading their help center page: "Our customers expect their unencrypted DM history to be stored in the cloud, downloadable on any device they're logged into. Unfortunately, this user experience doesn't work well with forward secure messaging protocols." That's not true, because Signal stores the encrypted messages until they can be delivered.

Um, and then when you sync to a new device, there is a syncing of the encrypted messages.

Um, and then you process them in order, and that is how you keep the forward secrecy, because all the messages depend on the previous messages before them, and Bob's your uncle. You can regenerate them and process them in a post-compromise-secure manner.

Matthew: Uh, at the protocol level that may be possible; at a practical level in Signal, if you add a new machine, then it does not sync any of the old messages from the existing devices.

Deirdre: You have to tell it to pair, and you pair, like, not via the Signal delivery service; you kind of go the other way around. But I don't know if it's actually doing the ratcheting. It has to preserve some session between devices, including, like, literally the ratcheting session and the message history, to continue that forward-secure session there.

So there is some of that happening under the hood. And doing all that work, it's work, but the fact that they're saying it "doesn't work well with forward secure messaging protocols" is incorrect. It requires a little bit more work, but it's very doable.

David: I have never synced Signal message history between two phones. I bet that on this, uh, podcast, at most two out of the four of us have ever done this, if that.

Deirdre: I've done it multiple times. Multiple computers and phones.

David: Yeah, so Thomas and I haven't. I don't know, Matthew, have you done this before?

Matthew: Whenever I've added a new device, it has said, uh, old messages are not synchronized to new devices for security reasons. But I use Android; maybe iOS is a different universe.

Deirdre: I use Android and Signal Desktop, so I don’t know, maybe it’s a Signal Desktop thing.

David: I just get a new phone, and then my safety number changes, and then no one cares. But there is, you know, some truth to the fact that maintaining forward secrecy is hard. Saying, like, "cloud-synchronized messages" also kind of implies you don’t have the other device sitting around with a USB cable or a Bluetooth connection. There are solutions here, but, like, I don’t think what Signal does is feasible.

Matthew: I really do not want to think about Twitter trying to implement stuff in SGX.

Deirdre: Well, I mean, you can do this without SGX. Signal chose to use SGX to underpin their secure backup thing with passkeys, and

Thomas: The cat is already out of the bag on who you’re talking to on Twitter. There’s, I mean, it’s worth mentioning, right, that this whole time we’re talking, we’re comparing this to Signal, and it’s like, there’s not really much of a comparison, right?

Most of the point of Signal is just that law enforcement people can’t ask "Who were you talking to?" Right? Forget the message content. Like, just a huge amount of the engineering decisions and the UX compromises they’re making, it’s to avoid answering the metadata question, you know, what users you’re talking to. The reason for that is that metadata is often as valuable as the message. You know, an evil ex-partner or something like that, working at Twitter, that’s, you know, abusing their ex-spouse or whatever, right? The list of who they’re talking to is probably enough for them to, you know, do harassment stuff, right?

The content might not be as important to them as, like, did you talk to your doctor, or this person, or whatever, right? So, like, some of the most important stuff that we’re concerned about, Twitter isn’t even touching.

Deirdre: Yep.

Thomas: Not even.

Deirdre: Yep.

Yeah. Not that the user in the app will be able to distinguish this, but I will credit their help center article that says: these are encrypted, but they are not end-to-end encrypted, because of the, uh, the Musk-in-the-middle of it all.

So, you know, metadata security aside, which even WhatsApp doesn’t have a good, uh, story for. WhatsApp, that’s kind of where in the ranking system, if you wanna rank Signal at the top and then WhatsApp underneath, it’s because they do have a lot of access to this metadata about who is talking to whom, and who is in a group, because they don’t have group membership privacy from WhatsApp, the service, and that can be subpoenaed. Yeah, Twitter’s not even touching that, and they’re not even calling it end-to-end encrypted DMs, but they are, they are encrypted. So.

Matthew: And I think in a very technical sense, they are end-to-end encrypted, in the sense that the keys are genuinely on the endpoints, not somewhere else. But you’ve then got the problem that Twitter could right now just become one of the ends if they wanted to.

Thomas: Yeah, I mean, that’s, like, the classic breakdown of end-to-end encrypted messaging, right? It’s, forward secrecy is not the number one problem in messaging. It’s, you know, key identity and group membership; I’d call that the number one problem. It’s the reason why things that are supposedly end-to-end end up not being end-to-end.

It turns out the server can just add themselves to groups or register themselves as a key, and then all the rest of the mechanism is irrelevant. Right.

Deirdre: mm-hmm.

Matthew: I think it’s actually interesting to talk about the group aspects of this a little, because of the architecture that they’ve chosen here. The obvious way to extend this to support group messages, which it currently doesn’t, follows the behavior in Twitter at the moment: if you’re added to a group, then you potentially have access to older messages in there as well.

And the implication is, well, you are going to need to share the existing conversation key with new members of the group. Because of the lack of forward secrecy, uh, sorry, because none of the keys are use-specific, adding one new member to a group would then result in that member having a key that can decrypt old messages, even if you haven’t forwarded the old messages to them.

So it seems hard to imagine how to extend the choices they’ve made here into supporting more complex use cases. The set of decisions here was fairly pragmatic, in the sense of: use the simplest set of crypto possible that will achieve "I want to talk to one other person" in something that you could kind of squint at and say, I guess it’s end-to-end encrypted in theory, if not in spirit. Uh, but extending that to do everything else people want in an encrypted messenger is going to be extremely hard without basically rewriting all of this from scratch.
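A toy sketch of the problem Matthew describes: with one static conversation key and no per-message key derivation, handing that key to a newly added group member retroactively decrypts the entire history. This is an illustration only, not Twitter’s actual scheme; the HMAC-keystream "cipher" and all values here are made up for the demo.

```python
import hmac
import hashlib

def toy_encrypt(conversation_key: bytes, index: int, plaintext: bytes) -> bytes:
    # Toy stream cipher: keystream = HMAC(key, message index); XOR is its own inverse,
    # so the same call both encrypts and decrypts.
    stream = hmac.new(conversation_key, index.to_bytes(8, "big"), hashlib.sha256).digest()
    return bytes(p ^ s for p, s in zip(plaintext, stream))

conv_key = hashlib.sha256(b"static per-conversation key").digest()
history = [toy_encrypt(conv_key, i, m) for i, m in enumerate([b"hi", b"old secret"])]

# A member added later is handed the same static key, and can decrypt everything prior:
assert toy_encrypt(conv_key, 1, history[1]) == b"old secret"
```

With per-message (use-specific) keys that are deleted after use, handing over the current key would not unlock old ciphertexts.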

Thomas: It seems like a basic thing you can say about this system, where there’s, like, an unauthenticated, you know, P-curve Diffie-Hellman exchange to send things. It’s like the most naive possible way of doing public key encryption with ECDH, and it’s a single GCM key for, you know, an entire DM session, forever. It seems like one thing you can probably say about the system is that no cryptography engineer reviewed the design.

Matthew: So,

Thomas: I’ll tell you, you two are more qualified than I am to make that claim, cause I am not a cryptography engineer, but my guess is that no cryptography engineer reviewed this.

They might have looked at the code and said, you know, the way you’re doing nonces is gonna be fine, although this is terrible and could blow up eventually. But nobody looked at that design and said, this is how we should do it. Like, there are just basic things that you would do differently if you were being cryptographically competent.
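Thomas’s "could blow up eventually" worry is quantifiable. With a single long-lived AES-GCM key and random 96-bit nonces, the birthday bound governs the odds of a catastrophic nonce reuse; this sketch just does that arithmetic (it assumes random nonce generation, which isn’t established about Twitter’s implementation here).

```python
def nonce_collision_bound(n_messages: int, nonce_bits: int = 96) -> float:
    # Birthday bound: probability of at least one nonce collision is at most
    # n*(n-1)/2 pairs, each colliding with probability 2^-nonce_bits.
    return n_messages * (n_messages - 1) / 2 / 2 ** nonce_bits

# NIST SP 800-38D caps random-IV GCM at 2^32 invocations per key for this reason;
# past that point, collision risk (which breaks GCM's confidentiality and
# authenticity) grows quadratically with the message count.
p = nonce_collision_bound(2 ** 32)
```

Rotating keys per message (or even per epoch) resets this bound, which is one reason long-lived symmetric keys are avoided in reviewed designs.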

Matthew: So the night before this feature launched, Chris Stanley, who is technically a SpaceX employee rather than a Twitter employee, but is currently leading large parts of Twitter’s security work (he’s the guy who, in the photos from the first all-nighter that Twitter 2.0 pulled, is in the front taking a selfie with a bunch of people, wearing an Occupy Mars shirt), said on Twitter that this had been audited by Trail of Bits, and that Dan Guido was an absolute badass. Today’s Platformer, which just came out, actually says that Twitter sources indicate that audit hasn’t started yet. He later sort of backtracked and said the audit wasn’t finished yet, but it sounds like they launched this without any external audit.

Deirdre: Yep. But,

Thomas: Um, in terms of, uh,

Deirdre: In terms of choices that, uh, if you had a cryptographic engineer actually implementing this thing, they wouldn’t have made: they are using BouncyCastle’s broken KDF2BytesGenerator.

Thomas: I mean, hold on. Just to start with, right? Like, you wouldn’t have a single GCM key for the entire length of a conversation.

Deirdre: Correct. No, you would not. Like, at the very least, you can have a counter, you can keep a counter or something, or hash in previous messages in the fucking session, or the context, to help generate the next keys. Like, you don’t have to do Double Ratchet, but you can do something involving a KDF.
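What Deirdre sketches here, a hash chain feeding a KDF, well short of a full Double Ratchet, fits in a few lines. This is a hedged illustration, not any real messenger’s code; the HMAC labels and the seed value are invented for the example.

```python
import hmac
import hashlib

def ratchet(chain_key: bytes) -> tuple[bytes, bytes]:
    # Two distinct HMAC labels keep the per-message key and the next
    # chain key cryptographically independent of each other.
    message_key = hmac.new(chain_key, b"message-key", hashlib.sha256).digest()
    next_chain_key = hmac.new(chain_key, b"chain-key", hashlib.sha256).digest()
    return message_key, next_chain_key

# Seed the chain from whatever shared secret the handshake produced (stand-in value here)
chain = hashlib.sha256(b"shared secret from the initial key exchange").digest()
k1, chain = ratchet(chain)
k2, chain = ratchet(chain)
assert k1 != k2  # every message gets a fresh key
```

Deleting each chain key after stepping it forward is what buys forward secrecy: compromising today’s chain key reveals nothing about keys already erased.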

Thomas: This is zero ratchet, right? Like, this is, I mean, you know, it’s like: we establish a key once and then we live with it forever, right? But, like, it doesn’t take a whole lot of effort to do a key exchange, right? The premise of Double Ratchet is, like, you do key exchanges every possible time you can conceivably come up with to do a key exchange, right? You could even, for starters,

Deirdre: Yeah. You could just be like a TLS session, which is like, ooh, we do a dumb Diffie-Hellman, and then you use a KDF to generate, from that shared secret, an actual AES key that’s bound to the session. But they didn’t even do that.

So, I don’t know.
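The TLS-style step Deirdre describes, running the raw Diffie-Hellman output through a KDF that binds the derived AES key to session context, looks roughly like this. The shared secret, salt, and info labels below are stand-ins for illustration, not Twitter’s or TLS’s actual values.

```python
import hmac
import hashlib

def hkdf_sha256(shared_secret: bytes, salt: bytes, info: bytes, length: int = 32) -> bytes:
    # RFC 5869 extract-then-expand, single-block output for brevity (length <= 32)
    prk = hmac.new(salt, shared_secret, hashlib.sha256).digest()  # extract
    okm = hmac.new(prk, info + b"\x01", hashlib.sha256).digest()  # expand, block 1
    return okm[:length]

raw_dh = b"\x42" * 32  # stand-in for the raw (EC)DH shared secret
aes_key = hkdf_sha256(raw_dh, salt=b"demo-salt", info=b"conversation:1234|alice|bob")
other_key = hkdf_sha256(raw_dh, salt=b"demo-salt", info=b"conversation:5678|alice|carol")
assert aes_key != other_key  # same DH output, different session context, unrelated keys
```

Binding the `info` parameter to the session identity means a key derived for one conversation is useless in any other, even if the underlying DH output were somehow reused.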

Thomas: I don’t know. I’m just saying, like, from the design documentation I’ve seen. This isn’t a way of saying the design is terrible, or that whoever reviewed it is not competent, right? It’s just, if you are a specialist cryptography engineer, there’s a bunch of things that you would do in a design that aren’t here.

Right? Like, the simplest possible thing you can say is: there’s no key separation. Put in any, like, domain separation constants.

David: I think you’re being a little harsh, Thomas. Twitter has successfully rebuilt a system with the same properties as Matrix in far less time.

Thomas: That’s, again, not really a critique of the security of the protocol, but we’ve been telling people for two years now that if you wanna appear cryptographically sophisticated, all you have to do is sprinkle domain separation constants over everything, and then people will assume you got things right.

They didn’t even do that.
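For what it’s worth, "sprinkling domain constants" really is a one-liner: hash the same input under different length-prefixed labels and you get unrelated outputs that can’t be replayed across purposes. A minimal sketch, with made-up label names:

```python
import hashlib

def domain_hash(domain: bytes, data: bytes) -> bytes:
    # Length-prefix the label so ("ab", "c") can never collide with ("a", "bc")
    return hashlib.sha256(bytes([len(domain)]) + domain + data).digest()

# The same input under different labels yields unrelated digests, so a value
# computed for encryption keys can't be confused with one for authentication.
assert domain_hash(b"dm-encryption-key-v1", b"input") != domain_hash(b"dm-auth-key-v1", b"input")
```

Versioned labels (the `-v1` suffix) also give a clean upgrade path when the protocol changes.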

Deirdre: If they were to listen to this podcast... I don’t, I don’t think anyone at Twitter HQ is listening to our podcast.

Thomas: We don’t charge for this. They could just listen to it and take our advice and not credit us.

David: I feel like every third episode, we spend at least 10 minutes explaining how to build an end-to-end encrypted messaging system, or various aspects of it.

Deirdre: Catnip, baby.

Matthew: So one of the consequences of the browser being a device from this perspective is that any time you log into Twitter in a browser you’ve not previously logged into Twitter in, it generates and registers a keypair, and that keypair is then going to be sufficient to gain access to any conversation that’s initiated after you did that.

So on mobile devices, you know, we have a reasonable amount of app-to-app separation: if I get a compromised calculator app onto your phone, that can’t just read data belonging to other applications. On desktop, that’s basically not true. If I can get any malware onto your system, it can reach out to any data that is stored in your browser’s localStorage, and that’s going to include this key material; that’s going to include any existing conversation keys that are stashed locally.

That’s going to include the keypair that you could then use to reconstruct them even if they weren’t. And the same, to be fair, is true if you’re using Signal Desktop or the web version of WhatsApp anywhere. But I think there’s a distinction here, in that that’s a very conscious thing that you do, and I don’t think these security trade-offs are necessarily particularly well communicated. But also, if I then unregister my Signal Desktop app, then whatever stuff in the past may have been compromised, nothing in the future’s going to be, whereas in the Twitter design, that’s not the case. Which means if I’m logged in to Twitter on any machine that I then lose, or which gets taken away from me at immigration, all those conversation keys potentially have to be considered compromised, and there’s no way to force key regeneration.

Deirdre: Yeah.

Matthew: Now, if I’m DMing someone, and one of my machines is stolen, and they keep DMing me, there’s no way for me to stop that being done with a key that is now potentially under malicious control.

Deirdre: Yep.

David: Also, if the browser, every time you log in, is a new key, does that mean if you log in to, like, a browser 10 times, for whatever reason, you’re done? I feel like I log into Twitter, like, at least three times a week on

Matthew: If it’s the same browser, right now, logging out doesn’t delete the data. That’s, again, listed as a caveat somewhere in the docs. But if you open an incognito session and log into Twitter, then yeah, every time you do that, that’s a new device. If you do it in the same browser without, uh, I’m sure if you do this in Brave or something, then sure, it probably behaves in some sort of very privacy-preserving way.

So privacy-preserving, you lose the ability to send encrypted DMs quite quickly. But, uh, in Chrome or whatever, logging out and logging in again should use the same keypair.

Deirdre: As long as you don’t clear it.

Matthew: As long as you don’t clear localStorage. Yeah,

Deirdre: Yeah. Hmm. I, uh, I blow away storage frequently, so, yeah. But then again, I used to be a web developer.

Matthew: I think it is pretty clear that we are probably not the target audience of this feature.

Deirdre: Fair. Yeah, but it makes a, oh god, this, people are gonna run into failure modes with this if they try to use it.

Thomas: Probably nobody is the target audience for this feature, right? Like, there isn’t a real threat model this targets,

Deirdre: The target audience is ‘make Elon happy.’

Matthew: Yeah, Elon is the target audience. Um, being able to tell Elon, yeah, you can take this box and these messages will be encrypted, is basically the entire story here.

Deirdre: Yeah.

Matthew: Uh, I kind of am sad that they did announce the weakness. Well, obviously I’m glad that they did document the fact that this is weak, and Elon was sort of forced to admit that it was weak, but otherwise we would have had the opportunity to actually hold a gun to Elon’s head and force him to read people’s DMs. Not that I’m suggesting we should actually do so without consent from all involved.

Thomas: But this has, like, kind of similar security dynamics to using a good password hash, right? Like, it’s not so much a protection for Twitter’s users as it is a protection for Twitter. Like, it’s gotta be comforting for them to know that it’s become harder to get people’s DMs, or it will be when the feature’s rolled out and on by default, right?

Deirdre: It’s not gonna be on by default. It’s only for Twitter Blue subscribers.

Thomas: You have to... why are we talking about this?

Deirdre: It’s only available for Twitter Blue subscribers. It’s not even on by default for Twitter Blue subscribers, if I understand correctly. So it’s, like, layers and layers of: you have to make a specific choice to even get access to this, okay, I guess we’ll call it "encrypted DM" feature.

Thomas: So what they’ve really what they’ve really reinvented here is Telegram.

Matthew: Technically, it’s not even Twitter Blue subscribers. It’s verified users, which is a slightly larger set than Twitter Blue subscribers, for the most part. If you’re part of a verified organization, you get it. But it turns out, something I hadn’t realized: when you subscribe to Twitter Blue, there’s then still a several-day period where you are paying money but you are not verified. Something’s being done in the background to decide whether or not to verify you.

So you can pay money without getting this feature.

Deirdre: I, I am sad.

David: Well, thank God Stephen King can finally send an encrypted message to LeBron James.

Thomas: I’m, I’m not sad. You know, I’m receptive to Matthew’s argument at the beginning here about, you know, if you’re worried about somebody breaking into Twitter and getting their data, it’s become harder to get some DMs now, whatever. But, like, don’t use... what am I trying to say? Don’t use DMs! They’re bad!

Like, the best thing Twitter could possibly do here is just stop having DMs. It’s a malignant feature. It shouldn’t exist anywhere. Supposedly there are DMs on Mastodon. I wouldn’t know, I refuse to click the button. Right? But

Matthew: Uh, yeah. The DMs on Mastodon are basically something that just controls the visibility of the message by default; they’re not actually, um, one-to-one in any meaningful sense.

Deirdre: Yeah, don’t. They’re a very different animal on Mastodon.

Matthew: It’s basically a public message, that is not by default displayed.

Thomas: don’t

Deirdre: Don’t click that button, Thomas.

Thomas: That’s, that’s really excellent.

David: Look, BlueSky exists. Now we can stop pretending that Mastodon is good.

Deirdre: Yeah. Alright, thanks, Matthew. Is there anything that we missed in this journey?

Matthew: No, I, I think really, um, like, saying this is bad, it really depends on your threat model. I, I think,

Deirdre: Mmm

Matthew: I think the word that to me seems most appropriate for describing any of this is "naïve". It’s, it’s not great, it’s not well thought through, but, assuming that there is any kind of actual coherent design goal, then this plausibly meets it.

But what worries me is not whether this does what it’s supposed to do; it’s whether the people who end up using it understand what it’s supposed to do, and what it actually does. And I think Elon’s description, for an extended period of time, including on national TV, that this was intended to prevent Twitter from being able to read messages, when the initial implementation is really a long way from that, means that there is a risk that people are going to place trust in this that should not be there.

And I think there’s a wider conversation about over-promising on security and privacy features that, um, society hasn’t really had yet, and the degree to which, like, from my perspective, this should be considered grossly unethical. On the other hand, there’s a kind of, well, okay: Twitter DMs may be horrifically insecure in various ways, but the people who are most likely to trust them are the sort of people who I would be most enthusiastic about suddenly, unexpectedly discovering that law enforcement has their DMs. So: "who can say whether it’s good or bad."

Deirdre: Yeah, a hundred percent agree. And I personally think, uh, end-to-end encryption should be the default for private communications, which DMs ostensibly are. But oh well. Matthew, thank you very much.

David: Thank you.

Matthew: It has been a pleasure.

David: Security Cryptography Whatever is a side project from Deirdre Connolly, Thomas Ptacek, and David Adrian. Our editor is Netty Smith. You can find the podcast on Twitter @scwpod, and the hosts on Twitter @durumcrustulum, @tqbf, and @davidcadrian. You can buy merchandise at merch.securitycryptographywhatever.com. Thank you for listening.