Threema with Kenny Paterson, Matteo Scarlata, & Kien Tuong Truong

Another day, another ostensibly secure messenger that quails under the gaze of some intrepid cryptographers. This time, it’s Threema, and the gaze belongs to Kenny Paterson, Matteo Scarlata, and Kien Tuong Truong from ETH Zurich. Get ready for some stunt cryptography, like 2 Fast 2 Furious stunts.


This rough transcript has not been edited and may have errors.

Deirdre: Hello. Welcome to Security Cryptography Whatever. I am Deirdre.

David: I’m David.

Thomas: I’m Deirdre’s dog.

Deirdre: Yes, one of them. We have some special guests today from ETH Zurich. We have Kenny Paterson.

Deirdre: We have Matteo Scarlata.

Matteo: Hello everybody.

Deirdre: Hi. And we have Kien Tuong Truong. Did I say that right?

Kien: Yeah, that works.

Deirdre: Hi! We invited them on today because they put out a new research paper, and we’re excited to get you on very close to the publication of your results from your analysis of the messenger Threema: a cryptographic analysis of yet another trying-to-be-secure messenger that’s used by lots of people. And you found some issues. Kenny, do you wanna give us a high-level overview of the results of your analysis?

Kenny: Yeah, sure, for sure. Maybe I could start just with a little bit of motivation: why did we check out Threema? Because there’s like hundreds of these apps on the market, right? So we work in Switzerland. We actually work for the federal government, cuz ETH is a federal university, and the Swiss government is using Threema, and the Swiss Army in particular is using Threema. In fact, it’s the mandated messenger for Swiss Army personnel; they have to use it. Alongside that, Threema has something like 10 or 11 million customers now, or users. That’s a really small number by secure messenger standards. Like, Telegram has 750 million, so roughly two orders of magnitude larger.

Deirdre: And WhatsApp’s in the billions.

Kenny: WhatsApp’s like 3, 4 billion or something now. Yeah, so it’s quite small, but it’s very important in Switzerland, and we are all Swiss now. We also noticed some news reports saying that Olaf Scholz was using it. Olaf Scholz is the relatively new chancellor of Germany, and apparently somebody managed to snap a paparazzi photo of him holding a phone, and it was clear he was logged into Threema at the time. We don’t know who he was sending messages to. Maybe, maybe his Russian handlers, uh, who knows.

Deirdre: Ooh.

Kenny: Sick burn. And, and then also the final bit of motivation is people kept asking me about it. So we did some research earlier, people in my group worked on Signal, doing security proofs, so Ben Dowling and others. And then after that we looked at Telegram, we worked with Martin Albrecht and Lenka Mareková, who you know.

Deirdre: Friend of the pod!

Kenny: Yeah. And, uh, indeed we found some really interesting vulnerabilities in Telegram. And I was giving talks about this around Switzerland at various industry events and things. And then people kept saying, but what about Threema? What about Threema? And I had to say, first of all, I’ve never heard of Threema. And then I said, I’ve never analyzed Threema. And then I would say, well, we’ll get to that. And then finally, finally we got to it. So Matteo and I came up with a master’s thesis project, which Kien was crazy enough to sign up to. And this was really Kien’s master’s, so that was all the motivation.

Deirdre: Very good.

Matteo: Maybe a bird’s eye view of our findings: I mean, you can split them into findings against the client-to-server protocol, which is this funny implementation of, like, “oh, we don’t like TLS, we will write our own, but better”, but not really. And then findings against the end-to-end protocol, and that’s the actual core of an encrypted messenger. And that’s also quite lacking. And finally, findings about other related protocols, like their backup solution. There are some very funny ways these protocols can combine and interact with each other.

Kenny: This is what Thomas likes to call “stunt cryptography”, I think, right Tom?

Thomas: Not the whole enterprise of it, right. Just, we’ll get to the stunt cryptography, and I’ll ask the hard questions about the stunt work. I do have a question to start with, right. We did an interview a couple weeks ago with the team that did the Nebuchadnezzar Matrix paper, which I thought was thrilling. And the general M.O. of that team was: here’s a messenger; we’re not setting out necessarily to find specific vulnerabilities, but rather to come up with a formal model that we could apply, and then, like, you get five minutes into trying to apply a formal model and the whole thing blows up, and from that you get the vulnerabilities. Was that the general methodology here? Going into this, did you have a plan to fit Threema to a particular formal model?

Kenny: It was definitely on the table. So, we approached it from a neutral standpoint initially, basically saying we want to understand the security. If that means, building formal models and doing security proofs, great. That’s what we’ll do. Uh, if that means finding vulnerabilities, that’s what we’ll do. And, and so you kind of set out, hopefully, that you’ll find something either way.


It’s actually almost a win-win, right? You either prove it secure or you break it. It’s not usually the case that it sits in the middle, and it turns out that, you know, pretty early in the process, I guess Kien really should say more about this. We started to find vulnerabilities and once you start finding one, you get a taste for it, you get a taste for blood, and then you go from there. So maybe Kien can say a bit more about that.

Matteo: Like, you look at the models used to prove security of these things, and then you’re scared, and rather than proving it, you want to find the attacks. I think Kien looked into the MSKE model for a while and then came up, like, after one week, with three attacks, and we’re like, oh yeah, this also works.

Kien: Yeah, I got the MSKE paper on my table and then I just threw it off the table as soon as I found the attacks. Well, I mean, you start with reading the whitepaper, and then you see that they say, “Hey, we built our own custom protocol for the client-server part.” And then you get the feeling that there might be something wrong. But then, as soon as you find the first attack, you are sure that there must be more, and then you just go on.

Thomas: Yeah, like in my previous life, this is what I did full-time, just finding vulnerabilities, and like we’d say, blood in the water. As soon as you find something, right, you’ve gone psychologically from “it’s possible that you’re not gonna find anything here, it’s possible that the dev team that you’re up against is just really on the ball”, and then you find some big slip, and it’s like, okay, they’re definitely not on the ball, and it’s just a question of how many we can find. Which, with seven findings across three threat models, seems like exactly what happened to you guys. So I guess, because you guys barely knew what Threema was, I’m guessing that a lot of our audience doesn’t know what Threema is. So, Threema is a secure messenger, we all get that, and it’s the Swiss mandated secure messenger. But can we describe how Threema works? I think people might generally be sort of familiar with what Signal looks like, or maybe even OTR, so maybe there’s a way to contrast it. But give people an intuition for just how Threema works.

Kenny: Yeah, I mean, architecturally it’s very, very similar to Signal. So you have clients who want to communicate securely, and then you have a server in the middle that’s doing kind of, uh, message forwarding. There are actually two main communication protocols. There’s a client-to-server protocol that’s protecting the client-to-server communications, much like TLS could do, but it’s not as good as TLS. And then you’ve got the end-to-end protocol, which, formally speaking, if we’re thinking about protocol layering, runs on top of the client-to-server protocol. And there, there is essentially a static Diffie-Hellman key exchange. So, you know, users register keys, and then you use the same key forever to communicate between Alice and Bob. And those messages are layered on top of, they’re protected by, the client-to-server protocol.

Deirdre: Really, so there’s no ratcheting or—

Kenny: No. So in the version we analyzed, there’s no ratcheting, there’s no forward security. And we can talk about how Threema updated, but yeah. So actually that leads really nicely to the very first attack, because essentially it gives you the same power as if you had the long-term key.

So Kien, maybe you could explain a little bit how that first attack works.

Deirdre: So this is attack one in the no-server-compromise, no-client-compromise model. It’s just the regular-schmegular model.

Kien: Yeah, correct. And uh, it does assume that somehow you can get the ephemeral key, and that is a little bit of a more theoretical attack because we don’t know how you would get such an ephemeral key. But still, if you can steal an ephemeral key, then it is equivalent to stealing the long-term key because then you can just authenticate as the user forever.

Deirdre: Mmm.

Kien: And indeed, you shouldn’t be able to get it. But it does also resemble another attack that there was on OTR at some point, and it was already found back then. And it’s interesting how history repeats itself, even in vulnerabilities.

Thomas: So this is a vulnerability in the C2S handshake, the lower protocol that you use to establish connectivity with Threema before you do end-to-end messages over it. And I don’t have the paper in front of me, but this is the one where we’re replaying the vouch box. A “vouch box” is a thing in cryptography? Should I not be making fun of the term?

Kenny: It’s not our term. We didn’t come up with it.

Thomas: Okay. So explain a handshake that involves a vouch box.

Kenny: It’s like a cryptographic blob that is meant to prove that you know some key. It’s actually an encryption of a Diffie-Hellman ephemeral value, plus some other fields maybe. But it turns out that it’s replayable, essentially. And the idea is that if you can get hold of the secret part of the ephemeral Diffie-Hellman value, then you now know everything that’s required to complete the handshake, basically.

Matteo: So it’s like asking for a signed value for authentication. In this case it’s not signed, it’s AEAD-encrypted, but once the client produces this value, you can just use it again and again and again. There was an attempt to provide freshness to this value, but if you go into the details of how the protocol works, you can also just replay all the other supposedly fresh values.

Deirdre: Is this some sort of, if you’re not able to provide enough freshness to keep this from being replayable, the fallback is just, almost like a downgrade attack?

Matteo: No, much better than that. Um, so the freshness was the client ephemeral, but the client can just pick one ephemeral and use it forever. Or, in the case of the attacker, you will pick the same ephemeral forever.

Kien: And in fact, interestingly, that’s the case with the actual application, because even though it says that it’s an ephemeral key, it just keeps reusing it, for seven days straight, if you do not restart the application.
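To make the replay concrete, here is a toy Python sketch of the flaw being described. All names are invented, and HMAC stands in for the NaCl crypto_box the real protocol uses; the point is that the vouch value authenticates only data the client chose, with no server-supplied freshness mixed in, so a captured ephemeral-plus-vouch pair keeps working.

```python
import hmac, hashlib, os

def make_vouch(longterm_secret: bytes, client_ephemeral_pub: bytes) -> bytes:
    # Stand-in for the real vouch box: an authenticated blob over the
    # client's ephemeral public key, with no server-chosen freshness inside.
    return hmac.new(longterm_secret, client_ephemeral_pub, hashlib.sha256).digest()

def server_accepts(longterm_secret: bytes, client_ephemeral_pub: bytes, vouch: bytes) -> bool:
    # The server can only check that the vouch verifies; nothing ties it
    # to *this* handshake, so an old (ephemeral, vouch) pair still passes.
    return hmac.compare_digest(vouch, make_vouch(longterm_secret, client_ephemeral_pub))

longterm = os.urandom(32)
eph_pub = os.urandom(32)                      # captured from a legitimate handshake
captured_vouch = make_vouch(longterm, eph_pub)

# Days later, an attacker who learned the secret half of eph_pub replays both values:
assert server_accepts(longterm, eph_pub, captured_vouch)
```

Binding a server-chosen nonce (or the server's own ephemeral) into the vouched data is what would make each handshake's vouch single-use.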

Thomas: There’s like a server-side reuse of ephemeral keys too, right? Like, there’s a part somewhere where there’s like a hash table where they’re storing ephemeral keys? And if you just keep probing it, you can keep them alive, like, until you guys gave up trying to keep them alive, is the sense I got. Just indefinitely?

Kenny: That’s great.

Kien: Um, I remember one day, me and Matteo were just messing around with the protocol, and we were just sending some keys to the server, seeing how it would respond. And at a certain point we noticed: wait, this key that the server just sent us is the same as the one that we got before.

Right? So what happened? We noticed that if you keep using the same client ephemeral key, it will still use the same server ephemeral key. And playing around with it, we thought maybe there’s some sort of cache where it stores keys server-side as well.
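A toy model of that observed behavior, with invented names and a made-up TTL, might look like this: the server keeps one "ephemeral" per client ephemeral, and each probe refreshes the entry's lifetime, so a prober can keep the server key alive indefinitely.

```python
import os, time

class ServerKeyCache:
    """Toy model of the observed behavior: one server ephemeral per client
    ephemeral, and every probe refreshes its lifetime."""
    def __init__(self, ttl_seconds: float):
        self.ttl = ttl_seconds
        self.cache = {}  # client_ephemeral -> (server_ephemeral, expiry)

    def handshake(self, client_ephemeral: bytes) -> bytes:
        now = time.monotonic()
        entry = self.cache.get(client_ephemeral)
        if entry and entry[1] > now:
            server_eph = entry[0]
        else:
            server_eph = os.urandom(32)  # fresh server "ephemeral"
        # Every probe resets the expiry, so a prober keeps the key alive.
        self.cache[client_ephemeral] = (server_eph, now + self.ttl)
        return server_eph

cache = ServerKeyCache(ttl_seconds=60.0)
client_eph = os.urandom(32)
first = cache.handshake(client_eph)
# Re-probing with the same client ephemeral before the TTL lapses keeps
# yielding the same server "ephemeral":
assert all(cache.handshake(client_eph) == first for _ in range(10))
```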

Thomas: I wanna say real quick, and I’m getting us off track just for a second, but the vulnerabilities that you guys found in Threema are, I would say, impact-wise, less impactful than the vulnerabilities that were found in Matrix. Like, the Matrix vulnerabilities were pretty devastating, right?

And these are just like, I’m using the word “just”, they’re really good vulnerabilities, right? But they’re not like, the whole premise of the service is broken, you know, whatever. But the paper itself, I mean, both papers are great, but your paper is full of little wonderful bits, like the server keeping ephemeral keys forever, or the encrypted backups using WinZip and exploding. Like, I would say that if the Nebuchadnezzar paper had found the WinZip thing, that would’ve been one of their 19 vulnerabilities, and you guys very tastefully put it off to the side or whatever. It’s a really good paper, right? Like, I wanna make sure that as we’re summarizing this for people: go read the paper, the paper’s awesome.

Kenny: Thanks, Thomas. You know, we write these papers with you in mind, right? Because you remember the Mega paper that we wrote? We spotted how much you liked that, so we thought we should write another one just for you.

Thomas: It’s good to have a specific audience in mind.

Kenny: Yeah. You’re our core market. Yeah, totally.

Thomas: I mean, back to what we were talking about, right? So we’re talking about, essentially, a vulnerability that factors the ephemeral key out of the handshake that you use to log into Threema, right? They might as well not have had the whole C2S protocol at this point. It might as well have been like a textbook, whatever, you-just-encrypt-this-key thing, right?

I guess I have two questions. Like, I have a technical question first of all, which is, am I crazy or are they just trying to reinvent an authenticated key exchange?

Kenny: You’re not crazy; they are trying to reinvent the wheel here. So actually, there are well-established solutions that give you what you need: TLS, WireGuard, some version of Noise. Pick your favorite protocol and use that. I will say in Threema’s defense that when they started developing Threema, it was way back, like a decade ago. And back then, you know, TLS was in a bit of a bad shape. Right.

Deirdre: I was gonna ask, did you delve into sort of the history of the development of these protocols? I mean, you looked at the software as it is. I was trying to find a spec; all there is is like a 25-page, 30-page white paper. I was trying to find the new thing that they said was coming out, in a blog post, in response to you publishing your paper.

I was like, okay, I would like to see the spec for that, please, and I cannot find it. But going into the development history of the software and the protocol: I think they went in lockstep. It was not like, here we shall design a protocol, and then we shall implement it after we have analyzed it.

I don’t think that’s what happened here. Threema’s been around for a long time, and so maybe they didn’t trust TLS, and they were like, ah, we’ll just do this over here. WireGuard didn’t exist yet as a protocol (WireGuard and Noise evolved a little bit together), and Noise didn’t exist yet. So it’s kind of an unfortunate set of choices that they kicked off development with back in the day, and they’ve had to live with them, and they haven’t really updated them over their history? It feels like…

Matteo: You can actually see, if you do some Threema archeology, you can see layers of patches on top of each other. Like the fact that the metadata is not authenticated, but is put in a box that is in a separate message that you can strip. It was also confirmed by the team: they started without authenticated metadata, and then they noticed that they wanted it, so they tried to add this feature on top, but naturally the design was not very elegant in the end.

And the same is true, I think, for the nonces: the fact that, to avoid reflection attacks, they keep a nonce database where they save every single nonce. That’s also, like, a sign of stratification over the years.

Deirdre: Yeah. And that’s in the second threat model, right? The nonce handling. I think, yeah, go ahead, Thomas.

Thomas: I was just gonna say, so, so what we’ve done here is we have a circumstance where we can potentially break the client to server protocol, right? A little bit theoretical, but whatever, right? So what is, what does it give us if we can compromise— so the, the premise of this service is that I’m doing end to end encryption with my counterparties, right?

So we have this whole E2E protocol where if I’m actually sending somebody a message, I’ve done a key exchange directly with them, right? I haven’t involved really the Threema server in that key exchange. So what do I get? How do I profit from breaking the C2S protocol?

Matteo: So one immediate profit is metadata, and that’s a pain point for all secure messaging as we have it now that the server can learn metadata. In this case, it’ll be the attacker who learns who you are talking to and when, because there are some IDs included in the cleartext metadata, and then the attacker can modify timestamps.

So you can reorder messages, selectively drop messages, and all of, I think, what are attacks three and four in our paper become open to you as a malicious server, slash, uh, C2S attacker.

Deirdre: So this is the model where the server is compromised. In theory, if you have an end-to-end encrypted messenger, your messages are supposed to be secure even if the server is compromised; at least confidentiality, integrity, and probably authenticity of your messages are preserved in an end-to-end encrypted messenger, even if the server is compromised. But we get attacks on those things. Yeah.

The message reordering and deletion attack, because of that nonce handling. Can you talk a little bit more about attack three?

Kenny: There isn’t much more to say about it, I dunno. Yeah, maybe you can explain exactly where it comes from, this kind of metadata box stripping and so on.

Kien: Well, yeah, in general, I’m not sure if there’s much more to say, because in general you have the message, and then you have the metadata that has to be added for obvious reasons: the source, destination, timestamp, and other things. And what they added on top is the metadata box, which is the encryption of a selected set of values, including the timestamp.

And the idea is that whenever you receive a message, if there is a metadata box, the values contained within the metadata box will override the values outside. So this could be interesting for some other reasons. You could also like, put fake values outside and put the correct values inside for some reason.

But in the end, what really happens is that because this metadata box (or its existence) is not cryptographically bound to the rest of the message, you can just strip it off: set the length of the metadata box to zero, and then just remove it.
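The stripping attack being described can be sketched like this. The wire layout (a two-byte length prefix for the metadata box) is invented for illustration, but the point is faithful: nothing in the E2E ciphertext authenticates whether the box was present.

```python
import struct

def build_message(ciphertext: bytes, metadata_box: bytes) -> bytes:
    # Invented wire layout: 2-byte metadata-box length, the box itself,
    # then the E2E ciphertext. Nothing in `ciphertext` covers the box.
    return struct.pack(">H", len(metadata_box)) + metadata_box + ciphertext

def parse_message(wire: bytes):
    (box_len,) = struct.unpack_from(">H", wire, 0)
    metadata_box = wire[2:2 + box_len] or None  # zero length => no box
    ciphertext = wire[2 + box_len:]
    return metadata_box, ciphertext

original = build_message(b"e2e-ciphertext", b"encrypted-timestamps")

# A network attacker (or malicious server) rewrites the length to zero
# and drops the box; the rest of the message is untouched:
_, ct = parse_message(original)
stripped = build_message(ct, b"")

box, ct2 = parse_message(stripped)
assert box is None                # receiver falls back to unauthenticated values
assert ct2 == b"e2e-ciphertext"   # the E2E layer still decrypts happily
```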

Deirdre: Mm-hmm. I hate it.

Matteo: I think what was interesting was Thomas’s point that if you pull off a C2S attack, then all of these capabilities are things that you can get even if you’re not a malicious server, but just an external attacker.

Deirdre: Mm-hmm. Also, before we completely move away from the compromised server: attack two is able to register the server’s public key as a user’s public key, by tricking the victim into sending a carefully crafted message in the E2E protocol, and enables the attacker to permanently impersonate the victim?

This is a fun, and, um, air-quotes “fun”, cross-protocol attack between the C2S, aka the TLS-like client-to-server protocol, and the actual end-to-end encrypted messaging protocol that runs over it. That’s fun.

Kien: Yeah, that is definitely my favorite attack that, uh, we found. Yeah.

Thomas: It is clearly my favorite attack that you guys found, so we should walk through this one.

So in the first attack that we were talking about, there’s a sort of hacked-up authenticated key exchange where, to finally authenticate the key exchange, you send this vouch box blob, which doesn’t have enough contributing information from the handshake and is replayable. That’s an attack where you compromise an ephemeral key, capture a vouch box, and just keep replaying the vouch box, right? Here, am I wrong? What we’re doing here is we’re trying to build a new vouch box from scratch, and the way we’re gonna build the vouch box from scratch is we’re gonna play the client-to-server protocol off the end-to-end protocol.

Kenny: Yeah, that’s a way of putting it.

So absolutely, we’re gonna use the end-to-end protocol to create for us the vouch box message, which is actually used in the C2S protocol. So we’re using one protocol to create a message for us that we use in the other. So it’s a very classical cross-protocol attack. And it comes about because of key reuse across the protocols, because of lack of key separation, and because we found a lot of computers lying around in the back of our lab that we used to pull off some, I guess it’s called, stunt cryptography.

Thomas: I mean, we’re about to get to what I’m gonna credibly argue is stunt cryptography. So what are the mechanics of this attack?

Kien: Well, the general idea of the attack is that in the C2S protocol, what you’re doing as a client is saying, hey, this is my ephemeral public key that I want to use. And then you create this vouch box, which is your encryption of this ephemeral public key, right? And this is your encryption with your long-term key and the long-term key of the server, right?

But then, whenever you’re doing end-to-end messaging, you are encrypting something with your long-term key and the long-term key of the other person. So assume that that other person somehow claims the server’s public key as theirs. Then you are encrypting something with your long-term key and the long-term key of the server, which is the same thing that you did for the client-to-server

Thomas: protocol

So the first problem here, right, just the first problem here, would be: what you just said is crazy, right? Like, the idea that I’m going to encrypt a message to a counterparty in the end-to-end protocol that happens to use the same key, like literally the same Diffie-Hellman key exchange, as the client-server protocol.

That shouldn’t be possible, right? Like, the protocol itself should make sure that you can’t mix up those two contexts of keys.

Kenny: Right.

Kien: Absolutely.

Thomas: Okay.

Matteo: Either by domain separation or key separation. But we have neither, yes.
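A minimal sketch of what that means, using SHA-256 as a stand-in for NaCl's key derivation: from one static DH secret, the two protocols effectively key their encryption identically, whereas mixing a per-protocol context label into the derivation (the labels below are hypothetical) would make the two ciphertext streams unmixable.

```python
import hashlib, os

# Stand-in for the static DH secret between the client's long-term key and
# the server's long-term key (DH on Curve25519 in the real protocol).
shared_dh = os.urandom(32)

# Without domain separation, both protocols derive the same key from it,
# so a ciphertext produced in one context verifies in the other.
k_single = hashlib.sha256(shared_dh).digest()

# The standard fix: bind a per-protocol context label into the derivation,
# so the C2S key and the E2E key differ even from the same DH secret.
def derive(shared: bytes, context: bytes) -> bytes:
    return hashlib.sha256(context + b"\x00" + shared).digest()

k_c2s = derive(shared_dh, b"threema-c2s")  # hypothetical labels
k_e2e = derive(shared_dh, b"threema-e2e")
assert k_c2s != k_e2e  # an E2E message can no longer double as a vouch box
```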

Thomas: Proceed

Kien: Yeah. So now, if the client is fooled into thinking that the other person’s key is the public key of the server, and the client sends some particularly strange message to that person, then that value may be confused with a vouch box that can be used in the client-to-server protocol.

And what we did: we used our magical computing power that we have in the backyard to find a public key with a specific shape. It starts with a one, it ends with a one, and in the middle there’s something that can be copy-pasted. The idea being that you can just say, hey, can you copy-paste this into Threema and send it to this account?

Right? And what internally happens is that the first 0x01 byte is there because it’s a text message; it’s just the type byte. And if you’re lucky, the PKCS7 padding that you have at the end happens to be a one, right? Because it’s actually random-length padding, used to hide the length of the message, but with some probability it will be just a single one.

And now you have a magic 32-byte string that can be interpreted as an ephemeral public key for which we know the private key, because we computed it that way.
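The framing arithmetic being described can be sketched like this (a simplified message layout; the real attack additionally needs the middle 30 bytes to be text the victim can actually copy-paste):

```python
import os

def frame_text_message(text: bytes, pad_len: int) -> bytes:
    # Simplified E2E text-message plaintext: type byte 0x01, the text body,
    # then PKCS#7-style padding of the chosen length (random in the protocol).
    assert 1 <= pad_len <= 255
    return b"\x01" + text + bytes([pad_len]) * pad_len

# Suppose the brute-forced ephemeral public key has the magic shape:
middle = os.urandom(30)  # in the real attack, these must be pasteable characters
magic_pubkey = b"\x01" + middle + b"\x01"
assert len(magic_pubkey) == 32

# The victim is asked to paste `middle` as the message text. Whenever the
# random padding length happens to be exactly 1, the framed plaintext is
# byte-for-byte the attacker's chosen "ephemeral public key":
assert frame_text_message(middle, pad_len=1) == magic_pubkey
```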

Deirdre: I hate it.

Thomas: Hold on, hold on. All right, all right. So just rewind a little bit here, right? Like, back in the day when Threema came out, I think the rap on Threema was that it was a messenger that was kind of built out of the design of NaCl. Like the supposedly high-level crypto—

Filippo Valsorda gets really upset when you call NaCl a high-level cryptography library, because it forces you to manage nonces. Um, Filippo should back off on that, but anyway.

Threema was kind of, as I remember it, notoriously, “this is just NaCl cryptography”, right? So don’t bother going to look for good crypto vulnerabilities here, cuz you’re just gonna wind up pentesting NaCl. Which obviously turns out not to be the case.

Right? So NaCl uses Salsa20 and, you know, Poly1305, right? It’s a stream cipher. It’s an AEAD, like a classic AEAD thing. Right.

Deirdre: Those primitives are good. Those are good primitives.

Thomas: And Threema also is using, like, an AEAD cipher for this. At bottom, it’s counter mode. It’s a stream cipher. Right.

Kenny: It is just, uh, Salsa20-Poly1305, right? That’s straight out of the NaCl library.

Thomas: I have to relate the experience of reading this paper. I think it’s very well done, narratively, right? Because you’re reading the paper, and the first thing you do in a paper like this is kind of lay out how the protocol works. And I’m reading, and I’m reading, and everything seems normal.

And then you get to the point where you’re describing how messages are encrypted, and it’s like: and here we take the message and we PKCS7-pad the message. Which is like a record scratch. Like, what?

Deirdre: Yeah. It’s like, why are you padding the message plaintext?

Thomas: So, for people who don’t own the world’s largest collection of CBC padding oracle attacks like I do, right: PKCS7 is the thing in block cryptography, when you actually have blocks, where if you have a short block, you need to fill it out to the size of the block. And what you’d say is, like, if I’m three bytes short, the message is gonna end in 3, 3, 3, those three bytes.

Or if it’s two bytes short, it ends in 2, 2. Or in your case, what you’re looking for is a message that ends in one. So it’s like, come up with a message that’s one short, and the PKCS padding will be a one at the end. But reading the paper, it’s like, this is Chekhov’s PKCS padding, right? This has to come back up somehow.

Like, at the end of the play, the PKCS7 has to fire, right? Is there a reason, that you guys were able to discern, why they’re doing this?

Kenny: I think they were trying to use it as a length-hiding mechanism, to try and give them some kind of traffic confidentiality, or, you know, anti, uh, what’s it called? Help me out here, guys.

Matteo: Length hiding of the messages.

Kenny: To prevent traffic analysis attacks.

Deirdre: Yeah. Okay.

Kenny: So they weren’t using it in CBC mode, because if they had, we would’ve found a padding oracle. Instead, they were using it directly on the plaintext, and then applying a good AEAD scheme on top.

Thomas: so they’re actually randomly padding at the end of all these messages.

Kenny: yeah. exactly.

Matteo: Yeah, exactly.

Thomas: That makes more sense.

Kenny: Yeah.
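A sketch of that pad-then-encrypt idea, with the AEAD step omitted and the padding-length distribution simplified: random-length PKCS#7-style padding hides the exact plaintext length, at the cost that the padding becomes part of the protected plaintext format.

```python
import os

def pad_random(plaintext: bytes, max_pad: int = 255) -> bytes:
    # Random-length PKCS#7-style padding: n bytes, each of value n, where n
    # is chosen at random. Applied *before* the (separate) AEAD encryption.
    n = 1 + os.urandom(1)[0] % max_pad
    return plaintext + bytes([n]) * n

def unpad(padded: bytes) -> bytes:
    n = padded[-1]
    assert n >= 1 and padded[-n:] == bytes([n]) * n, "bad padding"
    return padded[:-n]

msg = b"attack at dawn"
assert unpad(pad_random(msg)) == msg

# Length hiding: repeated encryptions of the same plaintext produce
# (overwhelmingly likely) different padded lengths.
lengths = {len(pad_random(msg)) for _ in range(64)}
assert len(lengths) > 1
```

With probability 1/255 the padding is a single 0x01 byte, which is exactly the case the forged-public-key attack waits for.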

Deirdre: This is quite funny to me, because they are using something that they probably shouldn’t be using in this context to mitigate size analysis on the wire, and then later in the paper, oops, they did compression in their backup mechanism, allowing a compression-based attack.

Kenny: That’s very true. Very true. If they had removed that 0x01 at the end for the PKCS7 padding, actually, our computational requirements would’ve been a factor of 256 lower, right? We would’ve been able to forge the required public key faster, I think. Is that right? Kien’s nodding.

Kien: Yeah, 256 times faster.

Kenny: So it did actually make our attack a little bit harder. So, you know, kudos to them: the random padding helped a little bit, but not quite enough.

Matteo: I think the part that really beat them was the fact that the crypto box is a nice, clean primitive, but it does not take any context, or any associated data. So, like, when you’re using NaCl, you take a key, another key, then you put them in a box, and then you use this box to encrypt and decrypt.

But this box doesn’t take context, which is something that we usually want in key derivation functions, and it doesn’t take associated data.

Deirdre: Yeah, that’s an interesting point, because picking your primitives is very important when you’re trying to instantiate a specific protocol and then implement it with good implementations of those primitives. All of that is important, but it feels like a lot of the protocol decisions of Threema, the end-to-end part and the C2S, the client-server part, were hamstrung by the API of NaCl. Which is not to say that the API of NaCl is bad, pointing back to Filippo’s and Thomas’s point. But I feel like they were influenced by the library API they had, as opposed to trying to design what they wanted to have and then picking the libraries to fit that need.

Does that sound true?

Kenny: I think that makes a lot of sense. I mean, specifically, we already talked about the metadata being encrypted under its own invocation of the NaCl AE scheme. They derived the key from the existing key to do that, so there was one key derivation step.

And then they use the same nonce, I think, across the two. So you think you’ve got a nonce reuse vulnerability, but you haven’t, actually, cuz the two keys are different. And there, it would’ve been much nicer, I guess, if the metadata could just have been incorporated as associated data and you had an AEAD kind of interface. But that’s not what NaCl provides. So indeed, I think they were a bit hamstrung by that.

Thomas: So like that, this is the, the most complicated attack on the paper, right? But like the broad outline of the attack is, what we’re gonna do is we’re gonna simulate the client to server protocol using the end-to-end protocol. And the way we’re gonna do that is we’re going to come up with a counterparty on Threema, like a another person using Threema that happens to have the same public key as the server ,so that when we talk to that person, we’re encrypting messages to the server. And like the, the reason that’s gonna work, we need to, like, what we’re trying to do is to reconstitute a pretty precisely formatted value this vouch box, part of the, uh, the client server protocol, which has picky formatted, right?

And so what we need to do is send a message that, when we encrypt that message, is equivalent to the encrypted vouch box. Am I right so far?

Kenny: That’s it. That’s it.

Thomas: And then inside of that encrypted vouch box — what’s in the vouch box is the client’s ephemeral key, right? The x value here. And what you have to spend a zillion, zillion cores on is coming up with an ephemeral key that could have been encrypted there, that has the right format, so that when you encrypt it in the end-to-end protocol, it’ll come out properly formatted for the vouch box. And that took you a zillion cores?

Kenny: But it’s a one-time computation though, right? So we burned 8,100 core days once. Actually we did it twice, but that’s because we made a mistake.

I’ve gotta embarrass my co-authors now. So—

David: It’s a

Kenny: It’s a constant factor until your code works properly. And kudos here to Kien and Matteo. They found this really neat trick to speed the whole thing up as well. Like, we got this order-of-magnitude improvement by doing some clever magic tricks with coordinate conversions and parallel inversions. There’s an entire paper to be written, actually, just about how we did that computation, or at least a blog post. And I’m putting Kien on the spot, I think. Where’s that blog post, Kien?

Kien: Well, it will be on my blog, uh, soon™

Kenny: Okay.

Kien: whenever I get to write it. But, uh, but yeah, so in general, like the, the, yeah, the trick is mostly to just generate many private keys and then check the public key, and then what you would like to do is, you’re generating this fast enough so that you can get to the, choose the 50 keys that we wanted.

And we did it by starting from a private key and then doing point addition on the public key. And then you take into account how many times you’ve done the point addition, so that you can keep track of the private key. And the thing that Kenny was referring to earlier, about having to do the computation twice, is that at a certain point I swapped two variables, and then we had a nice public key, but we didn’t have the corresponding private key, because it could not be reconstructed.

Quite unfortunate, but—
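Kien’s bookkeeping trick can be sketched in a toy group — ordinary modular exponentiation standing in for Curve25519 point addition, with made-up parameters and a made-up target predicate. One expensive “scalar multiplication” seeds a stream of cheap incremental candidates:

```python
# Toy sketch of the incremental key-search trick, using modular
# exponentiation in Z_P^* as a stand-in for Curve25519 point addition
# (P, G, and the prefix predicate are made-up illustration values).
P = 2**127 - 1   # a Mersenne prime; the group is Z_P^*
G = 3            # fixed base element

def find_key(start_sk: int, wanted_prefix: bytes, limit: int = 1 << 20):
    """Search for a public key with a desired shape, paying for only one
    full exponentiation; every candidate after that is a single multiply."""
    pk = pow(G, start_sk, P)              # expensive step, done once
    for i in range(limit):
        if pk.to_bytes(16, "big").startswith(wanted_prefix):
            return start_sk + i, pk       # private key = start + step count
        pk = (pk * G) % P                 # cheap incremental update
    return None

result = find_key(start_sk=12345, wanted_prefix=b"\x00")
assert result is not None
sk, pk = result
# Losing the step count is exactly the swapped-variable failure mode:
# a nice public key with no recoverable private key. Check the books:
assert pow(G, sk, P) == pk
```

The real search additionally batches the curve arithmetic (coordinate conversions, parallel inversions), but the private-key bookkeeping is the same idea.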

Kenny: yeah.

There’s a fun wrinkle here, if I can just jump in for a sec. It turns out you have to claim the server’s public key as your own for this attack to work out, right? The attacker has to do that. But how can you do that without knowing the private key? Surely you have to prove knowledge of the private key before you can grab the public key. Well, we found an API that allowed you to register public keys without proving that you knew their

Deirdre: Ah,

Kenny: private key. And also, it turns out you don’t have to register exactly the server’s public key. You can add a point of low order to the server’s public key, because we’re not on a prime-order curve. We’re on Curve25519.

Kien: yeah.

Deirdre: This is why we need prime-order groups. And—

Kenny: Cofactors all over again. So, you know, I guess one key takeaway— we’ve talked about some of the deficiencies in NaCl, I’d be so brave as to call them deficiencies. There are also issues, you know, around 25519, and—

Deirdre: Oh

Kenny: I can’t think what the common factor is here, but there’s some common factor linking all of these things.

Thomas: So this is great, right? Because, tell me I’m wrong about this, but the common setting for cofactor problems — like, the reason we need Ristretto — is mostly signing settings, right? Like, it’s mostly an issue of being able to create multiple different ciphertexts with the same whatever, right? With different keys or whatever. But this is not the signing context. This is used in an actual key exchange, where the cofactor actually was practically useful for an attack.

Kenny: Yeah, that’s very true. But the cofactor was only useful because Threema blocked our registered key. So then we went ahead and re-registered one of the offset keys, you know, offset by a point of small order. That enabled us to reanimate the attack, once we had told Threema what we had done.
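The low-order-point trick also sketches nicely in a toy group. Here it is in Z_p* with p = 41 — all numbers are illustrative; only the “cofactor 8” structure is borrowed from Curve25519. Registering the server’s key times a low-order element passes a “different key” check, yet X25519-style clamping (private scalars forced to multiples of 8) makes both keys yield the same Diffie-Hellman shared secret:

```python
# Toy sketch of the low-order-point offset, in Z_p^* with p = 41.
# The group order is 40 = 8 * 5, so there is a cofactor-8 subgroup,
# structurally like Curve25519. (Illustrative numbers only.)
p = 41

# Find an element of order exactly 8: a "low-order point".
h = next(x for x in range(2, p) if pow(x, 8, p) == 1 and pow(x, 4, p) != 1)

server_priv = 7
server_pub = pow(6, server_priv, p)      # server's public key, base 6

# The attacker registers server_pub * h: a DIFFERENT value, so it passes
# an "is this key already registered/blocked?" check...
offset_pub = (server_pub * h) % p
assert offset_pub != server_pub

# ...but clamping forces every private scalar to a multiple of 8, which
# annihilates the order-8 component: the shared secrets coincide.
for k in range(1, 5):
    clamped = 8 * k
    assert pow(offset_pub, clamped, p) == pow(server_pub, clamped, p)
```

Real X25519 clamping also fixes high bits, and the arithmetic lives on a Montgomery curve; this toy keeps only the multiple-of-8 property that makes the offset key equivalent under DH.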

Deirdre: Love it.

Kien: Yeah, they might not have liked the name of the account that we put there, so—

Thomas: Oh, okay. Well, you should, you should share

Kien: Yeah. Yeah. So we called the account L-Y-T-A-A-A-S, which stands for "lose your Threema account as a service."

Thomas: Well, they don’t know that that’s what it means, unless you, like, put—

Kien: Uh, well, we’ve brought it in the paper, so

Thomas: So this attack is either sinking in with me or I’m confusing myself, but the big thing I wanna determine is: is this attack necessary, given attack number one? You’re generating your own ephemeral key at this point, and you’re sending a vouch box with just an x key that’s unrelated to the actual authentic user’s x key. So is this a variant of that first attack where I don’t need to steal an ephemeral key from the person?

Kenny: That’s the point. Yeah. So the first attack, we regarded it as a little bit more theoretical, cause we didn’t give a mechanism by which you could grab the ephemeral private key. And this attack 2 sidesteps that.

Deirdre: Mm-hmm.

Thomas: Okay, I take it back. This is not stunt cryptography. It’s— wait, this is totally stunt cryptography. But I guess you really do need to jump the motorcycle into the helicopter in this case. So, I’ll give it to you.

Kenny: Okay. We’ll try to do better next time and make it more stunty for you.

Thomas: I dunno how much stuntier you can really get—

Kenny: We have a bunch of things in the pipeline you wouldn’t believe. I wish I could tell you.

David: Thomas, do you think you could define, and then maybe give us a ranking of, the various kinds of stunt cryptography?

Thomas: I mean, DROWN is the canonical— DROWN is the canonical, like, new variants of Bleichenbacher padding and stuff like that. I think that’s the gold standard. But this — getting somebody to send an instant message that contains a public key that gets formatted, in part by PKCS#7, into an encrypted version of that public key — is definitely… yeah, it’s Fast and the Furious shit right there.

Deirdre: It’s Fonzie’s motorcycle over the shark, but not on fire yet.

David: I just wanna point out there is a variant of DROWN that just used an implementation bug in OpenSSL that let you avoid having to learn all of that Bleichenbacher crap — um, which is the part that I contributed.

Kenny: Hmm.

David: Whereas people much smarter than me, like Nimrod Aviram, came up with the Bleichenbacher stuff.

Thomas: Okay, there’s more for us to talk about here, right? So we should probably move on to the next set of attacks. We were talking earlier about the ability to reorder and suppress messages, because they have metadata that would let them resolve the ordering of messages, but they didn’t authenticate it, so an attacker can strip it.

Deirdre: Which is also a hard problem for implementers of end-to-end secure messengers. It’s a thing that you don’t usually think that much about when you’re like, ah, I’m going to ratchet and I’m going to keep these keys preserved, and I care about all these things. Like ordering of messages, which is handled— you know, the server’s supposed to ferry them and deliver them, but you also have to have a little bit of cryptographic guarantee that you can detect that these messages are being handed to you in the right order, so you can do things chained in the right order. And this set of vulnerabilities is definitely how this can go wrong, and not just on the delivery side of it.

So I was very interested in these.

Thomas: I said somewhere else— I keep saying that the Matrix vulnerabilities were probably more devastating than these vulnerabilities, and that’s true. But the outcome of this paper is that Threema looks like 50 times weirder than Matrix does, right? And there’s a vulnerability in that setting that kind of nails that down for me.

So there’s a point at which, if you reinstall Threema, you lose the collection of all previous nonces that were used, which you somehow needed to keep. Can we get into why we’re keeping a collection of nonces?

Kenny: Because you choose them randomly for every message that you send. So if you want to avoid replays, you’d better store all the nonces you ever saw. Also, if you wanna avoid a nonce reuse vulnerability, you’d better store all the nonces you ever generated and make sure you never use one twice. And if you care about reflection attacks, you’d better keep track of all the nonces you ever sent and ever received. That’s the rationale, basically.
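The bookkeeping Kenny describes amounts to an ever-growing nonce vault. A rough sketch (illustrative, not Threema’s implementation — the real client stores hashed nonces persistently):

```python
class NonceVault:
    """Remember every nonce ever sent or received, to detect replay,
    reuse, and reflection. (Sketch, not Threema's actual code.)"""

    def __init__(self):
        self.seen = set()

    def register_outgoing(self, nonce: bytes) -> None:
        if nonce in self.seen:
            raise ValueError("nonce reuse (or reflection) detected")
        self.seen.add(nonce)

    def check_incoming(self, nonce: bytes) -> None:
        if nonce in self.seen:
            raise ValueError("replay or reflection detected")
        self.seen.add(nonce)

vault = NonceVault()
vault.register_outgoing(b"\x01" * 24)    # 24-byte NaCl-style random nonce
try:
    vault.check_incoming(b"\x01" * 24)   # same nonce reflected back at us
except ValueError:
    pass  # caught -- but only because the vault survived
# Reinstall the app, lose the vault, and every old ciphertext replays.
```

The set grows forever, which is exactly why losing it on reinstall (or never syncing it) reopens replay and reflection.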

Thomas: Is this a consequence of the E2E protocol — like, the key exchange there and the way key derivation works? Or am I just not noticing all the other protocols that don’t have the big collection of nonces that they’ve sent?

Kenny: This protocol’s outstanding in terms of the number of nonces that it needs to keep. And I guess most other protocols settled for using something like a counter, and, you know, maybe keeping track of the last counter value. That’s what the TLS record protocol does, right?

Essentially. And if you have a protocol running over TCP, some kind of reliable transport, then that’s more than enough, because hey, if things start arriving out of order or the nonces are all wrong, then you’re probably under attack. So it’s probably a good idea to terminate the connection right there. So yeah, it’s not easy to see why it was done that way.

Thomas: Gotcha. Okay. But the net-net of this is straightforward, right? It’s basically recapitulating the previous attack, where we could just strip off the metadata and lose ordering. Here it’s like: if you reinstall, you lose your vault of previous nonces, and then you have no way of distinguishing whether something has been replayed or not, because you lost the nonces.

Kenny: Exactly. Yep.

Matteo: Don’t, don’t lose the nonces

Thomas: They should back them up in the JSON.

Matteo: Oh, ouch.

Kien: Yeah. It’s also to be noticed that introducing nonces straightforwardly creates another problem, which the Threema people actually noticed: you could get two nonce databases and see that there’s some correlation between the nonces — maybe the same nonce shows up at the same point in both. So you can check if the two parties are communicating in some way. So what they do is they hash the nonces. The nonces are, like, 24 bytes long, but hashing them with SHA-256 brings them to 32, and they don’t truncate them. So that’s eight bytes more in your storage.

David: Wouldn’t the same two nonces have the same hash, or are they also salting them?

Kien: Uh, they’re salted with the identity of the, of the device owner.

Deirdre: Okay. So Signal doesn’t have this nonce storage problem because it uses a proper AEAD with additional data, basically, or what?

Matteo: Yeah, proper counters, I think, is what they do, right? As most protocols do.

Deirdre: Yeah,

Kenny: You might use a counter and a sliding window, for example, if you’re DTLS or something, right? Yeah.

Kien: But Signal also— because of the ratcheting, you have a different key for every message, so you don’t really care about the nonce, actually, right?

Deirdre: Right, Threema is long-term Diffie-Hellman with your party at the other end, and so nonces really matter. Whereas with the ratcheting, the double ratchet, every message key comes from a different Diffie-Hellman. So you don’t even care that much about nonces, because even if you haven’t received another message from the other side in a while, you’re doing your side of the ratchet, and so you don’t worry so much about that.

Kenny: Exactly. You could even set the nonce to be the all-zero string in Signal, as long as you only ever use each key once. Right.
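A sketch of why a per-message key chain makes the nonce irrelevant — this is a toy HMAC chain, not Signal’s actual HKDF-based KDF chain, but the one-key-per-message property is the same:

```python
import hmac
import hashlib

def next_chain(chain_key: bytes):
    """One symmetric-ratchet step: derive a fresh message key and advance
    the chain. (Toy sketch; Signal's real KDF chain differs in detail.)"""
    message_key = hmac.new(chain_key, b"\x01", hashlib.sha256).digest()
    new_chain = hmac.new(chain_key, b"\x02", hashlib.sha256).digest()
    return new_chain, message_key

chain = bytes(32)
keys = []
for _ in range(3):
    chain, mk = next_chain(chain)
    keys.append(mk)

# Every message gets a fresh key, so a fixed (even all-zero) nonce is
# safe: nonce uniqueness only matters per key, and each key is used once.
assert len(set(keys)) == 3
```

Advancing the chain and deleting old keys is also what buys the forward secrecy discussed later in the episode.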

Deirdre: Yeah. I love that double ratchet. It’s so nice for stuff like this. Okay, so that’s attack four and attack five, the kompromat—

Thomas: Wait, wait, wait, wait.

It’s been made clear to me that you guys have no business talking about the kompromat attack, because it doesn’t matter, because it was previously reported.

Kenny: Yeah. But we just thought it was fun. We thought it was a nice one. Maybe not as classy as the other one, but still, you know, we’re just trying to put content in there for you, Thomas.

Thomas: Okay, go on.

Deirdre: Convergent Evolution,

Kenny: So basically there’s a registration protocol and there’s the end-to-end protocol, and they interact with each other. They shouldn’t interact with each other, but we found ways to make them interact. So in the registration protocol, you have to prove knowledge of your private key. And at that point, what you do is you encrypt a message to a long-term Diffie-Hellman value chosen by the server. Well, if the server chooses that long-term value to be Bob’s public key, guess what? You just created an end-to-end message that looks like it came from Alice, going to Bob.

Deirdre: Awesome.

Kenny: And somebody else notified them of that bug in 2021, and they fixed it by doing better domain separation.

Deirdre: Hey, another nice place where domain separation comes in handy. This is funny, because I’ve heard a lot of discussion lately about: do we care about deniability in our end-to-end encrypted messaging protocols? Do we care about the cryptographic ability to say, you can’t prove that I sent that message under my public-private key pair?

And this seems to be not quite the same setting that people mean when they talk about deniability, but it’s a very related setting, of people being like, haha, you just did that thing. And so maybe if the signing and sending of messages in this end-to-end encrypted messaging protocol had deniability, this attack would be less devastating, or something?

What do you think?

Kien: Well, in general, the thing is, there is some deniability, because the messages are not signed. They just have the MAC that comes from the AEAD. So either party could have created a message, of course. But the point here is that it’s the server that’s inserting a new message, and you don’t expect that.

Right? So if you receive a message from Alice, and you know you didn’t create that message, then it must have come from Alice. But it’s actually—

Deirdre: —the server. Okay. Got it. Fun.

David: How much do we care about deniability as a property, right? Like, I feel like it showed up in OTR—

Thomas: It turns out, yeah, like a lot, right? Like, this came up during the last election, right? Where, like, people were using the— I forget the— I don’t speak SMTP—

Deirdre: The 2016 election or the 2020 election?

Thomas: The stupid keys—

David: The email— the Hunter Biden email keys. Oh God, I wrote a paper about this once; I already forgot what they’re called.

Thomas: DKIM. DKIM, yeah.

DKIM keys. And then also, just, if you’re trying to come up with a coherent model for what security these things provide, non-deniability is a thing you’re conceding to your attackers that you don’t need — it’s information, or whatever, that you’re giving them that you don’t need to give them.

Deirdre: Hmm. Okay.

Thomas: There wasn’t so much of a discussion there, so much as me jumping in telling you guys what the right answer is.

Deirdre: But it’s definitely come up. You know, one of the things that we can discuss as we wrap up is standardization of protocols. And part of the reason that these multiple protocols in the Threema service existed is because there was nothing to just grab off the shelf.

You could argue that, for the client-server protocol, you could have grabbed TLS, but there might have been reasons that people were less trusting of it — we didn’t have TLS 1.3 when Threema got off the ground, so maybe they weren’t super jazzed about that. But when we’re developing protocols nowadays: Signal has put a premium on deniability, which kind of comes from OTR and all that sort of stuff. There have been discussions about, do we care about this? Do we really care about being able to go to court and say, you cannot prove that I sent that message? And apparently we should keep talking about it, because at least with DKIM we kind of do care about it. But anyway.

The last set of attacks is literally: I have your phone. I have one of the ends of the end-to-end messaging protocol. Say someone took your phone, and you don’t have a good secure lock code and they were able to get past it, or you were compelled to unlock the device and unlock Threema. What does that let you do with Threema?

Can you talk about attacks six and seven? We already mentioned a little bit about seven.

Kien: Well, so basically, for attack six, the idea is that if you have an unlocked phone — and I mean, this is reasonable, because maybe you’re in the house and you just left your phone there unlocked for one minute — then someone else could just take it, press just a few buttons, and the app will ask, okay, choose a password. And it can be any random password that you choose, and it will just give you your long-term secret key, encrypted with that password. Which means you can just choose a password of all zeros — that was my favorite password when I was testing Threema. And then you just get a nice QR code that you can just take a photo of, and then you re-lock the phone and just leave it there.

And the other person will never notice that it has been cloned, because if the attacker is smart, there are ways not to be detected. And then you are just cloning the entire application. You can authenticate as the user, you can delete messages — because if you acknowledge a message to the server, then the server will not try to send it again to the correct user the next time, right?

And of course, you can read all of the end-to-end encrypted messages that are there. So if you mix it with the compromised-server model, then you can just take all of the end-to-end encrypted messages and decrypt them. So yeah, because of the lack of forward secrecy, there’s no more confidentiality there.

Deirdre: I hate that. So this is part of a feature that’s supposed to let you move things around between devices, or back things up. And what it lets you do is— if I were an evil maid or something like that, I could just be like, ooh, I have the capability, without you ever noticing, into the future, to extract everything and get your key and unlock all of that sort of stuff.

I hate it.

Kenny: Exactly. And what does it mean? What do we call that in cryptography, when I compromise you now and you cannot recover in the future? You might think that’s called forward security, but it’s not. It’s called post-compromise security, right? Or backwards security, as it used to be called classically.

So this is a violation of a pretty desirable security goal. And you know, we were critiqued a little bit in public, like: if you own the phone, then you’ve lost all security completely — how can you hope to get anything?

Deirdre: Well,

Kenny: and Yeah. But the point is that now we can just in a few seconds extract the key piece of information, your long-term private key that enables you to be eavesdropped upon forever.

Deirdre: Yeah, because, to compare to Signal: the ratcheting allows not just the forward security, but the post-compromise security. So that, yes, if you were an evil maid and you had access to an unlocked phone with Signal — say, a paired other device of mine — I could re-key, and the adversary does not have access to my new stuff going into the future, because I have re-keyed and I have told all of my friends: new phone, ignore other phone. And the adversary will not have access to new messages going into the future.

So there is a difference in the protocol in what access to one of the ends gives you into the future, and it—

Kenny: Yeah, that’s absolutely spot on. You’ve put it so much more clearly than we ever managed when we were talking to journalists about this, right? It’s a hard point to get across.

Deirdre: And then the last one that we touched on briefly—

Thomas: The Jackie Chan version of the previous vulnerability.

Deirdre: —is just compression in the backup system. All of these end-to-end secure messengers seem to evolve a backup system in some way, and that’s because people lose their devices. I dropped my phone in the toilet, or I left my laptop at the airport, or something. Some sort of secure backup tends to evolve in these systems that involve real humans using real devices. But here it involves compression before encrypting.

And it reminds me of something called CRIME. So can you tell me about number seven?

Kien: Yeah, so the idea is that there is this service they call Threema Safe, and it’s their way of doing cloud backups. So you just upload your data to the cloud, and then whenever you want to restore your account from another device, for example, you just say, hey, this was my old ID, and this is a password that I can use to decrypt the backup.

And then the server will send you back the backup, and you can just restore your account. And the way this is done is that, periodically, your client composes this JSON with your ID, your private key in base64, and some other information. But most remarkably, it also adds your contacts, including their nicknames and—

Deirdre: contacts,

Kien: your contacts.

Yes, exactly. So you have nicknames and IDs, and interestingly enough, the nickname can be changed. So an attacker can just change their nickname, and thereby influence the content of the backup. So: you influence the content of the backup, you have compression — you have all the pieces you can put together for this attack that is very similar to CRIME.

And so the idea is that you just change the nickname, very slowly, every single time, and then you just leak the private key piece by piece, one base64 character at a—

Thomas: It’s really— it’s remarkably similar to CRIME in the way it lays out, right? Because ultimately there’s a single JSON blob, and somewhere earlier in that JSON blob there’s a secret, just like in an HTTP transcript — earlier in the HTTP transcript, there’s a cookie, right? But then we control plaintext in that same JSON blob, and we can just vary it to trigger compression at different places, and kind of do a byte-by-byte attack on the previous secret.
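The oracle is easy to reproduce with stdlib zlib. The JSON field names below are invented — the real Threema Safe schema differs — but the length side channel is the same one CRIME exploited:

```python
import json
import zlib

# Sketch of the Threema Safe compression oracle (field names are made up;
# the real backup schema differs). The secret key sits in the same
# compressed blob as an attacker-controlled contact nickname.
SECRET = "S3CRET_PRIVATE_KEY_B64"

def backup_len(nickname: str) -> int:
    """Length of the compressed backup for a given contact nickname."""
    blob = json.dumps({"privatekey": SECRET,
                       "contacts": [{"nick": nickname}]})
    return len(zlib.compress(blob.encode()))

# A nickname repeating the secret compresses better than an unrelated
# string of the same length: DEFLATE back-references the earlier copy.
matching = backup_len(SECRET)
unrelated = backup_len("abcdefghijklmnopqrstuv")  # same length, no overlap
assert matching < unrelated
# The real attack varies one character at a time and watches the length,
# leaking the key byte by byte -- exactly CRIME against HTTP cookies.
```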

Matteo: And the cherry on top was seeing Juliano Rizzo and Thai Duong, the original ninjas, discuss this on Twitter. They were like, oh yeah, this is CRIME. And we were like, yeah, yeah, it’s CRIME.

Deirdre: Awesome.

Thomas: It’s a place where it almost seems like it was more work for them to line this up just so, so that you could replicate CRIME in this JSON blob — as opposed to, just like, you know, they use WinZip already, right? Just have two files or something like that. I don’t know. But: the second-best attack in the paper, clearly.

Deirdre: Yeah.

David: So how did Threema react to this, and how did you disclose it to them—

Thomas: Oh, quite well.

David: —if at all?

Kenny: So initially, actually, things were great. We sent them an email saying, hey, here’s our paper, here’s everything we found, here’s like a user’s guide, let’s talk. And in that initial email we talked about, you know, 90 days, industry standard. And we had a meeting. It actually turns out that their HQ is a half-hour train ride away from Zurich, down the lake.

So we—

David: Uh, must be

Kenny: Yeah, we went down to Pfäffikon to visit their HQ. They gave us a guided tour of their office building — I’m not sure why, cuz it’s just an office building. But anyway, we sat down with them and had a very serious discussion about the vulnerabilities we found, what their remediation plan was, and what our publication plans were.

It was all good. And much later in the process, they actually put out some fixes for some of the attacks in new releases of their code, and they gave us credit in their changelog, effectively, for doing that, which was great. And then they also informed us that they were gonna roll out their new IBEX protocol, which is the forward-secure kind of layer on top of the end-to-end protocol — which, full disclosure, we’ve not looked at. We don’t know if it’s—

Deirdre: Yeah, I can’t find any details about it yet.

Kenny: I think you would need to— yeah, I mean, you can look at their client’s source code and figure out what’s going on, if you have the wherewithal to do that, and the patience to do it too. And actually, rolling out the forward-secure aspect also remediated some of our attacks.

So things like the reordering attacks would no longer work in the compromised-server model. So that was all good. And then we came to the agreed disclosure date, the ninth, and we put our paper out. We built a little bit of a website, and they knew we were gonna do that, and we also knew that they were gonna have their blog post.

And we were actually also contacted by a journalist, Lukas Mäder, who works for the NZZ, which is a very heavyweight Swiss newspaper, very widely read in government and business circles. So we agreed to talk to him, and Threema also talked to him. So he wrote a nice article about the work, really aimed at a Swiss German audience.

So yeah, up to that point, everything was great.

Deirdre: up to that point.

Kenny: Yeah, so we did a little tweet thread early on the Monday morning — it was the 9th of January, and, you know, the newspaper is now on the newsstands. I actually bought a copy from my local newsagent and was slightly disappointed to find us not on the front page, but I guess, you know, other things are happening in the world, so fair enough.

Kien: There was the Bolsonaro thing as well, so that stole our front page.

Kenny: Yeah. So we ended up somewhere in the economics-and-science section or something of the newspaper, and we did a little tweet thread about it. But then Threema put out a tweet that we found a little bit dismissive. And it was a little bit disappointing, given, you know, how we’d talked to them before.

I mean, we knew, of course, they were not super happy about our attacks. They were not super happy about some aspects of the way we had worded things in our paper. But then again, you know, we’re academics. We write papers; that’s what we do, right? So we’re not gonna be all sweetness and light in our research paper.

We’re going to say it as we see it. And, you know, I think they found that difficult. I can understand why: they’re used by the Swiss government and the Swiss Army; there are commercial considerations in play. It can’t have been easy on their side to see their baby being, you know, murdered—

even if they were able to reanimate it by doing all the appropriate fixes. So they could have said: we’re really grateful to the team, and we think this is fantastic research. We’ve updated the protocol, it’s now as secure as we can make it, we welcome further analysis, blah, blah, blah.

Right? Something like that would’ve been great. But instead they said that our claims were overblown, and that "it was really sad to see how even students now are forced to overexaggerate the importance of their work", and then, like, see our blog for more details. And initially we didn’t really respond to that; others responded on our behalf.

Um, I actually did retweet it and say I was a bit disappointed with that way of covering our work, and that’s fine, right? The court of public opinion had a lot to say about it.

Deirdre: Yeah, it’s a bit unfortunate, because any free analysis of your software or your cryptographic protocol that helps to make it better is always welcome. You’re always trying to do the best thing for your users and to do good work. So it could have easily been a win-win: these things have been found and remediated, and also, with these findings in mind, here’s our new forward-secure protocol, and we’re rolling that out too, and everyone is happy together and we’re improving the security of our users. But they kind of punched down on you, like, these are overblown because we rolled out this thing — which was not the thing you analyzed, which only just got released — and did a bit of a shell game.

That was unfortunate, I’ll just say.

Kenny: Unfortunate for them, though, not for us, right? Because everybody who’s in the know could see exactly what was going on there. So, yeah.

Deirdre: Yeah. I would like to see more about their IBEX protocol. I would like to see at least a more in-depth description of the protocol, because they don’t have one right now. I don’t have all the time in the world to go spelunking in all of their code bases to see how it works. But I’m happy that they are improving their protocol to be more forward-secure.

And good job on the initial responsible-disclosure interaction with good-faith researchers who came to you with results. Awesome.

Matteo: That was also interesting, because we could see that they cared about their users. They were super responsive; they answered our email within a few hours of us having sent it, and then we met with them. But they clearly are missing crypto people on their team. So I think they needed more of that, to also fully understand the implications of our findings from an academic standpoint.

Deirdre: Mm-hmm. And of course, even if any one of these attacks might seem like stunt crypto — like, ah, I’m gonna throw 8,000 core-hours of compute at getting just the right key for this thing — as we’ve discussed before, these things have a way of layering and interacting. Like, one chink over here, another chink over there, and all of a sudden you can have something much more serious than any one of its parts. So you generally just wanna take all those things together and not dismiss any one of them as, ah, well, you can’t do much with this one thing. Because sometimes you can just chain them together.

Kenny: One thing to say, Deirdre, is that sometimes, and I think you saw this also with the Matrix vulnerability disclosure, there’s more than one force operating within an organization. So all of our interactions with the technical people there were great, but then something went wrong in the communication towards the end.

And you know, we had a different outcome. And I think that also more or less happened in the Matrix case. I think the CEO there went off the rails a little bit, and yeah, that can happen.

Deirdre: Yeah.

Thomas: I’m, I’m less sure that’s the case with Matrix, even though I have a higher, a higher regard for what Matrix is trying to do.

So at any rate, just a fantastic paper, right? As always, Kenny, the James Cameron of these kinds of papers. This is all fantastic stuff.

You guys…

Kenny: I’ll take that. I’ll take that.

Deirdre: Don’t bet against Jim Cameron. Don’t bet against a Kenny Paterson paper.

Thomas: Great work, the whole team. I mean, for a master’s thesis, what a great thing to come up with. It’s just awesome.

Deirdre: Oh yeah. Do we have any lessons learned out of this research? I think one of them is, well, you can’t go back in time and say you should have used XYZ protocol before it was really available. But going into the future, there is some work to try and standardize messaging protocols. We have TLS 1.3, so if you were trying to build Threema nowadays, you probably wouldn’t use a custom C2S, client-to-server, protocol.

You’d use TLS 1.3 instead, or WireGuard, or, you know, Noise or whatever. But besides that, we’ve done a lot of talk about formal methods and modeling and things like that, as opposed to software, which is kind of like tending to a garden that changes and grows over time. But cryptographic protocols don’t really seem to do well when they grow the same way.

So any takeaways in that sort of realm?

Matteo: Well, I guess there is something to be said about this proactive-versus-reactive design of cryptographic protocols. I think the conclusion we have come to is that it’s better to build a more limited protocol, WireGuard-style, and then when it breaks, you tear it all down and change it, rather than trying to update things and have this sort of stratification for which you will have to pay the price eventually.

Deirdre: Mm-hmm. And if you do have multiple interacting protocols, because if you have a messenger, you’ll probably have an end-to-end encrypted backup solution and such: domain separation. Keep keys separate. You might use the same parameters, like the Ed25519 or Curve25519 curves, but you should not have keys that are used in one protocol also be usable in another protocol.

And this is enforceable both at a protocol level and at a software level, so that never the twain shall meet, because we saw problems with that in Matrix and we’ve seen problems with that in Threema. That’s a big lesson that we keep learning over and over again.
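[Editor’s note: to make the domain-separation point concrete, here is a minimal sketch of deriving per-protocol keys from one master secret via HKDF (RFC 5869) with distinct labels. The label strings and key names are illustrative, not Threema’s actual ones.]

```python
# Hypothetical sketch: one long-term secret, several per-protocol keys,
# separated by explicit HKDF "info" labels so a key leaked from one
# protocol says nothing about the keys used by the others.
import hashlib
import hmac


def hkdf(master: bytes, label: bytes, length: int = 32) -> bytes:
    # HKDF-Extract with an all-zero salt, then HKDF-Expand using the
    # label as the "info" parameter (RFC 5869, SHA-256).
    prk = hmac.new(b"\x00" * 32, master, hashlib.sha256).digest()
    okm, block, counter = b"", b"", 1
    while len(okm) < length:
        block = hmac.new(prk, block + label + bytes([counter]),
                         hashlib.sha256).digest()
        okm += block
        counter += 1
    return okm[:length]


master = b"\x11" * 32  # placeholder long-term secret

c2s_key = hkdf(master, b"example.c2s.v1")        # client-to-server transport
e2e_key = hkdf(master, b"example.e2e.v1")        # end-to-end messaging
backup_key = hkdf(master, b"example.backup.v1")  # encrypted backups

# Distinct labels yield cryptographically unrelated keys.
assert len({c2s_key, e2e_key, backup_key}) == 3
```

The same separation should then be enforced in software: each subsystem only ever sees its own derived key, never the master secret.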

Matteo: In terms of formal models, if you had tried to model something like that in a symbolic verifier, you would’ve had the symbolic verifier scream at you: oh, I can do this, and this, and this. So we suspect it’s a tool that would’ve worked. We only tried a very small symbolic model, and we are not experts in that field, but we think that could have helped.

Deirdre: Cool.

Kenny: I have one more lesson. If your core business is cryptography, employ some damn cryptographers.

Deirdre: Hmm

Kenny: I mean, okay, there’s a lot of small companies out there that are trying to innovate and do things with crypto. Any medium-size company now really needs to. If it’s really part of the core of what you’re doing, then either employ some cryptographers permanently, or maybe get some consultants in to help you out.

Because, you know, with crypto, we know it’s a complete footgun and every single bit matters. That’s why I love it so much, right? That’s where these stunt attacks come from: these weird, small little flaws mounting up and causing problems. One of the issues, though, is that a lot of cryptographers these days are really more theoretically oriented. I’d love to see more groups doing the kind of work that we do. There are a few really great other groups doing this, of course; you’ve had them on your podcast. But I think we need more of this, not less.

Deirdre: Yeah, I would agree.

Thomas: Could not possibly agree more. Well, we will leave you guys to your happy weekend, your raclette and your Küche, and thank you so much for taking the time.

Deirdre: Kenny, Matteo, Kien, thank you very much and thank you for your paper. It’s very fun.

David: Thank you for keeping the security podcasting industry alive.

Security Cryptography Whatever is a side project from Deirdre Connolly, Thomas Ptacek and David Adrian. Our editor is Netty Smith. You can find the podcast on Twitter @scwpod and the hosts on Twitter @durumcrustulum, @tqbf, and @davidcadrian. You can buy merchandise at merch.securitycryptographywhatever.com.

Thank you for listening.