Signal's Post-Quantum PQXDH, Same-Origin Policy, E2EE in the Browser Revisited

We’re back! Signal rolled out a protocol change to be post-quantum resilient! Someone was caught intercepting Jabber TLS via certificate transparency! Was the same-origin policy in web browsers just a dirty hack all along? Plus secure message format formalisms, and even more beating of the dead horse that is E2EE in the browser.


This transcript has been edited for length and clarity.

David: So when I was in St. Louis, I encountered somebody who was like, oh, you’re David. You’re from “Security! Cryptography! Whatever!” And I was like, yes, that’s me. So, Deirdre, although you bailed on today’s intro, you’ll be happy to know that your intro is having an effect on attendees of Strange Loop.

Thomas: Is this, like, ‘This American Life’? What the hell was that? Say the words!

David: Hello, and welcome to Security! Cryptography! Whatever! I’m David.

Deirdre: I’m Deirdre.

Thomas: I’m disgusted.

Deirdre: And today it’s just us. We don’t have a special guest. We just had some time and some stuff we wanted to talk about. And so today, we’re going to start with jabber.ru. I don’t even know what this is.

Thomas: Wait, do you not know the story?

Deirdre: I’m assuming it has something to do with Russia and has something to do with the Jabber texting protocol. And that’s all I know.

Thomas: All right, so there are a series of... I’m going to get some of this wrong too, right? But there’s a series of XMPP dead-ender, not-really-encrypted chat servers running on Linode and Hetzner in Europe. And the operators of those servers, one of them noticed that when they connected to the server on the TLS-encrypted Jabber port, they were getting certificate errors, and they tracked it down. And if you look at certificate transparency, there were extra certs issued for their domain. And then they were able to, with just basic network diagnostics, show that there was a machine interposed between them and the network that was only picking up the encrypted Jabber ports and forwarding everything else directly through. Yeah, that’s the story. And the conclusion of the story is: any Jabber communications on these servers should not be trusted. We think this is German lawful intercept.

Deirdre: German.

Thomas: I think it’s because Hetzner is in Germany. I don’t know if there’s a logic beyond the fact that the governing law would be in Germany. You say Jabber and my brain turns off. But I think the subtext here is that the only reason that people use Jabber is to trade hostages. It’s like, for kidnappings.

Deirdre: Interesting.

Thomas: Yeah.

David: You’d think they’d at least use OTR on top of Jabber then, but I guess not.

Deirdre: Or Telegram’s shitty DMs.

Thomas: Well, if you’d used OTR, maybe you don’t care that much, as long as you’ve verified keys on both sides. Right. But if you haven’t verified OTR keys, which I guess is easy not to do, it’s been a long time since I’ve used OTR, but if you’re not verifying keys, then people will just man-in-the-middle that as well.

David: I’m with you that people don’t verify keys, but I feel like people that use OTR probably verify keys. I don’t know. It’s been a while. I gave up on OTR for iMessage, so that tells you what my threat model is.

Thomas: They have other ways of enforcing message security than technology.

David: I was just excited to see that certificate transparency caught it.

Deirdre: Yeah.

Thomas: So, first of all, it did not.

David: Well, sort of. Okay. So it is very difficult to use CT to catch things at the time. Catching it after the fact, though? Actually, yes, it is. You tell me if you know when all of your certs are being issued and why and by which systems.

Thomas: We have a security team, and I think maybe one of the literal first projects that that security team had was setting up CT monitoring for our domains. Okay. Yes.
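
(A minimal sketch of the kind of CT monitoring Thomas describes, polling a public log search service for new issuances. The crt.sh endpoint shape and JSON field names here are assumptions about its current public interface; real deployments watch the CT logs themselves, per RFC 6962, or use a monitoring service.)

```javascript
// Poll crt.sh's public JSON endpoint for certificates logged against a
// domain, and flag any issuer you don't recognize. Endpoint and field
// names (issuer_name, entry_timestamp) are assumptions about crt.sh.
async function checkIssuance(domain, knownIssuers) {
  const res = await fetch(
    `https://crt.sh/?q=${encodeURIComponent(domain)}&output=json`);
  const certs = await res.json();
  for (const cert of certs) {
    if (!knownIssuers.some((issuer) => cert.issuer_name.includes(issuer))) {
      console.warn(`Unexpected issuer for ${domain}:`,
                   cert.issuer_name, "logged at", cert.entry_timestamp);
    }
  }
}

// e.g. checkIssuance("jabber.ru", ["Let's Encrypt"]);
```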

David: But once you get told that there’s a Let’s Encrypt cert for your domain, can you actually confirm that you issued it?

Thomas: Confirm? I think we would notice if there was an out-of-process issuance. We wouldn’t then immediately know that it was Germany, but we’d have, like, a start. We’d go investigate if we saw an out-of-process issuance.

David: My claim is that it actually can be hard to identify what’s an out-of-process issuance, especially if you’re a large organization. That being said, if you are offering an encrypted chat service, in this case-

Thomas: No one was looking until after they noticed they were getting certificate errors. The funniest thing by far in this whole thing is whoever did the lawful intercept here. And we assume it’s lawful because there’s enough network diagnostics to know that Hetzner and Linode both did weird network shit to make this work. Right.

Deirdre: Okay.

Thomas: So whoever did this, either both of those hosting providers, Hetzner and Akamai, who own Linode, got owned, or they’re both complying with lawful court orders.

Deirdre: Right.

David: Did they go back and look and try and detect a BGP hijack or something like that? Because that should be detectable as well.

Thomas: Yeah, it wouldn’t be detectable as BGP if you actually got Hetzner to suborn your connection. Right. It’s all within Hetzner. They’re all, like, looking at traceroute hops and things like that. It’s all pretty rudimentary. But obviously there’s strong evidence that something happened inside of Hetzner and something happened inside of Linode.

David: Right.

Thomas: But the funniest thing here is, the most logical assumption here is government lawful intercept, and they let the certificates expire, and the post is like, this is slapdash, their OpSec is bad. The reality is they don’t give a fuck.

David: Yeah.

Thomas: They got what they wanted.

David: Hilarious. Actually, what this goes to show is we need more cert errors, apparently.

Thomas: And louder, apparently. What this goes to show is that what we need is DNSSEC.

Deirdre: Oh, God.

David: Because if you can’t detect it in CT, you definitely will somehow detect it when any DNS server in between you and them changes their records. There’s, like, a whole-

Thomas: CAA record thing going on right now. It’s like, if you had CAA configured, then Let’s Encrypt wouldn’t have issued the certs. And it’s like, if it’s a government, I’m pretty sure that Let’s Encrypt was not the bottleneck here; they had any number of other options. But, whatever. And then there’s also, like, DANE, where you would have controlled all the issuance for your domain or whatever. And it’s a little interesting, right? Because the servers here are, like, xmpp.ru and jabber.ru, and I’m reasonably sure that Germany can’t get .ru servers to issue bogus records.
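
(For reference, the CAA mechanism Thomas mentions is just a DNS record, RFC 8659, that tells CAs who may issue for a name. A hypothetical zone snippet is below; as Thomas notes, a CA checks this at issuance time, so it does nothing against an attacker who can compel or compromise the CA itself.)

```
; Hypothetical CAA records for example.com (RFC 8659): only Let's Encrypt
; may issue, and violations get reported to the listed address.
example.com.  IN  CAA 0 issue "letsencrypt.org"
example.com.  IN  CAA 0 iodef "mailto:security@example.com"
```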

Deirdre: Yeah, I don’t know who is issuing for .ru.

David: I thought Let’s Encrypt stopped issuing for .ru domains.

Deirdre: I wouldn’t be surprised.

David: Yeah, I don’t remember what their sanctions compliance was. I could be mistaken. Maybe they restarted. Maybe they never changed it. They definitely stopped issuing for some domains in Russia, I thought. Also, fun fact: if you’re a CA, you’re not supposed to issue to any domain on the sanctions list, and you have to fetch that list every now and then so that you don’t accidentally issue a cert to the Taliban. A thing that definitely happened pretty early on, by one CA that I’m not going to name, but you can infer.

Deirdre: Like in 2001 or 2015. Oh, boy.

David: Anyway, you don’t really get in trouble. You just get like, hey, don’t do that. And then you stop doing that, and then it’s fine.

Deirdre: What’s the thing? It’s, like, not export controls, it’s some other... what’s it called?

David: I was about to say ITAR, but it’s not... no, I think it is ITAR.

Deirdre: There you go.

David: International traffic.

Thomas: And I know you guys wanted to hit this because it came out today, and also because it ties into what you guys want to talk about next. But I don’t know that the high-order bit of this whole story isn’t just: don’t use XMPP. If your security devolves to certificate issuance, something’s gone terribly wrong. Am I wrong about that? Am I missing a complexity of this? Or is this just like, why are you idiots using these services?

David: I think a lot of security devolves to certificate issuance. Doesn’t your fly.io deployment eventually devolve to certificate issuance?

Thomas: If I’m wrong about these things, we have to cut this section out when we edit the podcast, right?

David: Or just not answer it.

Thomas: My immediate thought is that if you’re using, say, Matrix or Signal for all of your secure communications, the certificates wouldn’t matter, correct? That’s all I’m saying.

David: Yes. So for messaging, there’s probably a lot of cases where-

Thomas: And that’s all we’re talking about here, right?

David: That’s a property that you want: that the security of your end-to-end encryption isn’t actually just devolved to the security of your TLS connection.

Thomas: So just the idea that at the end of this story, it’s like these certificates were compromised, ergo, if you were talking on these servers, your messages were probably compromised. Seems wrong.

David: Well, I don’t know if Jabber markets itself as encrypted. I assume it doesn’t. But this would be true of Facebook Messenger. This would be true of Slack, right?

Deirdre: Yeah, Slack.

David: Literally anything that isn’t end-to-end encrypted. This is just how the Internet works.

Deirdre: Yeah.

David: Maybe the better takeaway is, like, if you’re doing illicit things over messaging, you should pick a good end to end encrypted messenger.

Deirdre: Thomas is drinking.

Thomas: Moving right along.

Deirdre: Yes.

David: Okay.

Deirdre: Speaking of end to end.

David: Yeah. So because we do this every episode, let’s do it even more and let’s talk about threat models, cryptography, and then specifically on the web.

Deirdre: Cool.

Thomas: What an excellent segue. What’s motivating this?

David: Well, what’s motivating this is... so, historically, the WebCrypto API, which I believe is on window.subtle, not window.crypto.

Deirdre: Correct. Well, I think it got moved. It was window.subtle and then... I can’t remember which way around.

David: I thought it got moved to subtle from crypto because some other thing stole the crypto name.

Deirdre: I don’t know. Well, I’ll just check on that. Keep going.

David: Okay. So anyway, it generally has a bad rap for a variety of reasons, ranging from the general wall of JavaScript, to some JavaScript-specific reasons people don’t really like, like it being an async API, to the fact that in JavaScript land, malicious JavaScript or other JavaScript can just replace objects on window for some reason. And so no matter how good window.crypto or window.crypto.subtle is, you can’t be sure that what you get back is the correct object.
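
(A toy illustration of the object-replacement problem David is describing. It assumes attacker-controlled script runs in the page first, which is exactly the situation once someone can inject JavaScript.)

```javascript
// Toy illustration: any script running in the page can monkey-patch the
// platform crypto object before your code ever calls it. If this runs
// first, every later call to crypto.getRandomValues returns zeros.
const realGetRandomValues = Crypto.prototype.getRandomValues;

Crypto.prototype.getRandomValues = function (array) {
  realGetRandomValues.call(this, array); // keep behavior looking plausible
  array.fill(0);                         // then zero out the "randomness"
  return array;
};

// Victim code elsewhere on the page, looking perfectly normal:
const keyBytes = crypto.getRandomValues(new Uint8Array(32)); // all zeros
```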

Deirdre: So, window.crypto.getRandomValues is the nice cryptographically secure random number generator. It also gives you crypto.randomUUID, which is nice. And then window.crypto.subtle gives you decrypt, deriveBits, deriveKey, encrypt, exportKey, sign, verify, all that crap. But directly on window.crypto, you have getRandomValues.
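
(For readers following along, the split Deirdre describes looks roughly like this; these are standard WebCrypto calls.)

```javascript
// Directly on crypto (window.crypto): the CSPRNG and random UUIDs.
const bytes = crypto.getRandomValues(new Uint8Array(16)); // CSPRNG fill
const id = crypto.randomUUID();                           // random v4 UUID

// On crypto.subtle: the async key / encrypt / sign / verify operations.
async function demo() {
  const key = await crypto.subtle.generateKey(
    { name: "AES-GCM", length: 256 }, /* extractable */ true,
    ["encrypt", "decrypt"]);
  const iv = crypto.getRandomValues(new Uint8Array(12));
  const ciphertext = await crypto.subtle.encrypt(
    { name: "AES-GCM", iv }, key, new TextEncoder().encode("hello"));
  return { iv, ciphertext };
}
```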

David: So there’s that API. There’s just, like, JavaScript being a mess. And then there is a plethora of failed web-based end-to-end encrypted messengers, for example Cryptocat, that just didn’t pan out. And then you have Signal, which, I don’t know if they have this documented in a blog post anywhere, but the conventional wisdom is that Signal will not bring its app to the web because the web does not provide the security primitives that Signal needs to be secure. Which is... no, I agree, it doesn’t. But let’s talk about what those primitives are, and then talk about the cases where you actually need them.

David: That’s kind of what I want to get into, legit.

Deirdre: And this is for end-to-end persistent chat sessions, like long chat sessions.

David: I think that is kind of the most hardcore end-to-end encrypted use case, right? Person-to-person chat of long-lived things. Other use cases might be, like: I would like a peer-to-peer video chat that is ephemeral; or short-lived messages; or I would prefer my stuff to not exist on someone else’s servers; and so on.

Deirdre: Yeah, that is easy.

David: Thomas is stroking his beard. We’re in a new world where we can see Thomas. And so now I can see that he’s slowly getting upset.

Thomas: I feel like Whittaker should already be listening to us. And that being the case, this is a segment about how, when somebody comes up to her and says what we really need to do with Signal is come up with some affordance for people to use it in a browser page, she should have ready answers for why that’s a batshit idea that will never work. So please continue. I’m gearing up.

David: Okay, so let’s say, for an app that is using end-to-end encryption to do something, let’s go through the kinds of things that you have to trust, somewhat platform-neutral. Yeah, right. So, base level one: the developer. You kind of have an implicit trust that the developer itself is building an end-to-end encrypted app, and not building an app that posts all of your content to Twitter and then says it’s end-to-end encrypted.

Deirdre: The consciously intentional, generally knows what they’re doing, developer.

David: This also kind of covers App Store phishing a little bit. Just, like: you installed Signal, and not “Definitely Totally Signal, I swear I’m Signal.” Right?

Deirdre: Yeah.

David: Okay, moving on. Then you have developer account security. Meaning, in the App Store case, right, the developer signs into some account that can log into the Play Store or the iOS App Store and so on. And you are kind of trusting that they have not lost control of that-

Deirdre: Account. But also their source control, their CI, possibly their machine.

David: I think I’m covering that separately. But I’m thinking more about distribution here.

Thomas: What’s a model where you don’t have that concern?

David: Well, you don’t really have that concern. You have, like, a different concern on the web, right? There isn’t an authority that you’re registered with on the web, so you don’t have to worry about account security with that authority. But you then do kind of have to verify what I would call the distribution, which is the next step after that. Just like: okay, did I get the correct app from the Play Store? Did I get the correct code from the website?

Deirdre: Yeah, it’s weaker.

Thomas: But this is the whole supply chain problem for Node. Right. This is the place where you bucket like you trust the developer, but somebody got a hold of the NPM entry for that name or whatever and they publish something else.

David: Yeah, although third-party dependencies I was going to treat separately from this. App distribution, I would say, almost doesn’t really exist on the web, because what I would call it instead is privacy of the server-side infrastructure. Meaning: if there is something serving you content or code, is what it’s serving being viewed by someone or not? And then I would say access, after that. Like how Matrix allowed the attackers, or an admin or a server operator, to basically surreptitiously add someone in and out of a group. Maybe you’re not even modifying the code, you just have some sort of access to the ACLs. That’s another step, which is stronger than privacy but weaker than access to the server-side code. Which, I think, is what you’re getting at: on the web, the server-side content and the server-side code are basically the same thing, because we just pull JavaScript out of thin air.

Deirdre: The client-side content and the server-side content, they’re all generated-

David: The same, yeah. There’s no separation on the web.

Deirdre: You could in theory architect it in such a way that you have as much separation as possible between the client app and the back end. But-

David: Fundamentally, if somebody was able to modify the HTML that was coming out of your server side, it doesn’t matter how nice your app split is; they can just add more JavaScript.

Deirdre: Whereas with an app in an app store, especially for the mobile app model, the binary that gets built and signed by Apple or Google and then gets shipped down to your phone is just a completely different thing than “I generated some shit that I handed down to you from my web server.”

David: Yeah, there is client-side app distribution in the phone case, and kind of in the general apps-on-the-internet case, if you just go download an executable. But there’s not one on the web. Which then brings us to what we were just talking about, which turned out to be a better segue than I anticipated: TLS. Okay, if TLS is broken, what happens? With Signal, nothing. On the web, it’s equivalent to server-side infrastructure code access, which is equivalent to client-side code access. Because once you break TLS, you can just inject JavaScript, even if you’re using CSP and shit, right? You just strip that header, or you add resources loading from somewhere else, or you rewrite the page. It doesn’t just have to be, like, eval, right?
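
(To make “you just strip that header” concrete: CSP is itself delivered in the HTTP response it is meant to protect. A typical strict policy looks like the header below, and an attacker who has broken TLS sits in a position to delete or rewrite it before the browser ever sees it, so it adds nothing against that attacker.)

```
Content-Security-Policy: default-src 'self'; script-src 'self'; object-src 'none'
```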

Deirdre: Yeah. For the other settings, if you break TLS, you still have to do more steps to get anything out of it. Whereas if you break TLS in the web setting, unless you’re doing weird crazy shit like WhatsApp, who are like, “also install this browser extension that does this out-of-band web app hash-checking thing on top,” breaking TLS basically gives you-

David: Everything. Which is not necessary to use WhatsApp; I assume only a small number of people have installed that.

Deirdre: They want to accommodate some of their users using WhatsApp Web, which is kind of this paired sidecar thingy for using WhatsApp that you set up with your phone. But they were like: because we’re aware of some of these shortcomings of the web model as it currently exists, please, if you want to super-duper protect your shit, we built this thing that you can also install in your browser to protect yourself.

David: But anyway, yeah. And then there’s the third-party dependency problem, which, if you separate the actual third-party dependencies from the fact that your code is getting loaded at runtime, looks pretty much the same on apps versus web. Although the Node ecosystem is obviously notorious for kind of being the worst. Given the plethora of React Native apps, I think that actually applies to both sides of the aisle.

Deirdre: Why is NPM considered the worst? It’s just extremely popular and full of-

David: It’s just very popular and big. As far as I can tell, there’s no difference between NPM and Cargo, other than the Cargo.toml file is perhaps slightly nicer than the 18 iterations of package.json. And what’s the other one? Yarn something?

Deirdre: Isn’t Yarn just on top of NPM anyway?

David: Yeah, I don’t know. I remember sometimes having both.

Deirdre: I would just chalk the NPM hate up to it being extremely popular and probably fueling the most popular language in the world, which is JavaScript, because you can use it on the web.

David: And then having the lowest-quality packages as a result. Yeah, that too. Like the classic is-even, or is-number, or whatever.

Deirdre: Yeah. Cool. So it sounds harder to deploy an end-to-end encrypted equivalent of Signal purely served as a web app than for these other settings.

David: Yeah. Basically, because server-side code access and/or breaking TLS is the equivalent of breaking client-side app distribution on the web, which otherwise would require somehow getting into a developer account or an app store, or the app store itself being malicious, you end up in a worse-off world if you are trying to exactly match the security properties that you get from Signal in a native app.

Deirdre: Yeah. And if you try to do this with something like Signal, and just try to make it work, you’re trying to protect, say, a long-term identity key for Signal.

David: Thomas is that a dinosaur?

Deirdre: No, that’s a hippopotamus.

David: That’s a... oh, OK. Well, I can only see part of it, and clearly I have animal face blindness, so I can’t tell the difference.

Deirdre: You’re specist, they’re all the same to you.

Thomas: Go on, continue.

Deirdre: Sorry, that’s a very cute animal. You’re trying to protect some sort of long term key material or something related to long term key material that authorizes you to do something with it. And it’s generally just harder to protect that when the software you need to implement that protection lives in this world where a TLS breakage is endgame and all these other things.

David: So I think I’ll pause for a second and say: Thomas, do you agree? And then after that, I want to talk about: do you need the Signal-level threat model? In what situations do you want to leverage some form of end-to-end encryption for something where you might not need as strong a threat model as Signal, which might work better on the web? And then, what could we change about the web to try and make some of these things better?

Thomas: I want to hear what you have to say before I have opinions on this.

David: Okay.

Thomas: And not least because while you were rattling off that threat model, I was rapidly learning about the Signal post-quantum key exchange.

David: That’s also a thing. Deirdre, do you want to tell everyone about your new job?

Thomas: I read the document. I read it and I knew the list of things you were going to enumerate.

David: Okay. So my posit or hypothesis or corollary or something is that there’s a huge class of applications that would benefit from end to end encrypted stuff, even with the shortcomings of the web. And the example that I would come up with is basically enterprise anything. This is basically any situation where you’re relying on some third party to kind of mediate authentication already.

Deirdre: Okay, yeah.

David: But then you would prefer that a bunch of the things that you do do not exist on their servers, or go through their servers, but you, by virtue of the problem space, are trusting them to authenticate people already. So, like, corporate video chat from that one vendor that you see in the West Wing, for example. I think they could benefit from having their video chat be ephemerally end-to-end encrypted through a web app, even though it still reduces to the security of the TLS connection, because you would prefer that the plaintext of that does not traffic through the vendor’s servers.

Deirdre: Yeah, and that’s kind of okay. Because, one, the long-term identity authentication material does not have to live in the client application stack. It is handed to you from someone else. You get a token or something that lives for a scoped amount of time, so you don’t have to worry about the long-lived security of it; you just need it for the session. And then the client app that’s doing the end-to-end encrypted stuff is kind of trust-on-first-use, or trust-on-this-use. You have a token that someone else hands you. You load the end-to-end encrypted video calling app.

You trust it for this call alone. You identify yourself. You do your end-to-end encrypted web call, and then you hang up. And if anyone was recording anything, it was encrypted, and then you’re done and everything gets thrown away.

David: Basically, I guess it’s any situation that’s ephemeral where you’re already delegating identity and access to a third party.

Deirdre: Yes.

Thomas: The root of trust is still the TLS connection when we’re talking about the encrypted video chat vendor, right? There’s a benefit to the vendor or the operator. I don’t know, Allstate, right? When Allstate is running this thing, Allstate would prefer that the plaintext of this never runs through their servers anyway, just for kind of logistical security reasons, right? The trust relationship is still just the TLS connection. You’re not gaining anything beyond TLS trust from this.

Thomas: But the enterprise is getting an additional kind of logistical security benefit by running end-to-end security on the web using JavaScript or WebCrypto or whatever. Even though they don’t have improved trust, they still know that they don’t have to worry about plaintext being on the server.

Deirdre: Yes.

David: I don’t know that I would say that. Although the security of it does reduce to the TLS connection, I don’t think the TLS connection is the root of trust, necessarily. But I don’t know that that’s a point worth arguing, right? Because the root is still, like: you are assuming that this third-party provider that is doing identity and access is behaving correctly to do identity and access.

Deirdre: But then you have... the root is-

David: Still, like, some database that they have.

Thomas: But if I’m an attacker going after that system, if I know that that’s the setup and I want to record conversations or whatever, what I’m going to do is make a beeline for the JS files or whatever and have it feed key information to me, right?

David: Correct. So you’re raising the bar from read access on the server side to write access. I think that’s a benefit.

Thomas: It’s like a marginal benefit, right? But one of the problems I’ve always had with this is the realism of the threat models that we’re talking about. At the point where you have read access to things that are going through that server, where you have read access to the plaintext post-TLS-termination, one thing is that in the classical kind of internet TCP model, as soon as you have that read access, you kind of implicitly have write access, because you can hijack TCP or whatever. Which is kind of silly, but whatever, right? But in reality, if you’re able to read the post-TLS plaintext of that stuff, you’re probably in a place where you can easily backdoor the JavaScript anyway. Okay.

Deirdre: It depends where you’re backdooring it, because in most modern web apps, you bundle your shit and it gets shipped off to a CDN edge. You can serve it to somebody from an edge, and that’s going to be a different place than... it depends which TLS termination you want to get. Is it the one on the backhaul of the actual encrypted WebRTC traffic? Which one do you want?

David: As soon as you get the domain, it doesn’t matter where the rest of the JavaScript comes from, because you can just edit it.

Thomas: In the enterprise model, none of this is coming from CDNs, right?

David: Well, it’s still coming from, like, CloudFront and shit, right? Yeah.

Deirdre: They’re not all serving it from their own things; they still don’t want to host their own shit.

Thomas: Well, let me say right off the bat that JavaScript cryptography, I can see it making sense in a bunch of enterprise situations. I don’t have a lot of clarity on that thought. But yeah, the intuition that if you can make sane decisions about whether or not you trust your own servers, or whether the operator of an endpoint should trust the server, then sure.

David: I think JavaScript cryptography is much broader than end-to-end JavaScript cryptography, because it may also just be the case that you have a token from some API that you’re calling, and it would be nice to be able to verify said token in JavaScript, either with a third-party library or without one, and just be like: oh, this PASETO or JWT or protobuf token, or whatever credential I got, was in fact signed by the other party. That’s just a thing that you might have to do in the course of building an app.
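
(A minimal sketch of the kind of check David means, using the standard crypto.subtle.importKey and crypto.subtle.verify calls. The token layout, the ECDSA P-256 choice, and how the public key reaches the client are illustrative assumptions, not any particular token spec.)

```javascript
// Hypothetical token: base64url(payload) + "." + base64url(signature),
// signed with ECDSA P-256. Only the crypto.subtle calls are standard API.
function b64urlDecode(s) {
  s = s.replace(/-/g, "+").replace(/_/g, "/");
  s += "=".repeat((4 - (s.length % 4)) % 4); // restore padding for atob
  return Uint8Array.from(atob(s), (c) => c.charCodeAt(0));
}

async function verifyToken(token, spkiPublicKeyBytes) {
  const [payload, signature] = token.split(".");
  const key = await crypto.subtle.importKey(
    "spki", spkiPublicKeyBytes,
    { name: "ECDSA", namedCurve: "P-256" }, false, ["verify"]);
  return crypto.subtle.verify(
    { name: "ECDSA", hash: "SHA-256" },
    key, b64urlDecode(signature), b64urlDecode(payload));
}
```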

Deirdre: Yeah, it would be nice if you could call crypto.subtle.verify on a credential that someone handed to you. And it’s not just only RSA and-

David: That’s just completely separate from the end-to-end model.

Thomas: But it still comes down to the same problem. The mental model I have of this is: you could also just have the API export a method that says, “do I trust this token or not?” And it’s the same security model, right? Because if you didn’t have that endpoint, if you were relying on cryptography to do that, then the server can just feed you code that will subvert the cryptography.

David: Yeah, but someone needs to implement the “do I trust this token or not” method. I guess you could implement it with a POST to another server. Sure.

Deirdre: And that’s another connection.

David: Yeah, I guess. Anyway, my point is that I think there exist use cases where this threat model does make sense, whether it’s ephemeral or medium-length. It’s just: you don’t want some sort of content existing in a database that someone can run SELECT * on, or on a stream that someone could tune into by just doing, like, an RTC connect.

Thomas: Right. And in the standard HTTP, more importantly HTML, application model, where I can push JavaScript and HTML to you, I don’t natively get the capability of giving you ciphertext. But if I just give you this JavaScript blob that does cryptography, then I can be reasonably sure that there’s no plaintext in my database. Yeah, I buy that completely. Right. If people go into this understanding that that’s a situation where your trust in the cryptography is the same as your trust in the server, and there are just logistical, kind of operational reasons why it’s nice to be able to trust the JavaScript instead of having the server export new methods or whatever, that makes perfect sense to me. And it’s probably true that most of the opinions that people have about browser-based cryptography are based on a really dumb blog post that I wrote, like, I don’t know, 15 years ago. But more importantly... that sounded really bad.

David: But Thomas wrote it before I was born.

Thomas: But more importantly, are you 15 years old?

David: No, the blog post is actually older than that.

Thomas: It’s real old. But more importantly, there was a time when there was a monkey knife fight of different secure messengers, right? And there was intense motivation to get more users really quickly. And a really easy way to get more users for your messaging application was to not make people install things. This is one of the oldest stories and go-to-markets for applications: the app store click is death. If you can just be in the browser, it’s so much simpler, right? So at the time, there was immense incentive for people to say, well, we can just deliver this cryptography code via the browser.

It’s the same code, it’s the same cryptosystem, so who cares? Well, in messaging, that’s absolutely not the case. It’s the opposite of the case. It’s the worst possible thing you could do. But for the really narrow, kind of marginal security use cases we’re talking about here, it’s more that the person who’s trusting the cryptography is the person who’s running the server. It’s not about the end user; it’s about the person delivering the application. I would personally, as the vendor of this application, not like to have hazmat plaintext in my database. If I could just trust my own code to keep hazmat out of the database, that’s a huge win for me. Yeah, I totally buy that that’s the case.

And I also buy the idea that dogma about keeping cryptography out of browser JavaScript would keep you from really seriously considering those things. That seems right to me.

Deirdre: Yeah. And now you have WASM. WASM is not, like, an end-all-be-all, because WASM and JavaScript eventually go through an interpreter and things like that.

David: A JIT.

Deirdre: Yeah, a JIT. Which matters if you care about shit like side channels, and you care about the JIT legitimately not fucking around with the very pretty cryptography implementation that you originally wrote in Rust and then compiled to WASM and shipped into the browser. I wouldn’t worry too much about that. But having WASM as a nice compilation target, so you can write your cryptography in a higher-level language than JavaScript and then compile it and ship it and run it on the web, I think that is nice to have.

David: Yeah, it addresses the fact that writing code in JavaScript to do cryptography is just the fucking worst.

Deirdre: Yeah.

Thomas: There was a two-week period where we were doing our Macaroon token implementation at Fly, where my first-cut implementation was in Rust, and we were seriously considering... so we have this problem: we write this Rust Macaroon library, but most of the code that needs to consume and deal with Macaroons is in Go, and what isn’t in Go is in Ruby. And it’s like, well, one thing we could possibly do here is just compile the whole thing to WASM, and there’s a bunch of different WASM interpreters that we can run from Go. There was a brief, wonderful period of about a week and a half where all of our token code was going to be Rust code, but it was going to compile down to WASM, and to evaluate a token from Go code, server side, nowhere near a browser, you would evaluate the WASM of the Rust.

Deirdre: Yeah.

David: And then you realized you didn’t want to run anything that involves cgo, because there’s not, like, a good pure-Go interpreter.

Thomas: No. I don’t know if there’s a good pure-Go WASM interpreter, but there are pure-Go WASM interpreters, right? If I could do cgo, I could just go directly to the Rust code. But there are pure-Go WASM interpreters, and then it’s like, how much do I care about whether the WASM interpretation of the Macaroon token code is going to be fast enough or whatever? And it took me, like, a night to port it all to Go, and all the rest of the people were very sad.

Deirdre: So you rewrote the original Macaroon implementation in Go, and then you were just like, nah, we’re done.

Thomas: Pretty much. We just open-sourced our Macaroons. I think we have the world’s most important Macaroon implementation right now, just because no one uses Macaroons, but still, we have one running in the wild, right?

Deirdre: So that’s awesome. I’m putting this in the show notes.

David: Are you going to write a blog post about this at any point?

Thomas: Yes, you certainly will. And I don’t mean to hijack this with that. It’s just, you said WASM, and I remembered that my only real contact with WASM was not browser stuff, but getting my cryptography code to Go.

Deirdre: From Rust to Go. But still, there’s all these runtimes that are like, we are a super fast WASM runtime. And I think Wasmtime had this high-assurance validation of their implementation or whatever. Wasmtime is pretty cool.

David: Who knows?

Deirdre: Having WASM as this kind of new target that’s cross-platform is kind of sexy. And I don’t know if anyone expected that, but the fact that it kind of fell out of “we’re trying to do better bytecode shipping on the web instead of just raw JavaScript” is pretty cool.

Thomas: The moral of the story here, as I understand it, is that people are blinkered about browser cryptography because they’re all assuming that the only threat model is Signal.

David: Signal’s threat model, yes. And I think there are threat models in between. And then the orthogonal concerns about how much JavaScript sucks are somewhat addressed by the fact that you can compile things to WASM or whatever.

Thomas: Yeah, and for me to concede that this is the truth, I do not in any way have to acknowledge DNSSEC.

Deirdre: Correct.

David: Also, you already conceded that this was true like a couple of minutes ago.

Thomas: All right, we’re on the same page. We’re good?

Deirdre: Yeah.

David: Okay, cool.

Deirdre: Nice.

David: We’re on the same page. We didn’t even have to argue. Turns out we agreed; he just agreed up front.

Deirdre: Same origin policy.

David: Before we talk about the same-origin policy... “a great segue”; if you ever hear that at a party, just leave. Let’s talk: what would fix this? Is there a way to make the web look more like a regular app? And basically what that means is you need some way to separate client app distribution from server-side app distribution.

Deirdre: And there were Chrome browser extensions, and extensions that were shaped like apps, like installable apps in your browser. That seems to have just fallen away.

David: Those existed. There’s also isolated web apps, a thing I know about because I work for Google.

Deirdre: Sure. I don’t know if I’ve used... they’re-

David: They’re just like mega PWAs, basically, progressive web apps. But you can imagine that there exists some way to bundle a bunch of JavaScript code up front and then have some sort of guarantee that it doesn’t load other JavaScript code.

Deirdre: Okay, run it.

David: But then there’s still a bunch of interesting questions after that.

Deirdre: Yeah, because that doesn’t follow the installable store model of: we track versions of this thing, and then we sign the thing, and you can chain those signatures up to some sort of authority, or whatever equivalent. Because I install shit from the Chrome extension store, and it kind of has that, which is nice. But that’s not a web thing, that’s a Chrome thing. There’s a cross-browser web extension manifest thing now, with common-ish API definitions and crap like that, but it’s still the Chrome Web Store, and the Safari equivalent, and the Mozilla equivalent, and whatever. Don’t we have site integrity crap? And wasn’t that supposed to help? Or-

David: There maybe exists, or might one day exist, a mechanism for saying: here’s a bundle of JavaScript code, go install it once. And “install” in quotes.

Deirdre: Okay.

David: And then you can’t load in new JavaScript code. But then you have this problem, I think you’re saying: how do you know when there’s a new version of the code? Who is giving it to you? So you’ve shifted the problem from “on page load” to “there’s a new version and it’s probably automatically getting installed,” and what do we do now?

Thomas: But that’s not a small shift. At that point, you’re mostly in the same place as native apps.

David: Well what do you mean? Sorry, what’s not a small shift?

Thomas: I’m saying that, just as a veteran of a million of these arguments on message boards, the classical forum argument that web app crypto is just fine is that native apps are not as pure as you think they are: your native app auto-updates anyway. When there’s a new version of the native app, your app store updates it or whatever. Right?

David: Well, let’s explore why. I don’t agree that if you suddenly had an installable, auto-updating web app, it would have the same security properties as an auto-updating native app. So why not?

Deirdre: I mean, if it doesn’t do the refresh-and-pull-from-server thing, if it’s literally an installable bundle, when I refresh it, it’s refreshing from disk, not refreshing from the server.

David: I’m assuming that there’s an auto-update mechanism for the server to push a new version of the app, just like there is on your iPhone. In the US, you’re probably auto-updating all of your apps in the background.

Deirdre: So in this model, an update is pushable from a random server, like fly.io or securitycryptographywhatever.com, versus, every time I open the app, it’s pulling from that same server, in contrast to pulling a manifest from some well-known location.

David: And then the manifest says, like, oh, there’s a new bundle of JavaScript code available at X, but otherwise there’s no new JavaScript. I mean, it’s something like that.

Thomas: It’s worse than every time you open the app, right? It’s, like, every time you interact with the app, it’s potentially loading new code, in the current web model.

Deirdre: In the current model, yeah. In, like-

David: A progressive web app world.

Thomas: But this is essentially every interaction you have.

Deirdre: Yeah, you might click on something in your web app, and it might actually be loading some dynamic JavaScript from some random server in the background, and that’s totally fine.

David: Well, probably the same server as last time. But yeah, in the sense of the same name as last time. And we all know that that doesn’t have the same security properties as loading a new app from an app store. And let’s say why that is: the answer is it still kind of reduces to your TLS connection again. Okay, is there a way to fix that? I don’t know. Throw a little magic, some transparency onto it? Does that help? What does that mean? I don’t know. What you’re actually hearing is me just kind of brainstorming through part of my job.

Deirdre: All right.

David: We’ve had plenty of episodes where Thomas tries to learn how tokens work so that he can build his Macaroons. And now we’re moving on to just like, David thinking about web security because he has to do that for work.

Deirdre: Like, why couldn’t we have, not literally NPM and not literally Cargo, but something equivalent for my web app bundle? Like a store, with some sort of... you farm out your developer accounts, give them keys and shit like that. It doesn’t necessarily have to be-

David: You’re describing the Mac App Store except React Native.

Thomas: You’re also describing the Chrome app store. Right? That exists.

Deirdre: Yeah, but it would be a proper app, sort of. And it doesn’t necessarily have to be Chrome.

David: I mean, there’s only an extension store now. There’s no such thing as Chrome apps anymore, right?

Thomas: Well, there used to be.

Deirdre: You could, in theory, make this work without a ton of... I feel like this isn’t a terrible thing to support for the web. You just have to make it work. And it’s basically a hop, skip, and a jump from “here’s my giant Electron app that bundles in a browser” to “my Electron app just runs in the browser,” and you pull it from a store, either a community store, or an Apple store, or a Chrome store, Google store, or Mozilla store, something like that. I figure that could be something, and people would get it. And it’s not much different from an Electron app, which, Signal builds their shit on Electron.

David: The question is, well, who runs it? One, no one wants to; running a store sucks. Even the companies that run stores don’t want to run a store. Two, a store is not the web. So I don’t know. Anyway, this is where you get to “this problem kind of sucks,” because you’re like, I want a centralized entity that you register with, and then you’re like, it’s the web, I don’t want anything at all. And then you just are sad.

Deirdre: I mean, honest to God, if Electron, or whoever produces Electron, which is, I think, just an open source project that forked Chromium or whatever and then made it into an app developer platform-

David: Electron is, like, run by some combination of Facebook and Microsoft.

Deirdre: Oh, okay. I think if, like, Electron started up a store that’s literally just Electron apps, but they run in your browser, and it has kind of the update mechanism of Electron apps, I would try those. And depending on how they run their shit-

David: You and maybe a dozen other people.

Deirdre: Hey, more than a dozen people run Electron apps. They just don’t necessarily know they’re running Electron apps unless they’re a developer.

David: Or know what they are.

Deirdre: Yeah, they’re just beefy, and they’re like, why does Slack installed on my desktop eat as much battery as, like, a million Chromes? Well, let me tell you-

David: So you can solve it with a store, specifically for things built with web technologies. That’s one way that you could solve it, for some definition of solve. Other than that... I don’t know, anyone have any magic? I guess some amount of transparency would at least make it so that if your app was compromised, then everybody else’s app was compromised too. And then a day later, Thomas could tell us about how a bunch of users got their app man-in-the-middled on jabber.ru because someone broke a certificate and uploaded a new version of the app. But at least we’d have a ledger of it.

Deirdre: Yeah, it would be like some sort-

David: Of binary transparency, waving hands over who runs the transparency log.

Deirdre: But, like, I could definitely see Chrome, Google, Apple, Cloudflare, maybe Akamai, running something like that. And you can opt into it. Well, Chrome would probably turn on whatever it can do by default. But you could also add other logs if you wanted to, or something like that.

David: Yeah, I don’t know.

Deirdre: I want the thing that I-

David: There probably exists some transparency scheme that you could put together, but it would probably involve a bunch of people running logs out of the goodness of their hearts.

Deirdre: Yeah.

Thomas: This is what Sigstore is about, right?

Deirdre: Oh, yeah, sort of.

David: God, I don’t want to go into that.

Deirdre: Okay. All right.

David: One, because I don’t know enough about the details of how it works, and two, just because I feel like everyone gets really mad all the time.

Thomas: Listeners, call in your questions. To our podcast reviews. In a one-star review. If you give us a one-star review with feedback-

David: Give a five star review.

Deirdre: Yeah, give us your feedback in a five-star review. You can cuss the shit out of us if you give us a five-star review and ask us your questions. But also you can find us on-

Thomas: If it’s a one star review, I will personally respond to your question on the show. But jeez.

David: Alternatively, go to ‘This American Life’, leave a one-star review, and we’ll know it’s for us, because you’re the only person giving ‘This American Life’ a one-star review.

Deirdre: Yeah.

Thomas: All right. Same origin policy. What’s going on?

David: Yeah, I guess this kind of brings us... so all these things are kind of related, and this is just me getting to the fact that the web just fucking sucks, even though it’s really cool. So I’ll posit a hypothetical question. One: cross-origin leaks. Can you say that trying to defend against cross-origin leaks is important while simultaneously saying supply chain security is important? I don’t think that you can believe that one of these things matters if you think the other one matters. The argument there being: why are you including cross-origin things if you care about supply chain security?

Deirdre: Yeah.

David: And two: that the same-origin policy didn’t ever actually work, and is kind of a dumb solution to a maybe-actual problem. Those are my charged hot-take statements to start with.

Thomas: Okay, say more.

David: Yeah, okay. So we had this problem: you’ve got the web, and website A includes an image from website B, and then you’re like, oh, well, send all the cookies that you have for website B when website A uses it. And then they’re like, shit, what if you do actual things with that cookie on website B from website A? And then it’s like, okay, well, you can still send the request, but you can’t read the response in JavaScript. I guess maybe that’ll work. And then you’re like, oh shit, what about... I guess, well, maybe we’ll hide the cookies from JavaScript some of the time. And basically my posit is that what we should have just done was isolate storage and move on with our lives.
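
(To pin down the middle step David describes, “you can still send the request, but you can’t read the response,” here is a small sketch of what cross-origin JavaScript can and cannot do under the same-origin policy. The URLs are placeholders, and note that modern SameSite cookie defaults restrict the historical cookie behavior described here.)

```javascript
// From a page running on https://website-a.example:

// 1. Embedding a cross-origin resource is allowed; historically the
//    browser attached website B's cookies to the request.
const img = new Image();
img.src = "https://website-b.example/avatar.png"; // request goes out

// 2. You can even *send* a cross-origin request with credentials...
fetch("https://website-b.example/transfer?amount=100", {
  method: "POST",
  mode: "no-cors",        // request is sent, response comes back opaque
  credentials: "include", // ...with B's cookies attached
});

// 3. ...but without CORS headers from B, you cannot *read* the response:
fetch("https://website-b.example/secrets")
  .then((r) => r.json())
  .catch((e) => console.log("blocked by the same-origin policy:", e));
```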

Deirdre: Could you?

David: The answer is OAuth didn’t exist in the 90s.

Thomas: This is going to be a real short segment.

David: Yeah.

Thomas: Because it comes down to I took a job at Google and now I’m questioning all the first principles of the web.

Deirdre: Well, took a job at Chrome, to be sure.

David: Well, the other thing the same-origin policy does is it kind of prevents network pivoting, in which, if you are on your enterprise network and then you load blah.com, blah.com can’t start sending requests to your local network and reading the responses. Okay. But I don’t know. Anyway, I’m just kind of like, all of this seems really silly, because sending a request still has a lot of impact. And I guess my question is, what else am I missing here? Or is this just me having an existential crisis that the web security model doesn’t make any sense?

Deirdre: I mean, I feel like I’ve had that several times.

Thomas: That’s what’s happening here. That’s my diagnosis.

David: I apologize for bringing everyone else through this.

Deirdre: That’s okay.

Thomas: You haven’t. I bounced very quickly. I identified what’s happening here. Right. None of us thought the same origin model was good.

David: We just mostly didn’t think about it.

Deirdre: Yeah, it felt like just patching a hole that nobody thought about when the web, as a document hypertext-linking thing, was first developed, and we just sort of keep patching holes.

Thomas: If you had the whole security model to do over again, would you end up with the same origin model? Is that the question?

David: Yeah, I think that’s the question. No. Well, I don’t know, because you end up at this problem that shipping client code dynamically is actually both really powerful and kind of sucks.

Thomas: Well, I have a lot of thoughts about this and so to start with, I’d like to say the Signal post quantum key exchange. What do we want to say about that?

David: Yeah, we’ll finish this conversation another time, dear listener. Thomas, please go on. What’s Signal’s post-quantum key exchange?

Thomas: Deirdre is the one who wanted to talk.

Deirdre: Cool. So Signal, a couple of weeks ago, announced and rolled out an update to one of the pieces of the Signal protocol, the Extended Triple Diffie-Hellman key exchange (X3DH), and they updated it by adding Kyber prekeys alongside their existing elliptic curve Diffie-Hellman prekeys, specifically to try and mitigate store-now-decrypt-later attacks by people who may someday have a sufficiently large quantum computer that could do something with the Signal protocol encrypted messages that currently exist. This was done a couple of weeks ago, and I was one of the reviewers that looked at the preliminary draft document just before it got deployed. I got looped in late, and we gave some feedback and they tweaked some stuff. But a couple weeks later, there was some work done by one of our collaborator reviewers to formally model the updated key exchange using ProVerif and CryptoVerif. ProVerif is an automated tool in the symbolic model, and CryptoVerif is a prover tool in the computational model. So they give you kind of different views when modeling a cryptographic protocol.

The computational model with CryptoVerif gives you ways to model and prove things like unforgeability of your signature scheme, or something like that. The symbolic model kind of gives you: if the attacker can see all these messages, and all these crypto primitives work perfectly, what can the attacker get, with lots and lots and lots of runs, and things like that? And they found a couple of bugs, they found a couple of nits in the specification, and those are getting updated in the Signal spec. And I think some changes are going to eventually get rolled out in updated versions of the Signal PQ triple Diffie-Hellman thingy, whatever. I don’t even know how to pronounce it anymore because they changed the name. This came out today. I was very excited, though, because it only took about a month of work to model it.

And as always seems to happen, if you try to formally model an even decently specified crypto scheme, bugs just fall out of it in the process of trying to model it. And that did seem to happen here as well, although I do have to say that there was one of them-

David: What did they screw up? What did Signal screw up?

Thomas: Okay. All right. You know, there’s the X3DH protocol, the Triple Diffie-Hellman authenticated key exchange, which is, like, the backbone of Signal, right? Which is the root. Yeah. So X3DH is, I think, a Curve25519-based DH scheme. What’s clever about it is that it doesn’t explicitly use signatures. You guys both know this, but I’m just saying it: it’s a series of Diffie-Hellman exchanges that results in an authenticated key exchange, right? So you have this thing, and I’m always a little bit fuzzy on this, but it’s sort of pseudo-interactive. It’s a messaging system that people use on their phones, right, and they’re talking to each other.

And at any given time, when Alice sends a message to Bob, Bob may or may not be online at that moment, right?

Deirdre: For the first time.

Thomas: Yeah. And Alice wants to send the message to Bob for the first time. So Alice needs to get enough key information from Bob to do not just an authenticated key exchange, but an authenticated key exchange that hides their identities, with enough ephemeral key information. That identity-hiding key exchange is easy to do when both parties are online at the same time, but trickier to do when one of the parties is not necessarily online. So there’s this whole scheme where Bob uploads prekeys to the Signal server, which is like a signed bundle of ephemeral Curve25519 keys-

Deirdre: Signed with the signing equivalent of their long-term identity key pair, yeah.

Thomas: So you’ve got the situation where there’s a finite, expendable resource that the Signal server holds: prekeys that Bob has uploaded. Every time you do a new exchange with a new person, you’re spending some of those prekeys or whatever. Right. And the protocol accounts for that. I think if you run out of prekeys, you get, like, a less identity-hiding thing, or... I forget what the whole deal is there. But there’s a run of the protocol that exists when there are plenty of prekeys, and there’s a run of the protocol that exists when the server is out of prekeys. And it seems like a through line for the formal verification work for Signal

here is that there’s a case where you have three. Remember, the whole idea behind X3DH is that instead of doing a signed Diffie-Hellman key exchange, you do three Diffie-Hellman key exchanges, which accomplish the same thing. And because Diffie-Hellman key exchanges are much faster than signatures, that’s a win. Right. So you have DH1, DH2, and DH3, which is the normal 3DH thing. And then you have DH4, which is: if there are enough keys to do the fourth, ephemeral-prekey Diffie-Hellman, then you do that, and you have a fourth Diffie-Hellman value. And you take all those Diffie-Hellman values together, you run them through a KDF, I assume an HKDF or something like that, and you get a shared secret, and you’re done. Right. So the root of all evil here is the difference between the run of the protocol that gives you three Diffie-Hellman values and the run of the protocol that gives you four Diffie-Hellman values.
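
(A pseudocode sketch of the derivation Thomas is describing, following the structure of the published X3DH spec. The dh(), hkdf(), and concat() helpers are hypothetical stand-ins for real X25519 and HKDF implementations, not any real library API.)

```javascript
// Sketch of X3DH's session key derivation (sender side), per the
// structure in the published X3DH spec. dh(), hkdf(), and concat()
// are hypothetical helpers.
function x3dhSharedSecret(aliceIdentityKey, aliceEphemeralKey, bobBundle) {
  const dh1 = dh(aliceIdentityKey, bobBundle.signedPrekey);  // DH1
  const dh2 = dh(aliceEphemeralKey, bobBundle.identityKey);  // DH2
  const dh3 = dh(aliceEphemeralKey, bobBundle.signedPrekey); // DH3

  let input = concat(dh1, dh2, dh3);
  if (bobBundle.oneTimePrekey) {
    // DH4 only happens in the run where the server still had one-time
    // prekeys to hand out: the three-vs-four split Thomas describes.
    input = concat(input, dh(aliceEphemeralKey, bobBundle.oneTimePrekey));
  }
  return hkdf(input); // the shared session secret
}
```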

Deirdre: Or at least one of the issues. Yes.

Thomas: Right. I think I see it in more than one situation here, right? But either way, the first thing is there’s, like, a public key confusion thing, where the new post-quantum thing adds an additional key exchange to the whole protocol, right? Which is the post-quantum security here. And there’s a confusion attack where it’s like: do I have the full prekey run, where I have the fourth Diffie-Hellman value, or is that just the post-quantum value, the result of Kyber? It’s not encoded such that you could tell them apart just by the encoding. But of course the Kyber key, or the Kyber message, whatever it is, is not the same size as the 25519 thing, so it’s immediately apparent in actual Signal. In reality, that can’t happen. But in a formal verification sense, if in the future you swapped out that Kyber key exchange for something that was 32 bytes, or you went to some different curve thing, or there was-

Deirdre: A bug that did everything right but then spat it out on the wire as 32 bytes. You did everything right for your Kyber, but you had a bug that accidentally sliced it and sent it over as 32 bytes. The formal verification is basically saying: the thing you wrote down doesn’t prevent this public key, or prekey, confusion thing. And basically they’re arguing that if you don’t have these specifically separated encodings, and you end up giving a weak, truncated Kyber key to the other party, it could lead to a weak Diffie-Hellman and make it easily attackable, something like that.
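A sketch of the kind of check that rules this out: a decoder that refuses any key whose declared algorithm and length don’t match, instead of inferring the type from position. The 32-byte and 1568-byte sizes are the real X25519 and Kyber-1024 public key lengths, but the one-byte tag format is invented for illustration:

```python
# Hypothetical tagged-key decoder: every key carries an algorithm tag, and the
# decoder enforces the length that tag implies instead of guessing from size.
EXPECTED = {
    b"\x01": ("x25519", 32),       # X25519 public keys are 32 bytes
    b"\x02": ("kyber1024", 1568),  # Kyber-1024 public keys are 1568 bytes
}

def decode_public_key(blob: bytes) -> tuple[str, bytes]:
    tag, body = blob[:1], blob[1:]
    if tag not in EXPECTED:
        raise ValueError("unknown key type")
    name, expected_len = EXPECTED[tag]
    if len(body) != expected_len:
        # A truncated Kyber key can no longer masquerade as an X25519 key.
        raise ValueError(f"{name} key must be {expected_len} bytes, got {len(body)}")
    return name, body
```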

David: So in the store-then-decrypt threat model, how does triple-DH fit in? Is it interactive enough that it holds up, or does it effectively break the same way? Or would you need to be a man in the middle?

Deirdre: So this is for a non-active quantum attacker?

David: Yeah, well, the prekeys case would effectively devolve to store-then-decrypt. But yes, for the non-prekeys case.

Deirdre: For non-prekeys, I don’t know. I think it just basically, oh shit, no, I don’t remember.

David: Would you need to be an active attacker, or could you decrypt a transcript? That’s an interesting question I don’t have the answer to.

Deirdre: So here’s what they did when they upgraded to PQ. You have the elliptic curve prekeys. The long-existing triple Diffie-Hellman was: you’ve got your identity keys, your public identity key, which is the thing you kind of smush together with your partner’s public identity key when you do the fingerprint compare. You’ve got your elliptic curve prekeys, which are signed by the Ed25519 equivalent of your identity key and uploaded to the server. And then you do an ephemeral one when you’re doing your first setup. So you do a Diffie-Hellman with your ephemeral, if you have an ephemeral; if you don’t, there’s the last-resort prekey or something like that that you share with other people. And this is how you do it: Diffie-Hellman between your identity key and their ephemeral, between their identity key and your ephemeral, and between ephemeral and ephemeral.

This is the triple Diffie-Hellman. The PQ one additionally uploads Kyber prekeys along with the elliptic curve prekeys, and those are also signed by the elliptic curve identity keys; they’re not signed by a post-quantum equivalent. And then you decapsulate your Kyber shared secret and include it in your KDF. This is all trying to protect against someone just capturing all the public traffic, storing it, and decrypting it later. I think that falls down for-
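Sketching that change on top of the earlier X3DH snippet (reusing its hkdf() helper; the Kyber shared secret is a random placeholder standing in for a real decapsulation output):

```python
import os

# hkdf() as defined in the earlier X3DH sketch.
dh1, dh2, dh3, dh4 = (os.urandom(32) for _ in range(4))
kyber_shared_secret = os.urandom(32)  # placeholder for the KEM decapsulation output

# PQXDH-style combining: append the KEM secret to the DH outputs before the
# KDF, so an attacker who eventually breaks the discrete logs still lacks
# the Kyber contribution to the initial secret.
ikm = dh1 + dh2 + dh3 + dh4 + kyber_shared_secret
shared_secret = hkdf(ikm, salt=b"\x00" * 32, info=b"example-pqxdh", length=32)
```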

David: I said it backwards. If you have a transcript, ten years in the future you just take your X3DH log and you would have the plaintext. But right, for prekeys, you would need a quantum computer now to upload fake ones, which is why they can be signed by-

Deirdre: If you store everything, and sometime in the future discrete logs just evaporate, then against current old-school Signal you’d just be able to derive all the traffic all the way down. If you capture everything, for the triple Diffie-Hellman I think there’s a little bit more work, whatever, because of the Double Ratchet. Now with KEMs, as long as you are doing your PQ triple Diffie-Hellman with your Kyber KEM, that gets fed into your KDF to start your Double Ratchet, so you’d no longer just be able to fetch that out. But they had some other cute little bugs, too. I’m reading the forward secrecy thing, no, not the forward secrecy thing, sorry. Yeah, the weak post-quantum forward secrecy.

Thomas: Yeah, earlier I said the prekey thing was identity hiding, and I think it might just be forward secrecy. Either way, because this is like a, wait, hold on, I thought I understood the post-quantum forward secrecy thing. Everyone, this was published today; we’re reading it on the fly as we go. Right: the one-time post-quantum prekey is the good case, the run where you’ve got enough prekeys to run the whole protocol. And the signed post-quantum public key thing is like the last-resort key.

Deirdre: They’re all signed, but basically they keep the last resort key and they may use it for multiple parties, I think, or something like that, because they need something up there, or else they can’t start a conversation at all.

Thomas: And they’re signed by-

Deirdre: They’re signed by the, yeah, the Ed25519 equivalent of their identity key. This is another funny thing in the footnotes, where they’re like: there’s no security notion of literally having your signing key and your Diffie-Hellman identity key be the same key, because they are, and Signal has done this for a very long time. But it always felt a little-

David: They’re doing a separate trick, not just triple-DH; they’re doing the XEdDSA transformations.

Deirdre: And that seems to work okay, except there isn’t actually a formalized security notion of doing that. Usually what you do is have a root secret and some sort of key derivation tree: you’d get a signing key from that root secret and a Diffie-Hellman key from that root secret. Not literally have them be, like, there is no root, they’re just different versions of each other, because that’s a thing you can do with Montgomery and Edwards curves. And it’s just kind of funny, because they’re like: because of this, we have to do a hack in our model and just pretend they’re different, or the proofs just wouldn’t go through. So that’s funny to me, that that just falls out. The prover, the formal modeler, is like: what do you mean you don’t have two different keys? The proof tool wants to tell you that they should be different.

And I don’t know if there’s just been no pressure to synthesize a security notion of what it means if they are the same key. But whatever.
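The derivation-tree alternative she describes, sketched with the same hkdf() helper (the labels are illustrative): one root secret, two domain-separated children, so the signing key and the Diffie-Hellman key are never literally the same key.

```python
import os

# hkdf() as defined in the earlier X3DH sketch.
root_secret = os.urandom(32)

# Distinct info labels yield independent-looking keys from one root.
signing_seed = hkdf(root_secret, salt=b"", info=b"example/signing-key")
dh_seed = hkdf(root_secret, salt=b"", info=b"example/dh-key")
assert signing_seed != dh_seed  # different subtrees, different keys
```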

David: Historically, this has been like, bad.

Deirdre: Right.

David: But I guess I’m not coming up with any specific examples off the top of my head.

Deirdre: Right.

David: Sort of the old style RSA TLS key exchange, right?

Deirdre: Yeah. We kind of talked about this: when you’re putting all of these things into your KDF, you might have a triple Diffie-Hellman or you might have four Diffie-Hellmans, depending on what prekeys and stuff are available to you, whether someone’s online to do some more ephemeral crap. Or in the PQ setting, you’re doing three or four classical Diffie-Hellmans plus a KEM thing that gets you a shared secret from the KEM. And the formal model basically spat out that there’s a little bit of protocol-specific information that you feed into the KDF in all of these cases, but it does not change depending on whether you do triple Diffie-Hellman, four Diffie-Hellmans, triple Diffie-Hellman plus a Kyber, or four Diffie-Hellmans plus a Kyber. And the formal model was like: you should not do that. Basically, you took something that was pretty secure in the classical setting, the original triple Diffie-Hellman protocol, and you changed it to add a new thing, and you may have introduced a security issue, which the formal model picked up on, that you didn’t have in the first place. Even though you’re taking Kyber, which is good and post-quantum and secure, and you’re taking triple Diffie-Hellman, which is good and secure in its own way.

And HKDF is generally considered fine and good and all this. You can take pieces that are all independently considered good and secure, and still combine them in an insecure way, and the formal model was basically yelling about that. So that’s an interesting one.

Thomas: This is kind of what I was thinking earlier when I said the root of all evil in the formal model here, right? None of these are real practical Signal vulnerabilities. But in the formal model, the big complexifier is that on the classical side, Signal might be working with three Diffie-Hellman shared secrets or it might be working with four, depending on the prekey situation, and then you might or might not have the post-quantum piece on top. So there’s this weird variability, right? And the purpose of a KDF is just to glue that shit together. I have three keys, I have 50 keys, I have 100 keys, what the fuck ever: I feed them to HKDF, I get a fixed output, and I move on and life is happy. That’s the beautiful thing about HKDF; it papers over all those details. But you have an ambiguity there: do I have three classical and a post-quantum, or four classical and a post-quantum, or four classical and two post-quantums because I have a post-quantum prekey, right? And in the formal model, the input to the KDF appears to just be the keys, so nothing disambiguates those cases, and the formal model is fucked at that point. Everything is cross-protocol, because ultimately the whole point of all these protocols is to get fed into HKDF to spool out your shared secrets. But in reality there’s this blob of metadata, the info blob or whatever, that encodes what the keys are, plus a bunch of other random protocol metadata, and the info blob disambiguates all of that stuff. So it’s not really a problem in any practical sense. But then the info blob itself is not formally specified. So now Signal knows they should super-specify this info blob, because it’s actually load-bearing. It’s interesting that we now recognize that blob of info stuff is load-bearing.

Deirdre: The info blob basically says, “I am Signal, I am using Curve25519, I am using Kyber,” and that’s it. It did not vary: it basically says either “I support post-quantum” or “I don’t.” It did not encode “I am doing three Diffie-Hellmans and a Kyber” or “I am doing four Diffie-Hellmans and a Kyber” or whatever. It did not encode the fact that the input varies based on the different kinds of keys, and the different numbers of keys, being fed into the KDF. And that would be the cross-protocol attack part. So it’s interesting.
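A sketch of the fix the model is asking for, again reusing the hkdf() helper from the X3DH sketch: build the info string from the actual shape of the handshake, so a three-DH run, a four-DH run, and their Kyber variants can never produce the same KDF input (the label format is made up):

```python
# hkdf() as defined in the earlier X3DH sketch.
def derive_initial_secret(dh_outputs: list[bytes], kem_secret: bytes | None) -> bytes:
    # Encode which variant actually ran, not just "I support PQ": the number
    # of DH values and the presence of the KEM both land in the domain separator.
    info = f"example-proto|dh={len(dh_outputs)}|kem={kem_secret is not None}".encode()
    ikm = b"".join(dh_outputs) + (kem_secret or b"")
    return hkdf(ikm, salt=b"\x00" * 32, info=info, length=32)
```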

Thomas: And then there’s a final finding here, the KEM re-encapsulation attack, which I will summarize by saying: I don’t understand this at all.

Deirdre: This is kind of cool, because this is where KEMs and Diffie-Hellman key exchange start to skew apart, and you have to think about them slightly differently. If you’re doing a Diffie-Hellman, you give someone your public key, they give you their public key, and you combine the two public keys, doing some math with your secret keys, to spit out a shared secret. So in theory, there is no way to agree on a shared secret unless both of your public keys have contributed something. With a KEM, you’re using the key pair to do encryption of a shared secret, and then the other side decrypts the ciphertext of the shared secret using their secret key. It’s more like public key encryption than anything close to a Diffie-Hellman. And basically what they said is there may be an issue where you’re using the shared secret you get from your KEM, but because you aren’t committing to the specific Kyber public key that was used in this KEM, the one that encrypted the shared secret, the shared secret doesn’t have anything to do with the key pair. It’s just randomly generated.

And with this kind of public-key-encryption shape of the KEM, they’re basically saying you cannot just rely on the IND-CCA security, the indistinguishability-under-chosen-ciphertext-attack security, of the KEM. You have to commit to the public key you used to do that encryption as well. This is not a thing you have to think about when you’re doing classical Diffie-Hellman, but you do have to think about it when you’re shoving a KEM into a protocol that was designed entirely around Diffie-Hellmans. And I forget what the actual attack is. But this is not the same as committing to your public key in something like a Schnorr protocol, where you’re turning your identification protocol into a signature scheme; it just smacks of the same sort of thing, that you need to commit to the public parameters of the scheme, of the transcript you’re doing. And in this case, in this protocol, the public key of the KEM is the thing you have to commit to before you start feeding the shared secret you encrypted with the KEM into your KDF or whatever, because it can vary and stuff.

The main issue here is that the compromise of a single PQ public key, a Kyber prekey, enables an attacker to compromise all future KEM shared secrets of the responder. And this holds even after the responder deletes the compromised Kyber prekey, and it can be carried out without violating the IND-CCA assumption on the KEM. So yeah, you need to commit to the public key to avoid it.
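A sketch of that commitment, reusing the hkdf() helper once more: hash the Kyber public key and ciphertext into the derivation alongside the shared secret, so a secret encapsulated under one key cannot be passed off as coming from another (all values are placeholders at the real Kyber-1024 sizes):

```python
import os, hashlib

# hkdf() as defined in the earlier X3DH sketch. Placeholders for the values a
# real implementation would have in hand:
kyber_public_key = os.urandom(1568)  # responder's Kyber-1024 prekey
kyber_ciphertext = os.urandom(1568)  # encapsulation sent by the initiator
kem_shared_secret = os.urandom(32)   # output of encapsulation/decapsulation

# Committing to pk and ct: re-encapsulating the same secret under a different
# (compromised) key now yields a different derived secret.
transcript = hashlib.sha256(kyber_public_key + kyber_ciphertext).digest()
derived = hkdf(kem_shared_secret + transcript, salt=b"\x00" * 32,
               info=b"example-kem-binding", length=32)
```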

David: Oops!

Thomas: It’s pretty neat, right? None of this matters in any practical sense. Even in the rodents-of-unusual-sizes sense, where classical cryptography stops working, Signal still survives, because of the way it’s actually implemented. But as a case study of “I’ve got a really good working authenticated key exchange and transport protocol; can I just plug Kyber into that and have it go?”, it’s tricky in ways that you kind of see coming, encoding ambiguities and stuff like that. It’s a neat case study.

Deirdre: Yeah, I’m very happy that this analysis happened so quickly, because a bunch of cryptographers looked at this scheme as written down. They basically took the original triple Diffie-Hellman specification, which was fine, it was pretty good. I had looked at it years in the past, and when I came back to it years later I was like: hey, there’s a bunch of stuff here I would improve, generally, if you intend for anyone to ever formally model this, or to implement Signal from scratch without looking at your code, because things are under-specified, like the encoding stuff and the KDF separation stuff. And the formal modeling basically confirmed those things, because some of them fall out of the original specification, not the PQ updates. But then you put the PQ updates on top, with the KEM thing and having to commit to the public key, and the fact that you’re extending that KDF even more while it doesn’t vary according to the number of things you’re feeding it, and blah blah blah. It’s very pleasing, that one. Those things were discovered so quickly just by doing some formal modeling with CryptoVerif and ProVerif, which I am not an expert in, but I’m glad that happened.

And then that’s getting shoved out to Signal users pretty soon. One thing they noted is that these issues mostly fall out of the Signal X3DH and PQXDH specs. So they went to the code implementation that actually gets shipped to Signal users and said, hey, we have these encoding ambiguity issues, and the answer was: oh, we don’t actually have any encoding ambiguity issues in the code, but they are in the spec. And that’s one of those things that’s kind of annoying about, you know-

David: Classic Signal. “We have this spec, but we don’t actually quite follow the spec, because we fixed all these other things about it. Stop asking me questions.”

Deirdre: This is one of the inherent issues of: if you don’t update the spec, is it any good? And it turns out that if people are looking at your spec and not at your code, and especially if you only have a spec and don’t have open source code, that might happen more often than not.

David: Worse. Only have GPL code.

Deirdre: Exactly. Like, people can go look at the GPL code that is libsignal or whatever all they want, but then you have to pray that you aren’t accidentally-

David: Then you’ve been tainted.

Deirdre: Yeah, you have to pray that you’re not accidentally violating the GPL by having once looked at the GPL version of libsignal, because just looking at the Signal specification does not nail down every single detail the way the code does. Yeah, this is extremely my shit. This is catnip. I love this shit.

Thomas: Also, it turns out that all the INRIA people, I don’t know how you pronounce INRIA or whatever, who were doing formal verification have their own company now called Cryspen.

Deirdre: Yes.

Thomas: So there’s a company called Cryspen, which is Karthik Bhargavan and a bunch of other INRIA people who were doing formal verification. So if you want your protocols formally verified, go call Cryspen, and they’ll formally verify your stuff. Say Security Cryptography Whatever sent you for a 15% discount.

Deirdre: I don’t know if they’ll actually give you a discount, but they may thank us.

David: The real question is, for extra money, will they tell you it’s fine?

Thomas: They absolutely will, by the way.

Deirdre: Yeah, Cryspen is pretty cool, because they’re doing a lot of Rust. They’re doing hacspec, which I’ve talked about before, and they’re helping with modeling OpenMLS, the Rust implementation of the Messaging Layer Security protocol, which is a big mother of a thing. So they’re doing a lot of cool stuff. Give them business; they’ll do a good job.

Thomas: Tell him Thomas sent you.

Deirdre: Tell him Thomas sent you. Oh, and the encoding shit. I wanted to talk about this the other day: there was a cool paper called Comparse, from a related person who worked with Karthik, which basically formally models how to do secure message encodings and message formats. That’s a thing we’ve talked a little bit about, but it’s a thing that bites you in the ass all the time when you’re implementing cryptographic protocols: how do you send these important blobs of cryptographic material over the wire in a way that doesn’t have parsing issues or ambiguity issues or weirdness that a kind of symbolic-level attacker can leverage? From my perspective, it just seemed to be folk knowledge that gets handed down: okay, you do fixed-length encodings, and if they’re not fixed length, you have a specific byte that tells you how long the next field is and what type it is, and that cannot vary, and shit like that. It’s all folk knowledge from attacks you’ve seen before, lessons learned about how to do secure message encodings and formats, passed down without any formal notions underpinning them. And the Comparse paper basically does that for the first time: they have a formalized framework of notions for how you create these secure message formats, and what those notions are.

And I think there are four main ones. I loved it, and I think it should be must-read material for any secure protocol implementer, or anyone who has to take the bytes of whatever crypto protocol they’re doing and send them to somebody else. It’s important reading, and maybe we’ll talk to them more about it some other time.
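As a sketch of that folk wisdom in code, a tiny tag-length-value scheme where every field is explicitly typed and length-prefixed, so decoding is unambiguous and truncation is an error rather than a silent reinterpretation (the format is invented for illustration):

```python
import struct

def encode_fields(fields: list[tuple[int, bytes]]) -> bytes:
    """Each field becomes: 1-byte type tag, 2-byte big-endian length, value."""
    out = b""
    for tag, value in fields:
        if len(value) > 0xFFFF:
            raise ValueError("field too long for a 2-byte length prefix")
        out += struct.pack("!BH", tag, len(value)) + value
    return out

def decode_fields(blob: bytes) -> list[tuple[int, bytes]]:
    fields, offset = [], 0
    while offset < len(blob):
        tag, length = struct.unpack_from("!BH", blob, offset)
        offset += 3
        if offset + length > len(blob):
            raise ValueError("truncated field")  # refuse, never silently slice
        fields.append((tag, blob[offset:offset + length]))
        offset += length
    return fields

# Round-trip: an X25519-sized field and a Kyber-1024-sized field stay distinct.
msg = encode_fields([(1, b"\x00" * 32), (2, b"\x01" * 1568)])
assert decode_fields(msg) == [(1, b"\x00" * 32), (2, b"\x01" * 1568)]
```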

David: So you’re saying I shouldn’t use Python Pickle for-

Deirdre: No, no, don’t do that.

Thomas: That was pretty weak.

David: That was a weak one. Yeah, that was a weak one.

Thomas: I thought it was a good read. I had no problem reading it while you guys were talking about the browser security model. I recommend everyone go read it.

Deirdre: The blog post. The Cryspen blog post. That one. Yeah. Cool.

Thomas: There’s a paper. I assume the paper is equally easy to read.

Deirdre: They have their blog post, and they have their actual models in ProVerif and CryptoVerif. I don’t know if they’re doing much more beyond that. Maybe they’re doing a paper; I haven’t seen one yet. There’s a nice README in the GitHub repo. Yeah.

David: All right, Deirdre, take us home.

Thomas: I was just going to say we have an excellent next episode coming up. I’m very psyched about it.

Deirdre: Right.

Thomas: I’m not going to say what it’s about, but it’s going to be great. But, yes, this was fun. Good talking to you.

Deirdre: Yes.

Thomas: Take us out, Deirdre.

Deirdre: I will not interrupt you. Nice little teaser. Yes, that’ll be happening in a little bit. Cool. Security Cryptography Whatever is a side project of Deirdre Connolly, Thomas Ptacek, and David Adrian. Our editor is Nettie Smith. You can find the podcast on Twitter @scwpod, and the hosts on Twitter, but we’re not really on Twitter that much. I’m on Bluesky at @durumcrustulum.com. You can buy merchandise at https://merch.securitycryptographywhatever.com. Thank you for listening. Bye, hippopotamus.