• https://hovav.net/ucsd/dist/draft-shacham-tls-fasttrack-00.txt
  • https://crypto.stanford.edu/~dabo/pubs/papers/fasttrack.pdf
  • https://datatracker.ietf.org/doc/html/rfc8446
  • SoK: SCT Auditing in Certificate Transparency: https://arxiv.org/pdf/2203.01661
  • A hard look at Certificate Transparency, Part I: Transparency Systems: https://educatedguesswork.org/posts/transparency-part-1/
  • A hard look at Certificate Transparency: CT in Reality: https://educatedguesswork.org/posts/transparency-part-2/
  • E2EE on the web: is the web really that bad? https://emilymstark.com/2024/02/09/e2ee-on-the-web-is-the-web-really-that-bad.html
  • Launching Default End-to-End Encryption on Messenger: https://about.fb.com/news/2023/12/default-end-to-end-encryption-on-messenger/
  • ekr’s newsletter: https://educatedguesswork.org
  • Over 25 years of ekr RFCs: https://www.rfc-editor.org/search/rfc_search_detail.php?sortkey=Date&sorting=DESC&page=All&author=rescorla&pubstatus[]=Any&pub_date_type=any

Subscribe to his newsletter at https://educatedguesswork.org/

This rough transcript has not been edited and may have errors.

Deirdre: Hello. Welcome to Security Cryptography Whatever. I’m Deirdre.

David: I’m David.

Thomas: I’m also David.

Deirdre: No, you’re not. And we have a special guest today, Eric Rescorla. Hi, Eric.

Eric: Hi.

Deirdre: Hi. Welcome. For those who don’t know, Eric Rescorla has been intimately a part of designing a more secure Internet. And we’re very happy to get to have a big deep dive on various Internet security things, including SSL and TLS. How did you get started on SSL stuff, which we don’t want to use anymore? And you were a big, big part of TLS 1.3, which is like the most modern network security, Internet security protocol that we do like and do use.

Eric: So, yeah, so if we scroll back to, like, 1993.

Deirdre: Oh, God.

Eric: I worked for this startup called Enterprise Integration Technologies. It was in Palo Alto and did a lot of DARPA contracting and stuff. And the people there were very enamored of the web. And when the web first came out, everybody sort of knew that the web needed some kind of security, and there was a bunch of flailing around in that area. As part of one of our contracts, we actually started trying to design some security for the web, and we designed this thing called Secure HTTP, which did not take off. At the same time, Marc Andreessen actually worked for EIT for a very short period of time and was sort of involved with that project. And then when Marc went to found Netscape, they decided they wanted something, and they decided that Secure HTTP was bad and they should build something better. And they built SSL. In retrospect, they were right and we were wrong.

Eric: You know, we had sort of this design that was very tightly integrated with HTTP and was much more like message encryption and message signatures. And they were like, well, we’re just gonna do something simple, which is SSL. And so, you know, if you’ve read that famous Dick Gabriel “worse is better” essay, this is a classic worse-is-better design.

Thomas: Can I stop you and ask a question, just out of morbid curiosity?

Eric: Yeah.

Thomas: How bad was the cryptography in secure HTTP?

Eric: It wasn’t terrible. A way to think about it is, it was basically S/MIME: encrypting HTTP messages or signing them. The weakest part was probably that the binding to identity was not well understood.

Thomas: But SSL got basic secure transport things wrong, the weird CBC mode type stuff and all that.

Eric: We mostly just ripped off S/MIME, or I think it was S/MIME or MOSS, which you may recall was another attempt to do secure object encryption for email. Since HTTP was modeled on email, on RFC 822, we were just like, well, we’ll borrow all the email stuff. So I think that part, I mean, I don’t want to promise any of it was very good, but I think that part probably was a little closer. The part that I think was less good was, aside from some clunky parts, that we didn’t really have any notion of origin. It was much more like, hey, when we give you a link, we’re going to also give you the keying material to use with that link, which I think in retrospect was probably a mistake.

Deirdre: And so you got usurped by SSL.

Eric: So, you know, for a while there was kind of this optic of a protocol war between Secure HTTP and SSL. But when one company is this tiny startup and the other company has basically every browser in the universe, you know who’s going to win that protocol war. So EIT spun off this startup called Terisa to basically do SSL, TLS, or SSL at the time, and also Secure HTTP toolkits. The Secure HTTP part, I think, was intended to be like, we didn’t just lose. And I went to that startup and for a while was doing SSL toolkits, and, you know, at some point just got to be doing a lot of SSL and TLS, largely by default.

Deirdre: I think I’ve completely underestimated how long you’ve been at this, because I just didn’t realize. You’ve literally been there since the beginning and blah, blah, blah.

Eric: Yeah, we had a v2 stack, I think. We definitely had a v3 stack. I remember when v2 was introduced. As you say, v1 never really saw the light of day. It had a number of fairly serious errors in it. V2 was better.

Thomas: That’s another thing I’m super curious about that I’ve never gotten a chance to dig into. The SSL world for me starts with v2, and I’m also super curious about how bad v1 was.

Eric: So v1 and v2 aren’t that dissimilar. I mean, I don’t think I ever saw v1. I think it was only circulated in kind of private emails and then presented, I think, a couple of times. And if I recall correctly, it was like RC4 with no integrity check or something. I mean, I think you have to understand how little anybody knew about designing secure protocols back in 1993, ’94. These things were designed about the same time Applied Cryptography came out, to give you a sense of the timeline.

Deirdre: The big red book? The big blue book? Oh, wait, which one?

Eric: Okay, Applied Cryptography edition one was blue.

Deirdre: Oh, my God. All right. I feel like I was a tiny baby at this time. I mean, literally, I was a six-year-old.

Eric: It wasn’t as big. It was like, I don’t know, two thirds as big.

Deirdre: There just wasn’t enough cryptography. Now we have too much crypto, and now there just wasn’t enough crypto at the time.

Thomas: There’s a lot of wisecracks I could make right here that I’m not making.

Deirdre: I want you to make wisecracks. It’s our podcast. We can do whatever we want.

Thomas: I got to work in this industry.

Deirdre: That’s fair. Okay, so we deployed v2, v3, and then that turned into TLS 1.0 and 1.1, 1.2. And then you spent years and years and years basically leading the 1.3 effort. Can you tell us about that?

Eric: I mean, these were some of the worst names, if not the worst naming in the history of the community, right?

Deirdre: Because 1.3 is incredibly significant, and it would be TLS 2.0 but for the, like, ossification of protocol version numbers on the Internet. Right.

Eric: Well, also marketing, you know. I mean, so, like, the naming, right? For the early versions, with SSL v2 and v3, no one wanted to say, we totally rebooted everything. And then TLS got renamed because the IETF felt like, if we’re going to take this on, the one concession we’re going to make to not just taking exactly what Netscape did, I guess a couple concessions, and one of them was, oh, we’re going to change the hash functions a little bit, and another one was, we’re going to rename it. And then when we did 1.3, I think I was worried that people would be really afraid if I decided to call it TLS 2.0. It’d be safer to suggest it was a minor increment. I thought it was going to be a more minor increment than it was, honestly. I mean, the history here, right, is that there’d been sort of a series.

Eric: I think we thought it was done, that 1.2 was done. You know, there was kind of an end-of-history feel when people were talking about taking it to draft standard and full standard. Because, you know, the IETF has this goofy multi-tier standards thing.

Deirdre: That no one but IETFers really...

Eric: Understand the difference, like practicing everything important like a proposed standard.

Thomas: I feel like I’ve paid a lot of attention to, like, the TLS working group and all that, and not realized this. So at the time of TLS 1.2, TLS was not a full standard?

Eric: TLS is never going to full standard.

David: It’s not a draft. It’s somewhere between draft and full.

Eric: It’s a proposed standard. Right. So, yeah, RFCs get issued at proposed standard, basically. And then there’s this ornate process that takes you to full standard that people just don’t care about, because, for the reasons you’re talking about, nobody can tell the difference. There used to be three levels: proposed, draft, and full. And then it got so embarrassing that they got rid of draft and just mass-promoted a bunch of things to full.

Eric: 1.3 is also not full. It’s still proposed. In fact, Sean and the chairs have been suggesting we take it to full, and I just told them I wouldn’t do any work on taking it to full. I was happy if they did it, but I wouldn’t do any work on it.

Deirdre: Yep. This is a thing that no one in the rest of the world even cares about. It’s literally, is it an RFC? And then they can point at it, and it’s mostly static because of the way RFC documents work. And then everyone inside the IETF is like, it’s not a standard yet, it’s not a standard. And I’m like, everyone else in the world calls a thing with an RFC number a standard. This is all just talking past each other with different words.

Thomas: But anyway, I mean, long story short, TLS is maybe the way that we’re going to encrypt the web.

Eric: Yeah, yeah, that seems likely. So I think, you know, we sort of thought it was kind of done, right? And then a series of things happened. I guess I would say probably the three biggest things are: one, a lot more pressure on the privacy aspects of the system, where there was just a sense that way too much of the handshake was in the clear. The second was, I think, QUIC and QUIC Crypto, and the feeling that the performance was not what people wanted, and that burning a whole round trip before the first byte was bad.

Eric: And those are, I think, the first two things that made people think maybe we should do a new version. And then sometime around this time was triple handshake. And that was the first real significant attack on 1.2 in a super long time. And suddenly people were like, maybe actually we don’t understand the system as well as we think we do. And it was the first, I think, real formal and analytic result, where someone had tried to model TLS and it had come up short. Right.

Deirdre: Resulted in the triple handshake attack.

Eric: Yeah. So what happened was those guys, the INRIA and Microsoft team, were writing a verified stack, whose name I cannot recall at this moment. And, as I understand it, you’d have to ask Karthik, but as I understand it, a number of the defects that they found eventually were the result of basically proof gaps in trying to build it. Like, the proof doesn’t work. Why doesn’t it work? Oh, now we have a problem. So I think, you know, triple handshake came out, and it was like, okay, now we actually need to do something, maybe to clean things up.

Eric: And I sort of thought this was going to be a fairly quick win. I thought we sort of understood it. There’d been a lot of prior art, especially on the zero-round-trip stuff. This goes back really far. You probably know about Snap Start, which is Langley’s thing. But actually there’s a paper called Fast-Track, by Boneh and Shacham, that eventually turned into a journal paper I was on, which had the same kind of idea of memorizing the server’s parameters. Fast-Track was like, memorize the server certificate and use RSA to initiate the handshake. So there’s been a bunch of stuff going back very far on how to shorten the handshake.

Eric: And so we just sort of thought, I just sort of thought at the time, naively, we’ll just rip these things off, cram them into TLS as extensions, and call it a day. And that obviously did not work out. It took about five years to get from start to finish, which is, like...

Deirdre: Not unusual, honestly. I thought it was longer. And hearing five years for the... not monstrosity, but just, like, the big, significant chunk of work that is 1.3, doesn’t sound that long to me.

Eric: So QUIC was about five years, too, from start of IETF to finish of IETF, and it probably had, like, five years of Google machinery in it earlier. So I think, you know, it sort of started small and kind of snowballed. And probably the two biggest things that snowballed it were, one, getting the academics interested, starting with Karthik and the INRIA team, but then also, you know, Douglas Stebila and Cas Cremers and his group. Suddenly we had, like, a real bench of people you could go to. And Hugo. Oh, Hugo Krawczyk.

Deirdre: I can’t wait to find out.

Eric: Hugo Krawczyk. So we had a real bench of people you could go to, and you could say, what if we did X, Y, and Z, would that be okay? And then eventually get to the point where they say, well, we actually have verified this, and we have some level of confidence that it’s right. So that was the first thing. And the second thing, and I think this is largely due to Nick Sullivan, was getting a real community of early implementations. And so Nick sort of...

Eric: The IETF has been doing this thing called a hackathon, basically, where there’s a couple of days before the meeting. And Nick was like, we need to do basically an interop for 1.3, and really drove that. So suddenly we had people sitting there for two days actually trying to get interoperability. And I think those two things basically turned something which was a bit of a boutique effort into something where there was a giant cast of people all working on it and trying to wrangle it. I tried to sort of keep this going; we had a bunch of workshops very early on, and, you know, there was a lot of uncertainty at the beginning about what this thing was going to look like in general. And some things got kicked off that we actually wanted to keep.

Eric: I would say ECH being number one, which, of course, comes back later.

Deirdre: It’s back. Encrypted Client Hello, for those not familiar.

Eric: Right. Yeah. So maybe to touch on that for a moment. In TLS 1.2, none of the handshake was encrypted, meaningfully.

Deirdre: Yeah.

Eric: And this includes the two most important pieces of information in terms of privacy: the server certificate and the client certificate. And 1.3 starts the encryption much earlier, so that both those things are encrypted. But it turns out that encrypting the server certificate only gets you part of the way there, because literally the first message the client sends contains the domain name of the server you’re trying to connect to. So until you get rid of that, you don’t have much in terms of privacy for the server. The client is much improved.

Deirdre: Yeah.

Eric: And so everyone knew this from the very beginning, and there were a bunch of attempts to fix it, and no one could quite figure out how to do it without burning a round trip. And since part of the point of the exercise was to make it faster, no one was going to burn a round trip. And so it had to be thrown off the island. And then eventually it came back, once we figured out how to do it without burning a round trip.

Deirdre: And how do you do it without burning a round trip?

Eric: I mean, you prime it in the DNS, which everyone kind of knew. So going back, I kind of ask, how did we not know this? I think the real thing that happened was we knew it, DNS priming, of course, but no one thought it was plausible. And so the history of this is, you know, we had given up. We’d thought about it for a while, and there was an RFC published analyzing the space that kind of came to a negative conclusion, like, this is getting really difficult. And I was going around telling people, don’t expect an encrypted Client Hello anytime soon.

Eric: And then one day Matthew Prince, the CEO of Cloudflare, emails me, and he’s like, why don’t we just give Firefox a public key for Cloudflare, and every time you see a Cloudflare domain, you can encrypt the Server Name Indication? And I’m like, that’s a pretty good idea. But an even better idea is, once you realize you only really have to solve the CDN problem, then the CDNs control the DNS, by and large, and the problem starts to look more tractable. I think really no one thought there was going to be enough DNS priming from regular domains. But once you say we’re only really solving for Cloudflare and Fastly and whatever, it looks like a much more tractable problem. We pivoted from, we’re not going to do this, to, actually, yes.

Eric: We’re going to try to drive this forward quite quickly after that.
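The mechanism being described can be made concrete with a toy sketch: the client learns a public key for the fronting provider from DNS, puts only the provider’s public name in the cleartext outer SNI, and carries the real domain in an encrypted inner ClientHello. Real ECH uses HPKE (an X25519 key agreement plus an AEAD); to keep this runnable with the standard library alone, encryption is replaced with a symmetric stand-in, and every name here is hypothetical.

```python
import hashlib

# Hypothetical ECH config a CDN would publish in DNS. Real ECH carries an
# HPKE public key; this toy uses a shared secret so it stays stdlib-only.
ECH_CONFIG = {"public_name": "cdn.example", "secret": b"not-a-real-hpke-key"}

def _keystream(secret: bytes, n: int) -> bytes:
    # Stand-in cipher: SHA-256 counter-mode keystream. NOT real cryptography.
    out = b""
    counter = 0
    while len(out) < n:
        out += hashlib.sha256(secret + counter.to_bytes(4, "big")).digest()
        counter += 1
    return out[:n]

def make_client_hello(real_domain: str, config: dict) -> dict:
    """Outer ClientHello: the SNI is only the provider's public name;
    the real domain travels inside the encrypted ECH payload."""
    inner = real_domain.encode()
    ks = _keystream(config["secret"], len(inner))
    return {
        "outer_sni": config["public_name"],
        "ech_payload": bytes(a ^ b for a, b in zip(inner, ks)),
    }

def server_decrypt(hello: dict, config: dict) -> str:
    # The fronting server holds the config and recovers the inner name.
    ks = _keystream(config["secret"], len(hello["ech_payload"]))
    return bytes(a ^ b for a, b in zip(hello["ech_payload"], ks)).decode()

hello = make_client_hello("private-site.example", ECH_CONFIG)
print(hello["outer_sni"])             # a network observer sees only this
print(server_decrypt(hello, ECH_CONFIG))
```

The point of the structure is exactly the one made above: an on-path observer sees only the CDN’s public name, which is why the scheme only has to work for providers that front many domains and control their DNS.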

Deirdre: And, like, those are major players. They’re serving or fronting a large part of the Internet. Your Cloudflares, your Amazons, your Akamais, whatever.

Thomas: Fly.io.

Eric: Exactly. Yeah, Fly.io. No, I love that. I love that.

Deirdre: Fly.io. So, like, something is better than nothing, even if, literally, you have to have buy-in from a DNS-controlling provider, blah, blah, blah. On the other side of that, how do you feel about the fact that security lives in the gaps between abstractions? Right? We’ve got TLS, a cryptographic protocol that we’re negotiating at this layer of the TCP/IP stack, with all this crap on the application layer, and then we’ve got DNS up over here, and they’re kind of integrating. How do you feel about making this part of TLS rely on things that reach outside the abstraction level that you’re designing at? Because I’m just sort of like, it’s not the best, but it’s also the world that we live in.

Eric: Yeah, I mean, I don’t love it. If you read the document, there’s this long rationale of why it’s okay for this to be in the DNS even though the DNS is entirely untrustworthy. I mean, I think it’s just clear that we go and try to design these things to be perfect, and then you make contact with the real world and you suddenly have to make all these compromises. And one of those compromises is the fact that a huge fraction of the Internet just cannot be changed in any meaningful way. And so you can only operate in the zone of what can be changed. It’d be better if we had some other way to do this, but we don’t.

Thomas: I mean, we’ll touch on this later. But your take is, you go in with an idealized design, and then it hits the real world and everything goes haywire. Right. But the other way to look at it is, you go into it with a coherent design, with an actual service model, with some kind of formal rigor behind it, and then it hits the real world and it is no longer coherent. Which is a more significant thing to say, I think, in a computer science sense, than, yeah, it’s all fucked up because the world is fucked up, right? It’s more than that. In some of these cases, the designs we get are just incoherent.

Eric: Yeah, that’s right. I think the question is, can you maintain some coherency? Often you’ll see designs that, if you took them all together, seemed really sound, and then, when you realize all the compromises that got made, you have to ask whether the thing actually works the way it was supposed to anymore by the time you’ve made those changes. Maybe to circle back for a second: TLS 1.3 was sort of the second big iteration of the model we now see for protocol deployment, which is, you get together a club of people who are really motivated and control big parts of the Internet, and they just do it. And that’s how you go from zero to 50% in a couple of years, and how you get massive QUIC deployment and massive TLS 1.3 deployment and massive HTTP/2 deployment. HTTP/2 was the first generation of that, and I think Mark Nottingham and Martin Thomson and all the people who did that deserve a lot of credit for seeing how to make it happen. And then 1.3 brought in a bunch of the CDNs as well as the Googles and Facebooks of the world.

Deirdre: Yeah.

Eric: And you see QUIC following that same pattern, and WebRTC is a little like that, but not quite as much. But this is how things happen now, because you have these systems where there are four major browsers, and those people can roll out changes inside of six weeks. And when you add up the big CDNs and Google and Facebook, you basically control a giant fraction of the server traffic on the network. Amazon as well now, but Amazon is typically a slower mover.

Deirdre: Okay, interesting. I keep forgetting that TLS 1.3 was the first to be formally modeled. Well, you mentioned that these vulnerabilities were found in 1.2, with the first attempts to formally model it coming only late in 1.2’s life. And we only kicked off 1.3 five or six years after.

Eric: Right. I mean, because 1.2 was released, like, 2008, right?

Deirdre: Yeah, that sounds right.

Eric: And by the way, just to go back to the deployment point: 1.2 had almost no deployment. I joined Mozilla, I think, 2012, 2013. It wasn’t even in Firefox, and it wasn’t in Chrome. And one of the first things I worked on was trying to drag it into Firefox.

Deirdre: Seriously?

Eric: Because, when did GCM come in? GCM came in with 1.2. You were saying it was formally verified, though?

Deirdre: Yeah, I’m mostly just flabbergasted because there’s been so much analysis, especially for 1.3. There was this constant churn of updates every time there was a new version. And it’s kind of flabbergasting to me that we basically only barely did anything with 1.2 before we turned to 1.3.

Eric: Yeah, I think my sense of how that happened is two things. One, no one was interested in doing that for 1.2. I mean, 1.2 and 1.1 are very similar; 1.2 is basically 1.1 plus new hash functions. And so I don’t think anybody felt like... people felt like they understood the properties of that. Maybe they did, maybe they didn’t, but they thought they did. And 1.3 was obviously big changes. And people, I think, felt like it was good.

Eric: You know, A, there was fertile ground, and B, it was an opportunity to get in on the ground floor and do something that people cared about, that was meaningful.

Deirdre: Yeah.

Eric: And I think also there was a confluence: the tooling was better. By the time we did it, suddenly there really was tooling you could use for this.

Deirdre: Yeah, that’s a good point.

Eric: I mean, I would literally message Karthik and be like, could we try this? And Karthik would come back a day or two later and say, this seems okay, or, I would not do that. And I’d put together some little ProVerif model. That just wasn’t possible in ’08.

Deirdre: Yeah, that’s a really good point.

Eric: So I think we’re nearing a tipping point where everything is gonna be that way. And, you know, to flog something you’ve done: this thing where TLS is now going to have a panel of formal verification that everything can kind of go through, I think is incredibly important. I feel terrified every time I touch one of these things, and so I want to have a safety net. So I think that’s incredibly important.

Deirdre: Oh, yeah. Hopefully we’ll get this episode out before I send out the summary from the first round of that. We’ve actually done one iteration, which is great, because I was worried nothing would happen. One thing that you said earlier: we’ve had EasyCrypt, and we’ve had a couple of other computational tools that are just too much for a protocol at the scale of TLS. And so you move over into the symbolic provers like Tamarin or ProVerif, and I know they kind of showed up somewhere in 2008 to 2013 or something like that. And having people like Karthik and Cas and Douglas Stebila able to drive them really helps, because I know from my own experience that I can sort of make sense of these models, but I need to ask them, like, I think I did something useful, what do you think? One of the things you said earlier that stuck out to me is, oh, we thought TLS 1.2 was the end of history. We thought we were done.

Deirdre: Do you think we’re done with 1.3?

Eric: I think we’re clearly not done. The obvious thing is the post-quantum transition, where maybe we’re going to get lucky. I don’t know who I speak for, but I think I speak for a lot of people in just hoping the algorithms are good enough that we can just cross out the EC and write in module lattice and call it a day. I wish, but that doesn’t seem to be the case, necessarily, at least for the authentication pieces. And of course, God help us if it turns out those things aren’t secure; then we’re really in trouble. I would like us to stop messing with things so much at this layer. The question is, what do we have to do at this layer versus other layers? Obviously, it has to be good, and if there are important things to do, we obviously have to do them.

Eric: If it’s broken, we obviously have to fix it. But the question I would ask is, if I look at the security problems of the Internet, are these the biggest security problems anymore for us to apply our effort towards?

Deirdre: Yeah.

Eric: And in particular, part of why I spent so much time on ECH is I felt like we had these privacy problems to fix. And I think we’ve made relatively scant progress on those privacy problems, and the ones we have made progress on, we’ve made by managing to reshape them into comsec problems and solving those. That’s worked as far as it’s worked. ECH maybe will work. But are we going to solve the problem of IP addresses by reshaping everything into MASQUE? I don’t know. That seems like a pretty tough sell, that everything has to be proxied through two proxies in order to protect your IP address.

Deirdre: Yeah. What do they call it for Apple? Private Relay. And it’s basically their own spin on onion routing, sort of, ish.

Eric: I mean, it’s really nice work. It’s good work. But it’s just, can we really afford to have an Internet where everything is proxied like that? And if not, what do we have?

Deirdre: Yeah. Speaking of improvements, or where we can focus our work: I don’t remember what year it started, but to try to assure the web PKI, the certificate authorities that almost all TLS connections rely on in one way or another to authenticate the origin you’re communicating with, the server you’re communicating with, via a chain of trust to some certificate authority, we basically started working on certificate transparency. It’s a way for independent auditors to see whether a certificate they’re looking at was legitimately issued. At the very least, there is a promise that it will show up in a certificate transparency log at some point in the future. Can you tell us a little bit about certificate transparency?

Thomas: It seems simple, right? All you do is have all of the CAs sign all the certificates as they issue them and then publish them, and then the browser just checks the signatures on the certificate. It seems like a very simple thing to do. So we solved this problem, right?

David: It’s a Merkle tree. What could it cost? $10.

Eric: Okay, this is going to be a.

Thomas: Quick part of this episode.

Eric: I mean, it’s probably worth starting with the logic of this design before we talk about the implementation. The logic of this design is that what we’re concerned about is surreptitious misissuance. We’re concerned about misissuance that is undiscovered, and the assumption is that somehow, if things are discovered, we will fix them. Let’s just bracket that and assume that part works, because that’s a whole different piece of pain. And so the underlying assumption of the design is that we will find a way, as Thomas says, to make every certificate in the world published. And I usually use the analogy that there’s going to be a giant laser that every CA has, and when they issue a certificate, they’re going to inscribe the certificate on the face of the moon. And then when you get a certificate, you’re going to pull out your telescope and look and see it on the face of the moon.

Eric: If it’s not there, you don’t accept it. So that’s the first half, and that’s the part that ensures that it’s published. And then the second part is that everyone who has an origin verifies that there are no certificates for their origin other than the ones they expected, right? So if you’re example.com, you go look, and you’re like, why is there an example.com certificate with a key I don’t recognize? And if there is, then again you complain, and hopefully something happens. What happens then is, again, a little confusing. So that’s the logic of the situation, and it’s a sound logic. We see that logic being exported to other things, especially end-to-end messaging. The implementation, I think, is where it gets a little hairy, in that there are sort of two ways to build this design. One is what you might call a centrally trusted design, and one is what you might call a zero-trust design.

Eric: And so in the centrally trusted design, you would basically say, as Thomas indicated, you publish your certificate somewhere, and the place you published it would be responsible for guaranteeing that it was available. The easiest version of this is that, say, Cloudflare operates a giant database, and whenever you create a certificate, you publish it to Cloudflare’s database. And then my job when I want to validate the certificate is to go and look in Cloudflare’s database. And you can obviously optimize that with Cloudflare signing — double-signing — it. Then your job as someone who has a domain name is to check Cloudflare’s database occasionally and make sure the things in the database are what you expect. This obviously requires trusting Cloudflare. And if Cloudflare — or the log, as we say — decides to cheat, if the log lies, that’s the threat. The way the log lies is that when I, the relying party, go to check, it says, congratulations, here’s the certificate, it’s included. And when you, the origin, go to check, it says no such certificate exists. So you don’t see the surreptitious one, but the relying party does.

Eric: And obviously that may be a little hard if it’s a big lookup table, but if they’re signing things, it’s much easier. So that’s a countersignature scheme. But the design of CT was not a countersignature scheme initially; it was something else, what you might call a zero-trust scheme. And the way the zero-trust scheme worked was, effectively — I’m going to use the bad word — it put all the certificates on a blockchain. Not a conventional blockchain, but a signed chain of blocks.

Deirdre: Yeah.

Eric: And the idea was that there was a Merkle tree — that’s the technical term here, as David just said. When you got a certificate issued, you would also get a proof that it was included in the Merkle tree, which you could hand to people and they could validate. And then the relying parties had to somehow get the current state of the Merkle tree, so they could validate that they had the same Merkle tree state as everybody else. The Merkle tree gives you a compact summary of the state of the world. So the idea was that there was a compact summary you could somehow learn, and then you’d have a chain of hashes that chained back to that compact summary. And as long as you had an efficient way to deliver that proof and an efficient way to deliver the compact summary, you’d be good to go. So this is a very clear design, and then it runs into a piece of reality.
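
[Editor’s note: in code, the mechanism Eric describes — a compact tree head plus a short hash chain proving one leaf rolls up to it — looks roughly like the following minimal Python sketch. This is illustrative only, not the RFC 6962 wire format; the leaf/node domain-separation prefixes are borrowed from that spec.]

```python
import hashlib

# Minimal sketch of a Merkle tree: the tree head is a compact summary of
# every logged certificate, and an inclusion proof is the short list of
# sibling hashes needed to rebuild the root from a single leaf.

def _h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def tree_head(leaves):
    """Fold a list of leaf values into a single root hash (the summary)."""
    level = [_h(b"\x00" + leaf) for leaf in leaves]
    while len(level) > 1:
        if len(level) % 2:                      # duplicate the last node on odd levels
            level.append(level[-1])
        level = [_h(b"\x01" + level[i] + level[i + 1])
                 for i in range(0, len(level), 2)]
    return level[0]

def inclusion_proof(leaves, index):
    """Collect the sibling hashes on the path from leaf `index` to the root."""
    level = [_h(b"\x00" + leaf) for leaf in leaves]
    proof = []
    while len(level) > 1:
        if len(level) % 2:
            level.append(level[-1])
        sibling = index ^ 1                      # the node paired with ours
        proof.append((level[sibling], sibling < index))
        level = [_h(b"\x01" + level[i] + level[i + 1])
                 for i in range(0, len(level), 2)]
        index //= 2
    return proof

def verify_inclusion(leaf, proof, root) -> bool:
    """Chain the leaf hash through the proof and compare with the tree head."""
    node = _h(b"\x00" + leaf)
    for sibling, sibling_is_left in proof:
        if sibling_is_left:
            node = _h(b"\x01" + sibling + node)
        else:
            node = _h(b"\x01" + node + sibling)
    return node == root
```

The point of the design is in the sizes: the root is one hash, and the proof is O(log n) hashes, so a relying party that somehow knows the current tree head can check a certificate without downloading the log.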

Eric: And the piece of reality is that certificates are issued now in the timescale of seconds.

Deirdre: Yep.

Eric: And this whole compact-summary thing depends on folding a pile of certificates into one thing, gluing them all together, and publishing that summary — because in order to validate the proof, I’ve got to have the summary. So the thing we ran into is that you couldn’t have both zero-trust verification and immediate issuance in the same design. At the time, at least.

Thomas: This is like uncannily like the bitcoin problem. Right.

Eric: Or the transaction-confirmation problem — very similar to the bitcoin problem, yeah. These are all the same kind of physics invariants, right. And so the problem got solved by building in, effectively, a countersignature scheme. The way the countersignature scheme works is that the log promises it will include the certificate: it gives you a receipt that says, I promise I will include this in the Merkle tree eventually. And that’s enough to make the certificate valid at the time.

Eric: But then the problem becomes: what if the log lies and doesn’t include it? And that has a whole bunch of other—

Thomas: Right. So in the original coherent system, without countersignatures, all you have to trust is that the certificate authority is publishing their certs and driving the protocol, right? But in the countersignature system, you have two different entities that can now be messing up: the CA, or — with this countersignature, this promise to include it in the next block in the future — the log could also lie at that point.

Eric: Yeah, I think the assumption in that case is that the log and the CA are colluding: the CA is misissuing and the log is lying about inclusion, right. And so in the original design, there was some unspecified but presumably strong mechanism by which everyone had group consensus about the state of the tree. As long as that mechanism was secure, there was no way to have the split view where a thing was apparently published but not published. But in this case, all that has to happen is the log has to lie. And so there have been a bunch of attempts to unwind that, by a combination of things like retroactively checking that things were eventually included in the log when they were supposed to be. That’s one thing.

Eric: Chrome does some of that now. Yeah.

Thomas: And the thing we’re talking about here is why we have SCTs as opposed—

Eric: To inclusion proofs. The SCT is the receipt.

Thomas: I feel like when I have discussions with people who are skeptical about the efficacy of Certificate Transparency — people who are not super TLS literate — the gap I tend to see is people not getting that half of this system is certificate authorities publishing and there being a log, and the other half is that the browser is modified to go check for some artifact of that publication. And in a perfect world, that artifact would be a proof of inclusion, right — something that rolls back to the signed tree head or whatever.

Thomas: But in reality, what it is is an SCT, which is a signature over a promise to include this in the next block.

Eric: Precisely. Yeah. And, you know, we’ve been digging deep here, but let’s zoom out for a second: this is a big improvement. Even in this countersignature scheme, it’s a big improvement, because now, in order to cheat — the current log policies typically require, I can’t remember if it’s two or three, SCTs from two or three different logs.

Eric: And so now, if you want to misissue, you’ve got to convince two other parties, who have no real financial interest in doing it, to lie about whether things have been included in the log — number one, and that’s pretty hard. And number two, a lot of CA misbehavior is not malice, it’s just errors. Now, those errors are often a sign that things are wrong further up the chain, and those things are brought out by CT effectively — almost even if you didn’t have the SCTs — because things just end up in the log. And so we’ve found a lot of problems — well, we, the community; I haven’t personally — a lot of problems in CA behavior just by log examination, where no one made any attempt to cover anything up.

Eric: So this is like a big improvement. So I think this sort of, if one wants to offer a critique of like, CT based on this kind of conversation, the critique would be that there’s a lot of infrastructure in CT that, like, maybe you didn’t actually need, because you could have gotten away with a simpler scheme that had the similar security properties. That’d be the critique. I would. I’m sort of implicitly offering here, but I want to be clear that, like, this is actually adding quite a bit of value.

Deirdre: Oh, yeah.

David: And Chrome is auditing scTs. So, like, not only would you need to get to lie, would you need to get multiple logs to lie, you would then need to block forever any Chrome from being able to run that audit and determine that that SCT was not there. And then that same auditing goes back through the same privacy infrastructure as safe browsing to make it back to Google without revealing your specific browsing history. Like an unincorporated SCT will eventually be caught.

Thomas: So the Chrome audit process is interesting. One of you should rattle off how that works.

Eric: I mean, you should, David, since I’m sure you understand it better than I do.

Thomas: Yeah, David.

David: It was finished by Chris Thompson around the time that I joined, so I don’t have the details down. But he did write a full paper about it that went out to, I want to say Yeastnicks or Oakland in 2022, but it uses the same hashtags thing that safe browsing uses, if you’re familiar with that. That’s how it goes through. I’m not super familiar with that, but that’s how it, like, reports back. And then Chrome just tracks all of the scts that it sees and then occasionally pulls down the component infrastructure in Chrome is able to push down tree heads and then the individual browser clients can verify the SCT against the treehead, if I recall correctly.

Eric: Yeah.

Thomas: It feels like a problem that you keep running into with CT comes up elsewhere too, with revocation. Right. But a problem that you keep running into is the simplest way to solve things is to have a browser go talk to something, right. Go post some bit of telemetry and then we’ll drive off that, right. But you can’t do that, right. Because what you’re browsing to, like what things you’re talking to are, is incredibly sensitive, right? So, like, you have this problem of like, Google tracking this stuff, um, you know, Google doing an audit of scts without giving it more telemetry about what people are talking to. Yes.

David: Which is why we have all the. The. Why there’s a delay in determining, like, if SCT was missed.

Eric: That’s actually, I think maybe we talked about the limits of transport encryption a little bit earlier, maybe an interesting pivot into TLS 1.3 and quick. And all these things you see now and ECH are all like decades old cryptographic technologies. There’s nothing in these systems that you couldn’t have built the minute you need to know it. Diffie Hellman and certainly the minute, you know, outlook to Kurt Diffie Hellman, right, and what started to happen over the past 510 years is along with this revolution in how fast we can verify is a revolution how fast we can pull in new cryptographic technologies into the Internet, right? And if we put like the blockchain world aside, which have been incredibly fast about adopting things, they have zero knowledge proof, especially things like that, right. We’re starting to see is a faster adoption of sophisticated technologies, things that are only a decade old into Internet protocols. And so blind signatures. Be a good example. Vo PRFs are a good example.

Eric: And you’re starting to see those really integrated into modern protocols, especially private relay. We’re talking earlier has some of the stuff in it, right? The sort of latest thing that we started seeing is all these new private data aggregation technologies, preo and poplar and things like that that are now being standardized. So if I were to flog where I think the next frontier is it’s private information retrieval, where we’re just starting to see techniques for having efficient retrieval of data without telling the server what it is you’re retrieving. And so the scheme you were alluding to earlier that safe browsing uses is sort of like weak, soft private information retrieval, right? So the cheapest private information retrieval scheme in the world is you just, sorry, not cheap. It’s the obvious one is you just send me all the data and then I have it and then I like look it up in the database, right? And that’s like not very efficient, but it’s like naive, right? So the next most naive thing is what safe browsing does, which is basically you effectively bin the data into bins. And then I say, well, give me one of the bins. And then exactly the same situation. As long as the bins are sparse, then it works.

Eric: Okay. But real Pir doesn’t tell the server anything about the data you’re retrieving. And we’re just now starting to get to the point where we have not, I wouldn’t say efficient, but maybe plausibly efficient for certain scenario PIR schemes. And we’re just trying to see the point where people start to think about using them. And so I think we’re. I guess if I were to flog something, I’d say this is a place to watch.

Thomas: Yeah, it’s interesting, right? Because when I think of Pir, I think about like Apple and Google, right? Like I think about like, that’s a giant like forklift thing that they’re doing in their infrastructure to like, you know, make promises to customers about how they’re tracking the data and all that. Right? But the way you’re thinking about the way what I’m hearing from you is really if you can nail the PiR, like the cryptography and the basic, you know, tech stack for pirates, like if you had that to begin with, with CT, CT would look different.

Eric: Yes, it would.

Thomas: Right? It would be a simpler and probably a more coherent design. If you just had that one little building block to build on.

Eric: I think that would remove a lot of the issues because a lot of the issues Dave was leading to about CT are about how you basically get the current state of the tree. Right. We still have the latency, the latency issue. But that issue, I think has become less important over the years. And I think if you look at designs like Mercury search, they kind of just assume latency isn’t much a problem anymore. Right. So I think there’s a primitive like we could use all over the place, you know, I think that that’s, I guess what I’m hoping we’re going to live in in the next ten years is a world in which, you know, because of the relationships that have been forged between the cryptographic community and the academic community and the protocol community, that instead will have a feedback loop where we say we like something they say was almost here, we play with it and then it’s like suddenly here. Right?

Thomas: Yeah. We should loop back onto the relationship between academic cryptography and, you know, standards group.

David: And as long as you don’t ask for smaller post quantum cryptography, they’re fine with it.

Eric: I don’t know why those guys won’t do that. It’s just like, why don’t they just.

Deirdre: Try harder nerd harder cryptographers.

Thomas: Before we run off the end of CT, I guess I have a couple of questions for you. Right. This is obviously coming from a write up that you did that kind of picked an interesting fight with some of the Google people in a good way. It was a positive thing, um, with, you know, kind of the people working on this, on Google side of things, right? So like we have this basic like this layer of indirection that we’ve added with scts which introduce all these weird problems and like, you know, enable colluding and all that, right? But there’s a window of time within which that matters. Right? Like the fundamental problem that we’re trying to solve is it takes too long to get a certificate fully included in the logs, right? And now we have ATME. Like people just expect certificates to issue instantly, right? So like that’s why that layer of indirection is there, but that’s only meaningful for the time interval during which that’s the case. Right? So in theory you could have a design where you have an SCT thing that then just gets overruled. But I guess I’m not thinking about this very carefully.

Thomas: Right. But I’m wondering why servers can’t just like staple this.

Eric: Oh, David, I see you like twitching there, so you want to jump in?

David: Oh, well, I mean, I was just going to say this goes back to being 2014. Like at the time, the idea of like I wasn’t specifically there, but like the idea, the constraints around CT hitting the real world at the time was, ah, shit. We’re not going to be able to incorporate these things fast enough. So we’re going to have to have a merge delay, so we’re going to have to have scts in practice like this. Just like, isn’t true, isn’t true anymore. I don’t know, like how true it actually was. Like whether this was something that we thought was true at the time because we were trying to, we thought there would be way more CT logs than there turned out to be and that there would be less of a kind of heroic effort to run. There’s less than ten CT logs in practice as opposed to something that every CA runs as well as we thought.

David: Oh, we’re not going to be able to incorporate, be able to give a response to a certificate fast enough without scts. And so immediate incorporation was kind of seeded in exchange for making this something that was actually deployable. But if you go and look at Filippo sunlight logs, like, it still has scts, but the SCT actually only exists after the certificate has been included in the logs. And so this was really just an artifact of like, what the performance characteristics were at the time versus like, where we ended up with. And like, one thing like I repeatedly find myself doing at Google is someone will come to like, chrome security and be like, I have a problem. And I think it could be solved by a transparency log. And then we tell them not to use the PKI’s transparency log, then we tell them not to use scts. And then usually whatever their problem is, actually they’re like, there must be a different shape of a problem we can get where we don’t need a transparency log because like we’ve just been discussing this whole space is pretty hard.

David: And so scts are really an artifact of a coherent design hitting the real world and then the constraint being performance, like, at the time, we can’t do this fast enough.

Thomas: So the thesis here is there was an original design with merkle trees and a lot of mechanisms, um, that was sort of based on like a coherent idea of signatures being signed. And then we had this layer of indirection which may or may not be required. Right. We have this like counter signature scheme, like the end of your like second post, that, which is, it’s awesome. We should have a link to it. It’s a really good, like, it really helped me kind of understand the dynamics of the system that I kind of just understood in the abstract before. Right. So it feels like there’s a take that you have that like, there was a simpler design here.

Thomas: And it’s kind of, it’s obvious what that simpler design was. Just like a pure commerce you know, signature system. And I’m curious because I don’t have this, although all the dots connected to my head. What is that simpler system that you’re talking about looking look like? End to end.

Eric: Yeah.

Thomas: Like if we just got rid of the mechanism you don’t like.

Eric: Yeah, yeah, yeah. So I think there actually are. I guess I would say there are. What we have now is an easy compromise between an attempted zero trust scheme and what is actually that has a counter signature scheme hiding inside it. The counter signature scheme hiding inside it is throw away the merkle trees and you just do scts and nothing else. No one bothers to audit that. The logs are obvious behaving and you just live with. The assumption is basically that getting three log, three logs to lie is not a practical.

Eric: Is not a practical attack. Right. That’s the simple scheme hiding inside it. Right.

Deirdre: You’re just signing signatures, basically, literally just.

Eric: Scts, exactly the same thing you have now, but no more call trees. And then when you want to audit, like your name doesn’t appear in the log, there’s something where you just like, it’s a rest call and you just download all the certificates. There’s no.

Thomas: Oh, this makes. This makes a lot of sense. So this is basically a stochastic pinky swear, right?

Eric: Yeah.

Thomas: It’s like a single pinky piece where is not valuable because that person could lie. But if you get enough pinky swears, then, like, you’re eventually asymptotically converging on something you can trust. And that’s a really simple design.

Eric: Yeah, that’s saying that’s a simple design. And I think that the, you know, it is less good than the design that sort of David is talking about that actually has like, validation. It is just simpler. And so I think, you know, I have to admit, when I started writing this, I did not know Chrome was doing anything, because I read the early paper that had chromed it is that they sort of looked at this and kind of given up. And so I think that, like, the property of Chrome scheme is interesting. I’ve been sort of struggling trying to figure out the properties of Chrome schemes. Chrome scheme is because in many respects, it’s actually trusting that Google isn’t going to, like, lie about the context of the log and.

Thomas: But then it doesn’t make any sense for Google to lie.

Eric: Right.

Thomas: Because they control Chrome.

Eric: No, sure enough. But then why don’t they just issue all the sats that would be done, right. I mean, I guess this is what I’m sort of saying is like that the, if it is, if what you’re trying to do is a fully public verifiable system where nobody can lie, that’s not what we have. And we have something intermediate between that because we have Google doing this. But does anybody else benefit from that? I’m not quite sure how. Right. And so I think the interesting question to ask. So I think there’s like no one I think is going back to counter signatures as the only thing.

Eric: Like that’s clearly not the way people want to play. The interesting question to ask is given that some of the constraints that we were just talking about have been relaxed, that maybe you can sign much faster, maybe you can have a local corporation much faster, maybe we can find a way to look things up without having to like burn a bunch of like privacy, privacy budget. Um. Is there a new design which has the, which meets the zero trust aspirations that CT had but actually is simpler in some ways and I think that you see, see groping towards that.

Thomas: But this is like uncanny. So you have um. You’ve spent your whole career deeply engaged with the idea um. With standards work and like shepherding things like this. Right? Um. So like. And I’ve spent my whole career throwing stones at that process and not being involved with it. Right.

Thomas: But it’s just like. It’s uncanny how. So this is a successful design, right? Like where you know for one thing like the, like the gap between the original coherent design and the, the model that kind of emerged that you’re talking about in your posts and all that. Right. Um. It’s potentially closing anyways just because technology is getting better and like the new CT logs will be able to do like you know um. Quick merges and things like that. Right.

Thomas: Um. But like the exact same kind of phenomenon with like we had a service model and the service model wasn’t quite right and it was designed around like the technology that we had available to us at the time. But then because the ITF and because the industry really uh takes so long to put things in the field like that, the landscape is different by the time it meets reality. You’re talking about DNSSEC, right? Like this is exactly the same thing. So like here with, with CT it’s just like it’s, it’s the SCT model versus direct signature stuff or the privacy aspects that are whatever. And DNS sec, it’s like online signer. It’s like are we building a protocol where the servers could actually do cryptography on the fly? Or because it’s 1995 you can’t really expect anybody to run any kind of encryption algorithm on demand because it’s too slow. But it’s the same thing where a lot of the incoherence and the weirdness of the protocol one hits the real world.

Thomas: Is that phenomena? It just keeps happening inside the IETF.

Eric: Well, I do want to briefly defend the IETF on the CT front in that all those compromises were actually made by the original CT team and not by the ITF. The ITF, the ITFCt version is everybody’s good or everybody is bad. It’s not CT submission. So, like, while the IETF has bungled many things, it didn’t bungle that one.

David: It’s difficult. The interaction between browsers and CA is whether or not it’s the CA browser forum or the IETF is involved is simply an inherently very complicated dynamic for several reasons. And so SCTs were an outcome of responding to CA’s concerns that they needed to be able to do effectively immediate issuance.

Eric: I think the DNS, I love this DNS segue, though. That’s a great segue. I think you may be a little bit being too generous. There definitely was a question to ask. Always in retrospect when you have these designs is how many of the constraints were real and how many constraints were imagined, right. And it really was true that like digital signatures were like incredibly slow and like painful to do online. And then there was also. But there’s also like this thing about how we didn’t want to have the keys, like online in the devices because reasons, right.

Eric: And I think that was much more of an imagined constraint than a real constraint.

Deirdre: I feel like you need to do that to use them to sign things.

Thomas: Yeah. So I want to be careful about how I dunk on standards processes in general, and I should also generalize, right? Like, my moral enemy is standards processes and not the IETF per se. IATF, I’m sure, is one of the better ones. Right. But like, the funny thing about that is it’s true that like, in the original DNS, you know, in the original DNS X standardization stuff, like some of those concerns were real at the time, right. But you didn’t get it anyways, right? Like the benefit that you were going to get from offline signatures, from offloading the cryptography from like the hot path of the protocol, right? Like it didn’t get deployed, so that by the time it did get deployed, that no longer made any sense. And it contorted the whole design of this thing. Like every problem got drastically harder because of this one constraint that you put in because like, I don’t know, like, it just seems like.

Thomas: And again, I think it’s a shared thing with the industry and with the standards groups. I also think it’s real easy for me to say this stuff. But yeah, it’s like you couldn’t look ten years into the future and see what this is going to actually look like by the time it actually gets deployed. And we were talking earlier about the lag it takes for standards to get done, saying quickest five years. That sounds fast to us. So it’s like if you kind of go into this expecting that it’s not going to be widely deployed for ten years anyways, and that’s where you should be looking.

Eric: Agreed. Yeah, I wish I had it out because I have this talk that is going to turn into a newsletter, probably more than one newsletter post, actually, about how to design protocols that get deployed. Based on my experience, both succeeding and failing at that. The frame for this is actually the alternate reality where instead of TLS and the web, PKi and quic and whatever, what we instead have is DNSSEc and IPsEc, and everything runs over DNS, SEc and DNS. Everything was ever IPsec. And that obviously is not the reality we find ourselves in. And how do we get into Earth one? And I think that the argument he made that that’s actually a more sane, coherent design than the giant mess of things we have now that is fixes on fixes on fixes. But then you just see there was not a path to get to where that there from here.

Eric: And my sense is, a lot of things we learned is don’t design in a vacuum. Right? And I think a lot of that designs were done in a vacuum. And if you look at the, you know, look at the CT design or the quick design or the River TZ design or the H two design, you know, those designs were done in collaboration with people who had real deployment incentives and really were going to try them really early. And so you got very, if there were feedback to say the thing wasn’t going to work, you got that feedback early and then you had to respond to it. And my sense is, you know, if you look at TS 1.2 and we did it, we designed that entire thing without anybody. There are no implementations. We just papered. It took years for people to implement it.

Eric: And if you look at 1.3, we had implementations from the very beginning, basically, even when it was basically just barely existing drafts. And so I think that DNSSEC design was built on these type of assumptions that people just understand weren’t right because they had to make contact with the enemy. They couldn’t tell. The other thing I think that you’re being a little too nice about, actually, is that in the world where you do have something like the web PKI, the actual value proposition of DNS SEc is quite weak. Even if they were supposed to do. If I lie about the IP address but the person has a certificate, it’s basically an attack. Basically, it stops them from connecting at all, but it doesn’t let them impersonate the server. So when you go back and you say that, um, you know, you say like, well, did I like how I have all this apparatus that doesn’t, again, it doesn’t.

Eric: That only provides you a very small amount of value when the, when the real problem is, you know, the confidentiality of the DNS, of the DNS connection more than the integrity. Right.

Thomas: You know, I mean, obviously I think all of us in this conversation are aware of, you know, my take on DNS sec, right. Cause I’m very noisy about it, but like, I’m kind of stipulating, like, just stipulate that it’s solving a problem that’s worth solving. Right? And actually, when you talk about Dane, you’re closer to a thing where there’s a live conversation to have about, like, you know, whether that’s a real thing, right. But I’m still kind of stuck on just a fundamental, like, it’s the outcome of the design itself, right? Assume that the problem statement is good. And like, where you landed with DNSS is messed up in a bunch of ways. Right? So it’s like, there are reasons why it’s not, like, deployable. And we’re talking, for instance, about all this, you know, this stuff that we’re running into with the design for certificate transparency, solving the problem of, you know, um, attackers can suborn the central authorities that issue certificates, right? And the reality is that the overwhelming majority of DNS sec, DNS sec signatures are also centralized, right. They’re done by registrars or whatever, right? So you have the same problem there.

Thomas: And like, if you bring that up in an argument with somebody who supports DNS sec, right? Like, the answer they’ll give you is, well, we’re just going to make, you know, certificate transparency for DNS sec, right? And for a lot of reasons, maybe I’m wrong about this, but I’m going to say this confidently for a lot of reasons, that’s not going to happen. Right. Like you don’t have any of the incentives to get that worked out. But also you’re going to run into the same problems that CT is running into and they’re harder to solve in the DNS context.

Deirdre: Can we review how DNS sec works and how Dane works and why we’re all groaning at the thought of transparency logs for DNS sec?

Thomas: Well, I mean, it’s also possible that Eric disagrees with me.

Eric: I think I mostly, mostly agree with you. I mean, so like the DNS is a hierarchical system, right? You know, so if you have, you know, if you know, everything starts with dot and then dot, you know, owns, owns all the names under it and they assign who gets.com and then assigns who gets example.com. and if you were to have like, you know, monkey dot example.com comma that they would assign who got example.com would assign who got monkey, right? And so what DNS sec is, is like the dead obvious thing, which is that though each of those assignments is signed and the assignment is obviously like effectively certificate, it’s a signature over the name and it’s got a key, right? And there’s like a giant amount of like bananas infrastructure to wedge this into DNS when never intended to go in, you know, from like an era, frankly, when people didn’t really know how to design protocols that like work sensible. And so like you really had to force fit things in a bit. Uh, you know, I’m not sure we know how to design with a sensible nav, but like we’ve learned some things about how to design them, they’re sensible. And then at the very end of it, like you have a bunch of records associated with zample.com or whatever it is, the leaf node, and you sign the records. And so that’s very straightforward kind of design modulo all the preliminary problems that Thomas is mentioning, right? And so the natural thing to say once you have that is why doesn’t one of the records that I put in there just be the public key of the server? I mean, I wouldn’t need the webp at all because it’s global key to the server. And so what Dane is, and it’s like that was like people had in their heads I think at the very beginning, right? And then by the time that like there was like substantial DNS deployment, I guess for some value, substantial.

Eric: — we also had massive web PKI deployment, and so they were confronted with the world as they found it, as opposed to the world they ideally wanted, and they had to figure out what to do. And I think if you read the indictment that the DNSSEC people would make of the web PKI, it has two big problems. One, as you were alluding to, Thomas, is that it’s too insecure: there are a zillion, they would say, certificate authorities, and any single one of them can issue a fake certificate. And the second is that it’s too expensive and too high friction, which is to say that you had to pay the CAs for certificates.

Thomas: A third thing they would bring up is X.509.

Eric: Oh, true enough. I always find these arguments about spelling to be quite tedious. All protocol encodings are awful.

Thomas: Yeah. It’s funny how big a deal that is if you go read the working group stuff. One of the big enemies there is that X.509 and DER are culturally foreign to the IETF. So one thing they wanted to do is push that out and get something that’s a more IETF-y kind of design. But I’m also muddling the point you’re making. The fundamental issues anybody would — if you’re going to be especially fair to the DANE people… right, the DANE people, that sounded bad, but whatever.

Thomas: If you’re going to be fair to the argument here, right? The things anybody would point out about the web PKI: one, too many issuers, no way to trust it, right? And two, it’s expensive — or was, at the time when DANE was a big thing; it was expensive to do it.

Eric: And these are both legitimate points. I just want to concede these points right off the bat. Right. And at the time — at the time, yeah, at the time.

Deirdre: Now you can get all the free CA signatures you want.

Eric: And so I think this is just a great example of path dependency on the Internet, right? When faced with these realities, there were two approaches you could have taken. One is: let us burn the thing down and start from scratch. And the other is: how do we add a new patch which makes the system better in these directions? And so basically the two patches that were applied to the web PKI, made at about the same time as DANE, are Certificate Transparency and Let’s Encrypt: Certificate Transparency to solve the too-many-issuers problem — or at least largely solve it — and Let’s Encrypt to solve the free problem.

Eric: And, like, again, I think there’s a fair case to be made that these are not as elegant designs as the top-down issuance. But the great thing is they’re incrementally deployable. CT obviously so, and Let’s Encrypt was designed from the very beginning to be incrementally deployable, by getting a countersignature from an existing CA, which is the only way to roll out a new CA. And so, to circle back, what DANE is, is not quite the obvious thing.

Eric: What DANE is, is partly the obvious thing and partly not. It basically has, conceptually, two modes. One mode is: here is a signature over a hash of a certificate which ordinarily would not validate up to one of your trust anchors — but trust it anyway, because I’ve signed it with my domain name key. And the other version is: here is the hash of a certificate which must appear in the chain, but it also must be valid according to the web PKI. So one version of this is effectively kind of like pinning, which is to say: here’s a list of CAs which you must use, or keys you must use. And that’s designed to solve the too-many-CAs problem.

Eric: And the other version says: here is the thing which must be in the chain, but don’t even worry about whether it’s CA-valid; that’s designed to solve the free problem. And neither of these, unfortunately, has quite as compelling a value proposition as you might like. I think the easiest one to reason about is the free one. Say you are, in fact, tired of paying the CA for your certificate, and for some reason you can’t use Let’s Encrypt — or this is before LE, right? So you say: great, I will get this DANE thing. Now, the problem you run into is that your web server needs to work with every web browser in the world, and many of those web browsers do not support DANE. And until the time when every web browser supports DANE, you still have to pay the CA and get a certificate.
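The two DANE flavors Eric contrasts correspond to the “certificate usage” field of a TLSA record (RFC 6698): usage 3 (DANE-EE) trusts the DNSSEC-published hash alone, while usage 1 (PKIX-EE) additionally requires normal web-PKI validation. A minimal sketch of that matching logic, restricted to selector 0 (full certificate) and matching type 1 (SHA-256), with a hypothetical certificate blob:

```python
# Hedged sketch of TLSA record matching for the two DANE modes discussed:
# usage 3 = DANE-EE (DNSSEC-signed hash is sufficient),
# usage 1 = PKIX-EE (hash must match AND the chain must be web-PKI valid).
import hashlib

def tlsa_matches(tlsa, cert_der: bytes, pkix_valid: bool) -> bool:
    usage, selector, mtype, assoc_data = tlsa
    assert selector == 0 and mtype == 1  # full cert, SHA-256 (for this sketch)
    digest = hashlib.sha256(cert_der).digest()
    if usage == 3:   # DANE-EE: solves the "free" problem, ignores the CAs
        return digest == assoc_data
    if usage == 1:   # PKIX-EE: pinning-style, solves "too many CAs"
        return digest == assoc_data and pkix_valid
    raise NotImplementedError("usages 0/2 (trust-anchor forms) omitted")

cert = b"hypothetical DER-encoded certificate"
record = (3, 0, 1, hashlib.sha256(cert).digest())
print(tlsa_matches(record, cert, pkix_valid=False))  # True: usage 3 ignores PKIX
```

Note how the same association data gives opposite answers under the two usages when the chain doesn’t validate — which is exactly the difference between the “free” mode and the “pinning” mode.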

Eric: And so you get no benefit from doing this for years and years and years to come. Right. So I think that is basically a fatal flaw in any of these sort of free versions. And when Let’s Encrypt was founded, we took pains not to create that situation for people: Let’s Encrypt was valuable from day zero, right?

Thomas: Yeah. This is Adam Langley’s big argument about DANE and browsers and all that: it doesn’t bring you from a million CAs down to one. It brings you to a million and one.

Eric: Yeah.

David: Yeah.

Thomas: Because you’re never going to get to the point where you can just run Dane.

Deirdre: Right.

Eric: Yeah. I think the time when you are in a DANE-only world is very, very far in the future. The pinning thing, I think, you could argue has a little more value. It’s just very hard to actually persuade people it’s worth doing all this heavy lifting, against the backdrop of CT already existing, right — and against the backdrop, frankly, of any misconfiguration of your DNSSEC causing your site to become unreachable. It’s like: really, I’m going to marginally decrease my risk surface area by making my site potentially unreachable

Eric: if I screw up? I mean, this is all kind of a tragedy, because if we had DNSSEC, there are lots of cool things you could do with it, and I’ll just give you one: you could shave a round trip off your TLS connections, because you could put the public key of the server in the DNS and then you can encrypt directly to it from round zero. But you can’t, because DNSSEC doesn’t work now. I mean, could we sneak around and have it web-PKI-signed or whatever? Maybe; that might potentially work. Another thing you could do is delegation: you could delegate my domain to other domains. Right now, think about it: say you want to have your site hosted on a CDN. How does that actually work? Well, the way that actually works is that you put in a CNAME record which says example.com goes to, like, mumble-mumble.fastly.com or amazon.com, right? And then Amazon has to have a certificate for your domain name, not for mumble-mumble.amazon.com, because no one can trust the CNAME records, because of the DNS.

Eric: But if you could trust DNSSEC, then those would be secure and you’d have a different scenario. And so none of these is, like, the greatest feature in the world. But we are constantly fighting with the insecurity of the DNS, and not doing things we otherwise would do because the DNS is insecure — and there’s just no incremental deployment path from where we are now to there that actually provides value all along the chain to get us there.

Deirdre: Can I interest you in a one-and-a-half-kilobyte signature for your DNS?

Eric: Well, exactly.

Deirdre: For your post-quantum security.

Eric: Well, how about two —

David: — and a half kilobytes?

Eric: Yeah. Oh, we haven’t even talked about this, right? There isn’t much DNSSEC validation on the Internet at all; nearly all of it happens at recursive resolvers. So the way DNS works, DNS gives you a bunch of records which end up on your machine. But almost nobody talks to the authoritative DNS servers themselves. They talk to some local server, which is called a recursive resolver, which does all the DNS resolution for you, traversing the hierarchy.

Eric: And lots of recursive resolvers do the validation. So far so good on that. But that doesn’t get as far as the client, and all these features we’re talking about require the client to validate, not the recursive resolver — because otherwise the recursive resolver can just lie to you, and that’s no good, because the recursive resolver in many cases is provided by the airplane that you’re sitting on. And, like, why do you trust those people? And every time anyone has tried to seriously study the question of whether you can actually retrieve all the signatures, all the way from the root down to the client, the answers come out really bad. Adam Langley did something about this quite a while ago; back when I was at Mozilla we studied this a little bit, and the answers were just catastrophically bad. And it wouldn’t be any better, by the way, if you had two-and-a-half-kilobyte signatures.

Deirdre: I can imagine. Let alone the fact that — what’s the actual uptake of elliptic curve signatures in DNSSEC? I think it’s quite low, and they’re really poor parameter sizes — I think they’re like P-192 or something like that. Just migrating to more modern cryptography has been extremely slow, and now we want to replace that “modern,” air quotes, cryptography with possibly PQ stuff. So I’m not holding my breath.

David: I think in terms of path dependency and incremental rollout and countersignatures, there’s actually a link there between the problems of how you roll out Let’s Encrypt, how you roll out DANE, and how you roll out CT. If you wanted to do a countersignature scheme instead of CT, you end up with some clients wanting to trust different things. The obvious failure mode for a countersignature is: okay, so web browser A says, oh, we’ll just do a simpler version of CT where the root program for browser A will countersign, and then browser B says, fuck you, browser A, I don’t trust your signatures, I’m also going to countersign. And now you end up in this world where everyone would have to pick which countersignature they were going to accept, or you’d have to send one signature per root program on every connection. Similarly, you can’t start a new CA without getting trusted in advance, either, except by being countersigned by someone else, because there is no way to indicate what it is that you trust within TLS and within the protocols that we have on the Internet — which is both good and bad, and a symptom of the path dependence, I think.
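The CT design being contrasted with countersignatures here works because anyone can verify a log inclusion proof against a single signed tree head, rather than each root program signing everything. A minimal sketch of that inclusion-proof check, using the RFC 6962 hashing convention (0x00 prefix for leaf hashes, 0x01 for interior nodes), over a hand-built four-leaf tree:

```python
# Minimal sketch of a CT-style Merkle inclusion proof (RFC 6962 hashing):
# recompute the tree head from a leaf plus its audit path of sibling hashes.
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def leaf_hash(leaf: bytes) -> bytes:
    return h(b"\x00" + leaf)          # 0x00 domain-separates leaves

def root_from_proof(leaf: bytes, index: int, path: list) -> bytes:
    """Walk up the tree, combining with each sibling on the correct side."""
    node = leaf_hash(leaf)
    for sibling in path:
        if index % 2 == 1:            # we are a right child
            node = h(b"\x01" + sibling + node)
        else:                         # we are a left child
            node = h(b"\x01" + node + sibling)
        index //= 2
    return node

# Tiny 4-leaf tree built by hand, then verified via the proof for leaf 2.
leaves = [b"certA", b"certB", b"certC", b"certD"]
l = [leaf_hash(x) for x in leaves]
n01, n23 = h(b"\x01" + l[0] + l[1]), h(b"\x01" + l[2] + l[3])
root = h(b"\x01" + n01 + n23)
proof_for_2 = [l[3], n01]             # sibling leaf, then sibling subtree
print(root_from_proof(b"certC", 2, proof_for_2) == root)  # True
```

Because the proof verifies against one tree head, a client only needs to trust (or gossip about) that head — no per-root-program countersignature on every certificate.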

Eric: Yeah, no, I don’t think a variant of CT that was countersignature-only is a viable story now. Maybe if we had a time machine it would be, but now it’s too late. And for the same reason — going back to X.509 — the same reason we’re carrying all this X.509 baggage around is that it’s too hard to change, and it’s easier to ignore it. Right? Go back to TLS 1.3: TLS 1.3 literally has mechanism in it that’s designed to make it look like TLS 1.2, solely because there were devices on the Internet which freaked out if you tried to send them things that looked too different. And there’s literally a feature in TLS 1.3, co-designed by, I think, Kyle Nekritz from Facebook and a bunch of people from Chrome, that basically makes TLS 1.3 look like a TLS 1.2 session resumption. The Chrome guys discovered that there were too many errors from the middleboxes, and Nekritz was like, well, why don’t we just make it look like resumption? And everyone was kind of like, well… right. And that’s what we did.

David: My claim to fame is I taught Kyle Nekritz at Michigan, all right? And then I was still at Michigan when he fixed TLS 1.3 — didn’t even have my PhD yet.

Deirdre: Okay, before we switch to another thing: we talked a little bit about how these post-quantum signatures are too damn large, and we talk a lot about how that affects — well, “we,” the people who are working on this sort of thing in TLS. And, you know, David occasionally will ask me, like, are you sure there aren’t any smaller fucking PQ signatures? And I’m like, there’s one, and they’re ghosting us all, and everyone’s had their hands bitten by SIKE and SIDH being killed. But usually we’re thinking about how we upgrade TLS 1.3 handshakes, with all five or six signatures in a handshake, including the SCT signatures. But also: how do these big effing signatures and public keys affect actual operation of CT logs? If the answer is just, literally, they are slower — do we find ourselves back at something closer to when CT was originally rolled out? Which is: we thought it would take longer to get these into the log, therefore we have SCTs, and not just “here it is in the log, go look it up immediately.” Or is it just, no, we’ll deal with it, we’ll just hash faster or something like that?

Eric: I think actually David probably knows this better than I do. Happy to riff on it, though.

David: In my experience, if you imply you’re going to delay a post-quantum transition until the cryptographic options get better, a variety of cryptographers will get mad at you in real life. But I think that is an interesting question that I’ve been thinking about a lot recently: how hard should we try to rework the PKI to get the properties that we want, while simultaneously going post-quantum and probably sacrificing some amount of something else — versus, can we just sit back and wait, and maybe the options will look better by the time we’re deploying this anyway, especially if we focus on the migration aspects of it before deciding specifically what it is we’re migrating to.

Eric: I feel like there are three angles people have taken on this. One is bite the bullet and just be like: we’re going to cross out ECDSA and write ML-DSA everywhere, and everything will suck and be slow. The second is, I think, the one you just alluded to, which is watchful waiting: let’s just keep our fingers crossed and hope it doesn’t get too bad before someone comes up with a better algorithm. And the third is some real radical-surgery stories, which I think is a bucket that AuthKEM kind of falls into, and maybe Merkle Tree Certs — I’m not sure.

Eric: Merkle Tree Certs definitely have more of an incremental-deployment-style kind of flavor than AuthKEM does. And Dennis Jackson’s — what does he call it — compressing the intermediates by… Abridged Certs, there it is. Yes. That falls more into that first, paving-the-way kind of category, right, for version two. My fear, honestly…

Eric: But my sense is, even with everything we know how to do that is clever, the post-quantum certs are going to be so grim that we’re not going to get much in the way of deployment with the current state of algorithms. Even if we went all the way to Merkle Tree Certs and abridged certs and everything — are people really going to suck up ML-DSA in the system? It’s one thing for Google and Facebook and some other people to have some ML-DSA certs, and for, like, 1% of connections to agree to negotiate them. It’s quite another for basically everybody in the world to have an ML-DSA cert. And, you know, the security properties of this signature thing are really different from the security properties of the key exchange thing, which is to say: if you and I do ML-KEM, then we are protected, and we don’t really care what everybody else does. But in order for you, the server, to get benefit from having ML-DSA, you need me, the client, to refuse to trust an ECC certificate from you. Right? And so in the big version of that, the clients can’t do that until basically everybody in the world has an ML-DSA cert. In the smaller version…

Eric: And I had some hand-waving about this in a recent newsletter post. You know, maybe there’s some way for sites to say: you generally would trust an ECC certificate, but for me, I want you to only do ML-KEM, right? So you get some value incrementally, but —

Deirdre: Or ML-DSA.

Eric: Yeah, yeah. But at the end of the day, my sense is, with the current state of play, we’d be lucky to see 10% deployment penetration in the next five or ten years. And then the question is: how much does that move the ball forward? In the world where suddenly we have a quantum computer, how much does that actually accelerate the transition, versus just starting the minute the quantum computer is revealed? I don’t think anybody knows.

Deirdre: Right.

Eric: Because, I mean, the day that happens, suddenly a bunch of unpalatable alternatives start to look a lot more palatable, and suddenly it’s like: well, I guess we do have to absorb ten-kilobyte certificates, because it’s that or nothing. And nobody does that now.

Deirdre: Yeah. I think, at least for anyone that has to interface with the US government, FIPS compliance — and anything else that needs to be sold to the US government — is really pushing at least support, if not adoption, for this stuff. And in my opinion, if we have at least support for these things, we’ll be in a better place. But of course with signatures it’s less of an urgency against an adversary, so much as: we know it takes so long to get the ecosystem to update to anything. So trying to get something working now, so that you at least have the option of, like, an ML-DSA certificate supported in TLS 1.3 and so on, and validated by a client — just having that available will, I think, assuage a lot of people, even if it’s going to be — I don’t know, five to ten years for significant adoption is probably optimistic.

Eric: Well, I think that’s in the absence of more thrust, right? I mean, the problem is it’s just not a high priority for anybody, because the algorithms are bad and we’re not exposed in any way yet. Key exchange, I think, is a high priority for people, and they’re pushing it, right?

Deirdre: Yes, exactly. Yep. The hybrid KEM is supported by Chrome and Cloudflare, and I think Cloudflare is already reporting 13% of their negotiations are using that hybrid KEM, which is amazing. It’s only been out for a couple of weeks and it’s already at 13%, I think, of their TLS 1.3 connections or whatever they said. That’s amazing. So that’s very good. And we can afford that. It definitely helps that ML-KEM is not that much bigger.

Deirdre: It’s about the same size as, say, your RSA or something like that — big things like that — in your key agreement, in your handshake. For signatures? Yeah, you know… smooshy. I don’t know.
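The asymmetry being discussed — hybrid key exchange is affordable, post-quantum signatures are not — can be put in rough numbers. The sizes below are the published parameter sizes for ML-KEM-768 (FIPS 203), ML-DSA-44 (FIPS 204), X25519, and Ed25519; the count of five signatures per handshake (handshake signature, chain signatures, SCTs) is the episode’s own back-of-envelope figure, not a spec.

```python
# Back-of-the-envelope byte counts: hybrid KEX overhead vs ML-DSA signature
# overhead in a TLS-like handshake. All sizes are published parameter sizes.
MLKEM768 = {"public_key": 1184, "ciphertext": 1088}
X25519 = {"public_key": 32, "peer_share": 32}
ED25519_SIG, MLDSA44_SIG = 64, 2420

# Hybrid key exchange: both shares travel once each way.
hybrid_kex = sum(MLKEM768.values()) + sum(X25519.values())
classical_kex = sum(X25519.values())
print("extra KEX bytes:", hybrid_kex - classical_kex)   # 2272

# Signatures: say 5 of them (handshake sig + chain sigs + SCTs).
print("5 classical sigs:", 5 * ED25519_SIG)             # 320
print("5 ML-DSA sigs:   ", 5 * MLDSA44_SIG)             # 12100
```

A couple of extra kilobytes once per connection is tolerable; an extra ~12 KB of signatures (before even counting the larger public keys in the certificates) is the grim case Eric describes.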

Eric: I mean, you’re much more tuned into this world than I am. Do you have some sense that there’s stuff on the horizon? I mean, do we have hope?

Deirdre: This is what’s really frustrating for me, because we do have hope in the one isogeny-based signature scheme that is part of the NIST post-quantum signature on-ramp — there are like 40 other signature schemes, because NIST was like, we want something that’s not lattices, and we want smaller, please. SQIsign is extremely competitive there, but the community of isogeny cryptographers that contribute to SQIsign are just kind of floating out there. They didn’t show up to the latest NIST meeting, and 39 other signature scheme teams did, including MAYO and UOV — these other schemes, based on multivariate stuff or whatever, that are competitive in their own ways: very small signatures and huge keys that make them more appropriate for different parts of TLS, blah, blah, blah. I honestly think that SQIsign would hit the sweet spot on all these axes — small signature, small key, decent enough performance — to make everybody happy.

Deirdre: But I don’t know where the SQIsign people are.

David: Well, SQIsign on paper needs a 10,000x performance increase to be viable. Yeah, actually, that sounds more reasonable to me than just, like, coming up with new math. Right? We’re a lot better at computers than we are at new math.

Deirdre: The latest implementation rumor I heard is, like, two milliseconds or something like that. It’s definitely within the realm of competitive. And it’s like — they’re not out there, they’re not aligned and pushing SQIsign, and I think they’re missing an opportunity. So I don’t know, I keep sort of needling people, like, hey, what’s going on here? Everyone’s kind of going off in their own corner doing other isogeny stuff, I guess.

David: UOV is a particularly interesting point in the design space, because it has signatures of a size in between an elliptic curve and RSA — 128 bytes — but it has 66 KB public keys. That’s actually fine for pre-distributed key sets that aren’t that big. So you’re not going to be seeing a root store of UOV keys, where there’s like 100, 200, 300 keys.

Deirdre: And the math is attractive.

David: Could see a CT log list of UOV keys where there’s only like five to ten of them.

Deirdre: Yeah.

David: And it’s based on an NP-hard problem, I believe, which is actually somewhat novel for cryptography.

Deirdre: Yeah, the math is nice to implement, and it’s just kind of stood the test of time. So it’ll be interesting. And there’s probably time between now and, I don’t know, D-day, Q-day, whatever.
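The arithmetic behind David’s point is worth spelling out: 66 KB UOV public keys are hopeless in a per-connection certificate chain, but tolerable as a small pre-distributed list, since only the 128-byte signatures travel per handshake. The root-store and log-list counts below are the illustrative figures from the conversation, not real inventory numbers.

```python
# Rough arithmetic on UOV deployment shapes: big keys shipped once vs
# small signatures sent per handshake.
UOV_PK = 66 * 1024   # ~66 KB public key
UOV_SIG = 128        # 128-byte signature, between EC (~64 B) and RSA (~256 B)

for name, n in [("root store", 300), ("CT log list", 10)]:
    kb = n * UOV_PK / 1024
    print(f"{name}: {n} keys = {kb:.0f} KB shipped once")  # 19800 KB vs 660 KB

print(f"per-handshake cost stays {UOV_SIG} bytes per signature")
```

A ~20 MB root store update is painful; a ~660 KB log-key list, updated rarely, is plausible — which is why the small, fixed set of CT log keys is the attractive home for a scheme shaped like UOV.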

Eric: So how did you feel when, for a moment, we were all worried that there actually was a good attack on lattices?

Deirdre: Oh, I was looking at it, and I was taking the authors at their word that this would affect basically all lattice constructions more fancy and more complicated than signatures and key agreement — that, like, the ratio of the q to the, who’s-he-what’s-it, the size of the whatever: if it’s close, you don’t have enough room to leverage an attack. Which was the case for ML-KEM and ML-DSA and the Falcon-ish things — basically the non-fancy shit. The other fancy shit — anonymous credentials, fully homomorphic encryption, which is behind a lot of the things that would have put post-quantum private information retrieval on the map — would all be just completely blown away. And even if that were the case, we’d still have a ton of cryptography out in the world.

Deirdre: Like, a lot of the private relay stuff uses things that are basically blind signatures in one way and anonymous credentials in another way, and finding a post-quantum version of any of the more fancy crypto pretty much relies on fancy lattice settings. And it was just sort of like: welp, fucked. We have no idea how to do any of the fancy stuff without lattices. There might be a couple of solutions, but they’re incredibly non-scalable, non-performant, non-adoptable. So where I landed was: this is going to make people trust lattices less, and it might slow adoption, but honestly, we would still have ML-DSA and Falcon and ML-KEM. And then two weeks later, two independent researchers were like, we found a bug in the algorithm.

Deirdre: And the author said: yep, nope, this is broken, my algorithm does not work. So that was a big relief. But people are still looking at the thing and trying to figure out: okay, maybe this part was broken, but there are some interesting ideas here, and there’s a leap here that we want to understand fully. So we don’t really know the blast radius of the paper yet,

Deirdre: even if the specific algorithm it proposed against lattices is considered broken and not real. Oh — not good. And this completely motivates why NIST’s approach is cool. We have three post-quantum schemes based on lattices, and we have some older ones based on hashes that we’ve trusted for a long time but that are only usable in certain scenarios — that’s SPHINCS+ and XMSS or whatever. We want something else, not necessarily based on lattices; we don’t want to put all our eggs in one basket, and we can’t use the hash-based ones for TLS, for example.

Deirdre: So this underscores that, precisely.

Eric: So what do you think we do if none of it works? What do we do if someone describes a post-quantum algorithm for breaking all this fancy stuff?

Deirdre: Sorry — a quantum-mechanical one.

Eric: So we have a quantum computer and we have no post-quantum cryptography at all.

Thomas: Global Kerberos.

Eric: Yeah, exactly. Yes.

David: Yes. Kerberos.

Deirdre: Yeah. Go talk to Adam Langley about that. Enough cryptographers in my world are basically like: I don’t think we live in that — is it, like, Minicrypt or one of the other algorithmic-complexity worlds? I forget what they all are and the differences between them. But it’s basically arguing that one-way functions do not exist, or something like that.

David: Minicrypt is the world where you still have one-way functions and symmetric crypto, but no public-key crypto — so it’s only secret-key crypto.

Deirdre: Okay, cool. I don’t think that’s true. I do not think that there will be one quantum algorithm that kills all of the schemes that we think are, at least right now, post-quantum secure, because they are based on very diverse problems. You’ve got your codes, you’ve got your multivariate, you’ve got your lattices, you do have your isogenies, and you’ve got your hashes. Literally, worst comes to worst, we have to tell everyone: cool, you’ve got your stateless hash-based signatures, we’re going to teach you how to use them, and it’s going to suck, but we will have them. So there’s at the very least that. But I do not think that we’ll actually end up in the all-you-get-is-hash-based-signatures world.

Eric: What, Merkel puzzle boxes for key exchange?

Deirdre: Yeah, or something like that. But it’s really tricky, because we have these teeny-tiny quantum computers and we don’t have higher-level — I mean, there are lots of people working on this. So when someone puts out a paper that’s 60 pages long and says, yes, I’ve broken all the lattices modulo the simplest possible constructions, it takes the whole world staring at this paper — the whole cryptographic world staring at this paper for two weeks — because only ten of them know quantum computing and quantum algorithms and lattice cryptography well enough to understand what the paper is arguing and then tell you whether it’s correct, or whether there’s a bug. And part of that is, we literally don’t have real quantum computers — that’s partly why we haven’t rolled all this out yet, because we’re not there yet, blah, blah, blah. But it’s much easier to falsify an attack algorithm that can run on a classical computer: you just go implement it and try to run it.

Deirdre: And if it doesn’t quite get there, you throw more computer at it and see if it’ll work; you can just try it. We can’t really do that with quantum algorithms yet. We’re in this hazy space. That’s unfortunate. Okay.

David: There’s also very little mathematical cryptanalysis in the United States, as an artifact of how the university system works. Here in the US, cryptography is usually in the computer science department; in Europe, cryptography is often in the math department. And math departments in the United States often do internships: if you do number theory and you are in a math department in the United States, there’s a good chance that you have at some point done an internship with the NSA, which then prohibits you from working on a variety of topics. And so, just structurally, you end up with not a lot of people doing mathematical cryptanalysis in the US, let alone quantum mathematical cryptanalysis.

Deirdre: Yeah, it’s annoying. And we want to talk to Nadia more about that sort of thing in the future, I think. Okay, to do a hard 180 — not a 180, but, like, away from post-quantum and back to transparency. So we’ve had Certificate Transparency, which is signatures over certificates in a Merkle tree — which is what we’ve deployed, whether or not it’s what we should be deploying now. And we have key transparency, which basically all these end-to-end encrypted messengers are deploying — unfortunately, besides Signal, but I hope they will get there — for your WhatsApps and your Facebook Messengers.

Deirdre: They’re deploying large-scale key transparency that uses more complicated cryptography to help audit it and check proofs, and to get a little bit more privacy about what’s in the log, so that people can’t just make random queries for someone’s phone number and get all their public keys and stuff like that. They have a little bit of other stuff in there. People have been talking about binary transparency for a while, and I am only aware of Android having binary transparency, for the Android OS.

Eric: That sounds right, yeah.

Deirdre: Do you have thoughts about binary transparency? Do we have a way to deploy simpler, nicer-to-run binary transparency, without all these Merkle trees, that gets us where we want to go? Or is it like, nah, it’s a different security model?

Eric: I was really bullish on this for a while, and then no one seemed to make much progress on it besides Android, and I’m not quite sure why. I know why we didn’t make much progress on it when I was at Mozilla, which is lack of thrust. Maybe it’s worth stepping back and asking what the nominal security guarantee is that BT provides — it’s related to the CT guarantee. So take a step back: you’ve got some source code, the source code gets compiled into binaries, and the binaries get shipped to people. What BT does is at least nominally ensure that the total set of binaries which it is possible to run on your computer, for a given application — whatever that application is, operating system, whatever — is known to the world. In the simplest version, merely the cardinality of those is known: namely, that there is a set of them and there are no more than N. Right? And so the attack that BT concerns itself with is that some attacker who controls the update channel on your computer — for instance, the vendor — is unable to substitute a malicious binary for a non-malicious binary.

Eric: I think it’s pretty unclear exactly what value this provides, except in relatively narrow circumstances, which is to say: there’s a lot of stuff on your computer, and if the vendor is trying to be malicious, why do they have to make a different binary? Why can’t they just make a binary that is vulnerable? It’s not clear to me what BT alone provides in terms of security. I think what BT is intended to be — I mean, I used to talk about this — is part of a broader ecosystem that includes reproducible builds and open source. And then the story becomes much more straightforward. The story becomes: look, the source code is public, and you can verify for yourself — or someone can; you actually can’t, because it’s too much work — that this source code compiles down to a given binary.

Eric: Here is a list of all the binaries that anybody can possibly install, and you can map each one of those back to source code. Then the theory becomes, at least in theory, that someone had an opportunity to review the source code and verify that it was non-malicious, and therefore you can convince yourself all the way down the chain that they haven’t substituted a binary that was bogus. I think that is a somewhat more plausible security story. I say somewhat more plausible in the sense that any modern piece of software is full of vulnerabilities, and finding them by inspection of the source code is incredibly difficult. So say you persuaded yourself that this is a legitimate copy of Firefox or Chrome. The next version of Firefox or Chrome is going to disclose a pile of vulnerabilities that were already in that version. Every release of one of these products comes with a list of all the memory vulnerabilities that were in the previous version — that were all there, and that nobody had found until just now.

Eric: So that's clearly the case, and it's not clear what great set of security guarantees it provides. I think this is particularly weak in cases where you have an end-to-end system which is completely controlled by the vendor, where you can't see the source code; then the whole thing just falls apart. And to circle back, you probably remember when Apple had this stuff that was supposed to do CSAM detection for things uploaded to iCloud. They went to quite a bit of effort to demonstrate that you could convince yourself that the list of hashes they were checking against was a valid list. And I was like, Apple controls the entire stack, top to bottom. If you don't trust Apple not to lie about this, why do you trust anything else they're saying? Right? So I do worry that more and more software now has the property that vendors can auto-update any time they please, and I worry about the vendor's ability to auto-update with malicious binaries.

Eric: The question of whether they update with customized malicious binaries, which is really what BT is intended to prevent, I think is a sort of weaker threat model. So I guess I was pretty enthusiastic about this for a while, and I still think it'd be good, but it's lower on my stack of priorities than it used to be. And to circle back to what you were saying, David, about people wanting to use a CT log for this: when we first looked at it, I think it was Richard Barnes who said, well, we don't want to stand up a whole new log. So what we're going to do is basically get a certificate for a name like hash-of-binary.bt.mozilla.com, and then we'll just get it issued with Let's Encrypt, and then it'll be in the CT logs. Look at the CT log and you can verify the whole chain, so it doesn't need a new log.
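The hack here is to smuggle the binary's hash into a certificate's domain name, so that Let's Encrypt issuance drops it into the public CT logs for free. A minimal sketch of constructing such a name (the `bt.mozilla.com` zone comes from the transcript; the two-label split is an assumption I'm adding, since a 64-character hex digest exceeds the 63-character DNS label limit):

```python
import hashlib

def bt_domain(binary: bytes, zone: str = "bt.mozilla.com") -> str:
    """Encode a binary's SHA-256 into a hostname, so that obtaining a
    certificate for it leaves a public record in the CT logs.

    A SHA-256 hex digest is 64 characters, but a DNS label maxes out
    at 63, so split the digest across two labels.
    """
    digest = hashlib.sha256(binary).hexdigest()
    return f"{digest[:32]}.{digest[32:]}.{zone}"
```

Anyone auditing the CT logs could then recompute a release's hash and look for the matching name, which is the "signatures of signatures" trick Deirdre mentions next.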

Eric: We never actually did this, as far as I can tell, except maybe for a test or two. But that was the level of glue people were getting excited about back then.

Deirdre: Yeah. Now you're giving me too many ideas, like I could just shove things into these CT logs and they'll do all the work for me. If all you want is signatures of signatures, basically, or whatever. That's interesting, because some of the threat model is literally: do you trust the vendor, either the OS vendor or an application vendor, to give you the right one? But I'm also thinking of the case where you're on a network, for example, and someone is tampering with the downloads you get to upgrade your OS or upgrade your application. There are other ways to do transparency that basically involve gossip. It's sort of like saying, I've seen these binaries, or I've seen these keys, for example.

Deirdre: You gossip the keys that you've seen and you vouch for them, like, I've seen them over here, and you gossip that to your friends or anyone you connect to. And if your local network is serving you a very suspicious version or hash of a binary or patch update that no one else sees, or that isn't present in the binary transparency log, you go, don't do that. Or the application, in theory, would be checking these gossip logs or checking the transparency log and saying, we're just not going to apply this update, and we're going to notify you that we're not updating. Maybe go to your home network before you update.
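The client-side policy Deirdre describes can be sketched as a simple decision rule: accept an update only if its hash appears in a transparency log, or if enough gossip peers report having seen the same hash. This is an illustrative sketch (the function, the peer count threshold, and the data shapes are all assumptions, not any deployed protocol):

```python
import hashlib

def should_apply_update(update: bytes,
                        log_hashes: set[str],
                        peer_observations: dict[str, int],
                        min_peers: int = 3) -> bool:
    """Apply an update only if its hash is present in the transparency
    log, or if at least `min_peers` gossip peers report the same hash.
    Otherwise refuse, on the theory that an update only our local
    network has seen is suspicious."""
    h = hashlib.sha256(update).hexdigest()
    if h in log_hashes:
        return True
    return peer_observations.get(h, 0) >= min_peers
```

A real client would also need to authenticate the gossip channel itself, otherwise the same network attacker can forge the peer observations.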

Eric: Hopefully the updates are signed, so that shouldn't be possible, right?

Deirdre: Exactly.

Eric: I mean, I'm not sure how many people actually sign their updates, but they should. That of course creates a new problem of managing those keys, which can be quite an operational problem for a smaller shop.

Deirdre: Yeah, but there's the question of how much you worry about that sort of attack model versus literally what you just said, which is that software is full of vulnerabilities, even from highly resourced teams that understand how important it is for these piles of software to be secure. I think Chrome patched a zero-day in the wild just recently. It's hard to write secure software. So why would an attacker go through all the effort of getting in the way of what downloads you're getting and serving you a bad one, and then possibly getting caught (if you don't have transparency, maybe it wouldn't be detected), when they could do something undetectable and literally leverage a zero-day? And the answer to that is, well, sometimes those things are more expensive and require more skill than just doing a network block and making sure that everyone downloading Chrome updates or WhatsApp updates inside your network is downloading the bad ones you made, a one-time cost to target all of them, or something like that. So it's all belts and suspenders for all these various things.

Eric: We also should mention that it's not just the binary now. Lots of modern software is remotely controlled by the vendor, and the number of different knobs that a modern piece of software has, each of which, turned the wrong way, can be very, very dangerous, is extraordinarily high. As a concrete example, what if you update the root list? The root list is compiled in; it's just a file. I don't know what Chrome has, but do you have an option for testing that suppresses certificate validation? Basically every single thing that has any impact on the behavior of the system has to get the same treatment. So I think you're right, I think this is something which has value, I just...

Eric: The question is, how much, and where are we going to spend the relatively small amount of resources we have? When I thought it was easier, I was more enthusiastic about it, and now, no one seems to have done it, as you say; the only people that did, Google, and it seemed to take them a really long time. And so I sort of wonder, how hard is it really, and is the juice worth the squeeze?

David: Yeah, it was worth the squeeze in the web PKI though, to bring it back to CT, right? We were in a world where the best way to attack Google users was to own DigiNotar and man-in-the-middle their traffic. And now that doesn't...

Thomas: Yeah, we also don't talk a lot about the CT wins, but there are some. Not just the conceptual response to DigiNotar, but I think CAs have been...

Deirdre: Distrusted because of it, oh yeah, definitely. I'm going to bring us all the way back to the beginning, which is doing things securely on the web. We've talked about network-layer security, we've talked about transparency, we've talked about privacy. What if I just want to write a web app or a website and make sure that I can do it with a secure software model, so that, say, I can do end-to-end encrypted chat, or end-to-end encrypted video calling, and things like that, sort of, if it's ephemeral? What if I wanted to deploy software with some of the security guarantees that I get if I deploy an app in an app store for your iPhone or your Android phone? I think there have been some stabs at trying to do things that are not quite binary transparency for the web, but just sort of web app security improvements. Do you see anything in that arena? Do you think that's actually going to improve web devs' lives anytime soon? Because I just keep seeing WhatsApp deploying: make sure you install our Chrome extension so that we will actually check that you're running the correct version of our website that's connected to your WhatsApp account, so that you can run WhatsApp Web securely.

David: And that's dependent on Google, or on Meta and Cloudflare's relationship. And it effectively has the property that all it does is verify that Meta and Cloudflare are not serving two different copies of the app to two different people, which is a pretty weak guarantee.

Eric: Yeah, I mean, obviously once you have the add-on, you could make something that was better than that. But it's probably worth dialing back to threat models for a second. I think there are two threat models one has to be concerned with, right? The traditional web threat model, of course, is that the site can do anything it wants, and you're only concerned with other people, who are unauthorized, messing with the site, hence CSRF and XSS and stuff like that. And then in the zone you're talking about, there's a weak threat model and a strong threat model. The weak threat model is: I don't want any clown who controls access to the Facebook site or the WhatsApp site to be able to deploy things that compromise the user, but you're trusting Facebook at large, right? An example of that would be Facebook saying, why don't I build something that makes it so random engineers can't compromise your chats? So you can imagine a code-signing kind of thing, but without a transparency mechanism.

Eric: And the idea would be to gate access to the site. So ordinarily you have some site that anybody who works for the company can push to, and then you say, well, actually, only three people are allowed to sign new binary builds. Right?

Deirdre: Yeah.

Eric: And you see this in software companies in general, where you have a team of developers and hundreds of people can commit to the thing, but only five of them can do a release, right? Yep. That's sort of the weak model. And the strong model is: I'd like to be assured that no matter how malicious the site becomes, it can't access the data. That's much harder. And I think there's also a kind of forward-secrecy version of this, which is to say, as you indicated earlier, the property that end-to-end encrypted video calling is supposed to provide is that as long as the site isn't compromised at the time of the call, no one can read the data. Right. And that just doesn't work for messaging, because of the persistence properties of messaging. But what WebRTC does not protect against is the site being compromised at the time you're making the call; then it's game over. Right.

Eric: We actually spent some time trying to address this problem in WebRTC, with, I think, very limited success. We designed a whole mechanism where the audio and video would be isolated from the site, and there would be digital signatures and authentication, blah, blah, blah, and we kind of tried to build something, and no one wanted it. Now you can say we misdesigned it, which is entirely possible, but we took a stab at it, and, talk about momentum, it wasn't one of those things where people would call us and say, well, this kind of stinks, could you make it better? It was more like, cool story, but we don't want it. I hear there's interest in this in the MLS world, in terms of what's called WebMLS, which is the idea of embedding the MLS stack into the browser. There's noise about this. I don't know how much signal there is, but there's noise about it.

Eric: I've seen discussions of designs. Emily Stark had a nice post looking at this, and I think that was really leaning into the BT version of it. Namely, if I remember correctly, the idea was to validate the binaries. There's another version of this that is a sandboxing version, where you say, no, actually, this stuff has to live in some isolated sandbox, and that's the part that's protected, and everything else is untrusted. The complicated problem, I think, is that websites are very dynamic objects. What I mean by that is that all those different bundles that get loaded, people change them constantly, much more often than software releases. So what kind of statement are you actually making? And David, you said a minute ago that they serve one version to one person and another version to another person. That's the normal state of affairs on Facebook: they're running A/B tests, so it's one version versus another.

Eric: Right? And so the statement you would allegedly make with binary transparency for an application is: there's a very small number of copies of the application, and you know you got one of them. The statement you would make with binary transparency for a website is: we served 500 different copies of the website in the past 20 minutes, and you got one of them. I don't understand how that really works. So I think we have to step back and ask what security property we're trying to enforce before we even talk about mechanism. This is one reason people are trying to think about cabining off the pieces of a system that can access the dangerous parts. So if you said, for instance, and we're just spitballing here, obviously, that all the crypto was in some box that you actually can't access from the JavaScript, and all you can access is the content, then maybe it's a slightly better situation. Probably not that much better, maybe a little better. So I don't know, I'm actually pretty worried about the whole thing, and I think our story right now is: if you really want that, you have to run an app.

Deirdre: Yeah. And you're serving five million recently changed versions of the app from an application server because, you know, we just have a bunch of options and we want a bunch of options. Part of it is also the dynamics of the application platform, which is that you just inline things into your web page however you want to do it, either into your JavaScript bundle or into the HTML that you're rendering on the fly. And that's how you configure the thing for a specific user that you've identified via cookies or any other identifying information, maybe something in the URL. On the other side, on a platform like a mobile app, it's costlier to configure something for every individual user, so you don't; you do one per language, or one per region, or something like that, and then you shove in the configuration via a configuration blob that you download or cache or whatever. I've worked on web apps that are more like the latter than the former, but the former is also completely viable in the world, and depending on your scale you might favor it, because you can do it all dynamically with compute on the edge. There's a bunch of stuff that can favor one or the other.

Deirdre: But I wouldn't be surprised if there were developers that lean towards the application model, because it lets them do this development a little more securely than on the web.

Eric: I do want to say that I don't think end-to-end encryption, even if it's entirely in JavaScript, is entirely worthless. I don't think it's great. But look at what your concern is: your concern is very different if all this data is in the clear in logs on the server than if someone has to actually serve malicious JavaScript to exfiltrate it. So I do think that the more we do end-to-end encryption, even on the client, even if you are ultimately trusting the site to provide you with valid JavaScript, that's an incremental improvement. And then maybe later we can figure out how to make sure the site provides you valid JavaScript, but that would still be an incremental improvement. And I think the theme of this entire conversation has been: incremental improvement is the way the Internet advances, right?

David: Yeah, I think there's room to write a systematization-of-knowledge paper, or similar, that just attempts to tease out the full spectrum of security properties and threat models for the vague space of integrity and binary transparency for applications, and web applications specifically, because I think these things end up being really difficult to talk about. And there are things that are obviously good. If that one video call provider was able to do MLS on the web, that would clearly be good: to have the video be encrypted on that call for the life of it. But that's a different threat model from Signal, and a different threat model from Facebook Messenger. Being able to articulate that better, I think, would be highly valuable.

Deirdre: Not it, I'm not writing that white paper. But it's interesting, because Facebook Messenger just rolled out end-to-end encryption for all of its clients, including its web clients, and they fit it into the existing expectations of their user base. They had a deployed user base and a deployed feature set, and they didn't want to just take all of their users and be like, cool, now you're encrypted, and also none of these features work anymore, because you're encrypted now and we can't do all the fancy shenanigans we used to do on the server for you, because we don't have access to your messages. They didn't do that. They re-engineered a whole ton of stuff to keep the feature set complete, but it also forced them into solving a whole other problem of securely backing up and storing plaintext messages, which they call Labyrinth. There's a whole bunch of other engineering they had to do, securely, to route around this sort of stuff.

Deirdre: And figuring out where the margin is between making these incremental, better-than-nothing improvements and getting bang for your buck, versus it being worth a very large investment in a significant overhaul, could be a whole other conversation. With the caveat that the web and the Internet are not the same as Meta, who controls all of their endpoints: they control not the web, but facebook.com and messenger.com and all this sort of stuff. They control a lot of what's needed to make that sort of huge overhaul happen, and that's not necessarily doable on the Internet at large. Anyway.

Thomas: Yeah, it is my feeling that we could probably go on for six more hours. So I guess if I were going to wrap this up one way, I would ask Eric: you've been doing this forever. We've come a long way in standardization and thinking about threat models, from the bad old days in the late nineties, when the IETF chased Phil Rogaway off the list because he was pointing out the CBC thing, to where we are now, where we have a panel of people doing formal verification stuff. I'm guessing you would tell me that you're pretty bullish about how things are looking for standardized, interoperable cryptography done by groups of smart people, with a linkage between academic cryptography and the practitioners doing the standards work, right? I'm getting an optimistic vibe from this, even though we're talking about failure modes and everything. Am I crazy?

Eric: No, no, I'm very optimistic about it. I was kind of worried, frankly, after TLS 1.3 that all those people were going to run away, and a side effect of doing MLS is that we managed to keep them involved, and in fact, I think, managed to create in some ways a dynamic where they were even more involved, where they were doing a lot of the main design work, which was really fantastic. And I think things like this review panel are another way to keep them involved. The thing that took us forever to figure out was how to make it a win-win, how to make it so they weren't just doing community service but were getting something out of it. I think that's started to happen; now people recognize this has real value. As I sort of indicated, the next frontier is showing not only that people who are effectively academics can make progress in this work, but that new primitives have a place in standardized protocol deployment, and that we can take those new primitives and bring them out in finite time.

Eric: Because when I talk to academics, what they're most concerned about is impact. If you build something cool, they want to make sure it goes out there. So if we can tell them, look, this is great, and with a fairly modest investment from your side we can actually get that out there, then I think there's a good story here. And I think we have several success stories: VOPRFs and HPKE, you know, Prio, and things like that as well. So if we can keep that rolling, I think we're actually going to be in pretty good shape. I'm pretty happy with how this turned out. I felt like the communities were not talking well, and they're talking a lot better now.

Thomas: Well, that is a good place to leave it. Eric, I’ve been a fan of yours since I was hacking on SSLDump. It’s been very awesome hearing all this stuff and getting to talk to you. So thank you so much for taking the time to talk to us.

Eric: Thank you for having me.

Deirdre: Security Cryptography Whatever is a side project from Deirdre Connolly, Thomas Ptacek and David Adrian. Our editor is Netty Smith. You can find the podcast online @scwpod and the hosts online at @durumcrustulum, @tqbf, and @davidcadrian. You can buy merchandise at https://merch.securitycryptographywhatever.com. If you like the pod, give us a 5 star review on Apple Podcasts or wherever you rate your favorite podcasts. Thank you for listening.