WhatsApp has announced they’re rolling out key transparency! Doing this at WhatsApp-scale (aka billions and biiillions of keys) is a significant task, so we talked to Jasleen Malvai and Kevin Lewi about how it works.
Links:
- https://engineering.fb.com/2023/04/13/security/whatsapp-key-transparency/
- https://github.com/facebook/akd
- Parakeet: https://eprint.iacr.org/2023/081.pdf
- CONIKS: https://eprint.iacr.org/2014/1004.pdf
- SEEMless: https://eprint.iacr.org/2018/607.pdf
- WhatsApp Security Whitepaper: https://www.whatsapp.com/security/WhatsApp-Security-Whitepaper.pdf
This rough transcript has not been edited and may have errors.
Deirdre: Hello, welcome to Security Cryptography Whatever. I’m Deirdre.
Thomas: I’m Thomas.
Deirdre: And we have two special guests today: Kevin Lewi. Hi Kevin.
Kevin: Hello.
Deirdre: And Jasleen Malvai. Hi.
Jasleen: Hi.
Deirdre: Hi. Welcome. We have our special guests on the pod today because they have been working on WhatsApp’s newly announced key transparency deployment, which is extremely exciting (yeah, woohoo) because I work in the privacy field a little bit, and whenever you have anything involving end-to-end encryption, you have to worry about trusting the keys that someone gives you. Usually, the way this works in an end-to-end encrypted system like WhatsApp or Signal is that you just have a hash or a code of someone else’s public key, and it’s sort of up to you to check it and see if it’s fine. Which, you know, if it’s all pairwise and the keys don’t change that often, might scale.
Thomas: This is the biggest problem in secure messaging, I think, period. Every time we talk about secure messaging, we talk about post-compromise recovery and ratchets and how modern the constructions are. But the problem is that if you don’t know precisely who you’re encrypting to, then you might as well not have end-to-end encryption.
Right? Because if the server, or whatever the coordinator in the system is, can swap out keys and you won’t notice, then they can transparently make you encrypt to anybody. There are secure messaging systems that have been built that have not only no key transparency, but no notion in the UX of what the keys are at all. It’s just a name that happened to have logged into a webpage somewhere. And this is still a really big problem in Signal too: what is the UX of how we’re going to know who we’re talking to? As I understand it, we’re talking about one big class of solutions to that problem today.
Deirdre: Yes, or at least one big class of solutions that has had modest deployment at scale. I think about *-transparency, <insert your thing> transparency; the biggest successful deployment so far has been certificate transparency. Those are big, long Merkle-tree-based logs that your certificate authority is now basically required to maintain, and the browser, or whoever is checking your TLS certificates, will check that you have a proof when you make your connection, and that the thing you’ve proved is included in a log. They check that your SCT verifies or whatever, which means the certificate is in a log somewhere: you didn’t just hand someone a random certificate, it was actually issued by a certificate authority in the past.
Keys for end-to-end encryption, though: I don’t know if any other system has deployed a key transparency solution anywhere close to the scale that WhatsApp is trying to achieve, which is over two billion users. And that’s not even counting how many multi-device keys those users have and how often those keys get rotated, because, keys. Phones fall in the ocean or the toilet all the time, or you reinstall WhatsApp and you have to redo a key.
Kevin, can you talk a little bit about the problem space you were trying to solve for when you were figuring out a key transparency solution?
Kevin: Yeah, yeah, sure thing. And I do think we should credit Keybase, actually, as one of the first
Deirdre: Oh
Kevin: who actually did launch key transparency. Obviously at a different level of scale, but a lot of credit goes to them for at least deploying something that tries to solve this problem.
But yeah, also in general, just to touch on the way we think about key transparency and why it’s important: it’s exactly this issue of, if I were to, say, contact you, Deirdre, for the first time on WhatsApp, what does that look like? I first ask for your phone number, and then I go into WhatsApp and I type in your phone number and it gives me a public key.
And then I use this public key, along with my own, to establish a secure channel between us. But critically, I’m trusting WhatsApp to have given me the correct public key associated with you. Right. And so the way this is usually handled in encrypted messaging apps is this QR code scan, key fingerprinting it’s called, which is basically: we hash the pair of our public keys together, and then if we meet up in person, or fire up a Zoom call and show each other the QR codes, we can verify this.
Right.
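For concreteness, here is a minimal sketch of the kind of fingerprint computation Kevin is describing: hash the pair of public keys so both sides can compare the result out of band. The key ordering, hash choice, and hex output are illustrative assumptions, not WhatsApp’s actual safety-number scheme.

```rust
// Toy fingerprint over a pair of public keys. Illustrative only: the
// ordering, hash choice, and output format are assumptions, not
// WhatsApp's actual safety-number computation.
fn fingerprint(my_pubkey: &[u8], their_pubkey: &[u8]) -> String {
    // Order the keys canonically so both parties compute the same value.
    let (a, b) = if my_pubkey <= their_pubkey {
        (my_pubkey, their_pubkey)
    } else {
        (their_pubkey, my_pubkey)
    };
    let mut hasher = blake3::Hasher::new();
    hasher.update(a);
    hasher.update(b);
    // Render as hex; a real UI would encode this as digits or a QR code.
    hasher.finalize().to_hex().to_string()
}
```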
Deirdre: And I just wanna prove the point that you’d think this would be okay, but even the most paranoid cryptographer privacy people have just been like, fuck it, it’s fine, just click accept. Even now, even when they completely know the threat model, blah, blah, blah. So yeah.
Kevin: And it’s kind of this paradoxical thing too, right? Because if we didn’t have that UI, if there was no way to check your key fingerprint, then I’m not even sure we should call this a secure end-to-end encrypted messaging app, because you don’t know what the public keys are.
And so it’s like, okay, fine, we’ll have this UI that we know, at least from when I check, almost no one actually uses for all their contacts. There’s really low usability around this UI, and yet it is kind of the linchpin: oh, we just have this UI, and now we can call it secure end-to-end encrypted messaging.
And I think we as a community might need to think a little bit more about that, and whether we should slightly raise the bar for what we call end-to-end encrypted messaging, and whether the mere existence of that UI is enough. And also, as you mentioned, it gets more complicated: say you have a group chat with your family members, groups of size 10 or 20. Are you really gonna check every pairwise set of keys?
You also kind of need to either meet up in person or have some sort of pre-established channel that you trust for communicating these key fingerprints to someone else. And there’s also the problem of, okay, what happens when you get a new phone? These public keys are gonna rotate as you add devices. Do people redo this check every time someone gets a new phone?
Most likely the answer is no. And that’s kind of the motivation behind key transparency: we wanna make this a more automatic process, so that you don’t actually have to meet up in person. Now, I don’t think it’s possible to build a system that will completely replace QR code scanning.
At some fundamental level, you checking the hash of your public key against someone else’s hash is going to be better than any automated system, in terms of pure security. But for the people who can’t do that, or in really high-profile situations: imagine a journalist in a foreign country who is contacted by a whistleblower who wants to remain anonymous, where meeting up in person may not be an option, and one of these apps, Signal, WhatsApp, whatever, is their first point of contact. Then how do you know you’re talking to the right person, and not some man-in-the-middled version of them?
So those are really the scenarios we’re trying to target with this: how can we do this in a way that doesn’t require meeting up in person?
Deirdre: Yeah. And it seems the way you’ve been able to solve it is that it’s kind of done in the background, automatically. So the only time you’re asking a human to do something, or flagging something for a human, is in the exceptional case, which is in theory much, much rarer, and more likely to mean something bad is happening that requires exceptional action. As opposed to, you know, my dad got a new phone, or my friend reinstalled Signal or WhatsApp, and so on, where it’s just a nuisance, not actually a possible man-in-the-middle futzing with the keys I’m being sent while trying to establish a new end-to-end encrypted channel.
So for this work, you’ve implemented some code, and it’s all open source and it’s all in Rust, which I love to see. I was looking at the Rustdoc and learning about all the different kinds of proofs you include. Jasleen, can you tell us how this solution that WhatsApp has implemented and is getting ready to deploy evolved from the past few years of key transparency literature?
Jasleen: Yep. So a lot of people would probably agree that CONIKS was one of the early academic works in this area, and that solution basically evolved from the certificate transparency type notion, where you say: okay, let’s say we have a tree, and that tree is committing to a dictionary that maps usernames to public keys.
And then you basically keep a record of all the changes that are happening to each user’s public key. Now, the notion of transparency, which we maybe haven’t stated in quite this way, is basically: what I believe is my public key is what you believe is my public key. They formalize this notion and call it the server not equivocating about somebody’s public key.
And that simple solution is already much better than not having anything at all. There was some notion of privacy in this work too, but there are fairly heavy requirements on me to monitor my own key, in order to be sure that at any given time it’s not changing in a way that I don’t want.
Thomas: I’m gonna be wrong about this, but I’m gonna try to summarize what I understand CONIKS to be, right? We have a Merkle tree of the millions and millions of users’ keys; we just put ’em in a tree. Right? And then there’s a ticking clock of epochs that move forward, and at each epoch, the server restates the Merkle tree somehow. Not much has changed in the tree, but there’s a new sort of tree.
And we have a rolling hash that goes from tree to tree, from epoch to epoch, sort of like certificate transparency, where there’s a record of everything that’s ever been said about anybody’s key, recorded in this kind of chain of Merkle trees, right? Am I right about that? Okay. And then, if I understand it, the big thing we’re putting on users that’s kind of untenable is: this only works if you go back through every epoch that’s ever been published and make sure the server didn’t briefly say that your key was the NSA’s key for, you know, a while, and then put it back to your key.
Kevin: Exactly.
Thomas: Okay? I’m not wrong about that.
Jasleen: You are not wrong about that.
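A toy sketch of the epoch chaining Thomas just summarized: each epoch the server publishes a new root, and the roots get folded into a running hash, so the published history is tamper-evident. The field names and hash layout here are illustrative assumptions, not the CONIKS or akd wire format.

```rust
// Toy epoch chain: each new Merkle root is folded into a running hash.
// Layout and field names are illustrative, not a real wire format.
struct EpochChain {
    epoch: u64,
    chain_hash: [u8; 32],
}

impl EpochChain {
    fn genesis(first_root: &[u8; 32]) -> Self {
        EpochChain { epoch: 0, chain_hash: *first_root }
    }

    // Publish the next epoch: the new chain hash commits to the entire
    // sequence of roots so far, so a client who remembers one chain hash
    // can detect any rewriting of earlier epochs.
    fn advance(&mut self, new_root: &[u8; 32]) {
        let mut hasher = blake3::Hasher::new();
        hasher.update(&self.chain_hash);
        hasher.update(&self.epoch.to_le_bytes());
        hasher.update(new_root);
        self.chain_hash = *hasher.finalize().as_bytes();
        self.epoch += 1;
    }
}
```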
Deirdre: So, dealing with the problem that you need to download the whole history, was that addressed in SEEMless or in Parakeet?
Jasleen: SEEMless first addressed this problem, and basically they found a clever solution, which said: why don’t we make this data structure so that you can only add things to it? Which is something you may have seen if you’ve looked at the certificate transparency literature, except in certificate transparency,
you know, there’s a little bit of a lack of structure, in that you just append to the tree without deciding where in the tree it goes. So you can’t exactly say ‘this is the only set of changes that were made’ without looking at every single thing that showed up. I mean, that’s the case, right?
So in certificate transparency, essentially you’d have to scan the entire tree to know what changes were made to your key, even if you’re only looking at the most recent version of the tree and someone is checking that only updates are being published and nothing is getting deleted from the records.
And SEEMless basically said: what if we have this cute, clever notion where, for every account, for every key version, we add a new entry in the tree? Now, as a user, what I have to check is which version numbers exist for me. And since nothing is ever deleted, I can make sure that anything that was added for me, or on my behalf, will always remain in the log. And there are efficient ways to do this, basically.
Deirdre: Cool.
Thomas: So this is like: version three of my key will never change. If version three of my key changes, something terrible happened,
Jasleen: exactly.
Thomas: But my key changes all the time, right? So then I go to, like, version four. It’s sort of like single static assignment, in a way, where we’re taking a variable that might change, but each version only ever gets assigned once. I’m impressed that I was able to get that right too.
So, okay. I see how, if you do something like that, you can come up with a more space-efficient way of doing append-only, right? Because you don’t have to track changes to things anymore. Yeah. Okay. That makes perfect sense.
Jasleen: Yep.
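A sketch of the SEEMless-style versioned entries being described: each key update inserts a fresh (user, version) label instead of mutating anything, which is exactly the single-static-assignment flavor Thomas mentions. The label encoding below is an illustrative assumption, not the paper’s or akd’s actual encoding.

```rust
// Each key update becomes a brand-new leaf labeled (user, version);
// old leaves are never mutated. Encoding here is illustrative.
fn leaf_label(user_id: &str, version: u32) -> [u8; 32] {
    let mut hasher = blake3::Hasher::new();
    hasher.update(user_id.as_bytes());
    hasher.update(&version.to_le_bytes());
    *hasher.finalize().as_bytes()
}

// A client at version 3 checks that (user, 3) is present and (user, 4)
// is absent; after rotating, (user, 3) must *still* be present, just
// superseded, or something is wrong.
```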
Kevin: Another way to look at this: we’re all just talking about Merkle trees here, but there are two kinds of Merkle trees. There are these dense Merkle trees, or some people call them chronological Merkle trees, where essentially, when you want to add a new element to the tree, you just do this rightmost thing; you just keep appending on the right.
And then there’s this notion of sparse Merkle trees, or, like, ordered leaves, I guess, or
Jasleen: Lexicographic.
Kevin: Yeah. Also like the Merkle-Patricia tries, I think, which is the name Ethereum gave them, where essentially the position in the tree is determined by something.
It’s not just always adding to the right most node, and that’s the kind of structure that we’re using when we’re talking about SEEMless and Parakeet. We’re using the sparse Merkle tree variant.
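The placement rule Kevin describes can be made concrete: in a sparse Merkle tree, the bits of the (hashed) label pick the leaf’s position, rather than always appending at the rightmost slot. A minimal sketch, with depth and bit order as assumptions:

```rust
// Walk from the root: each bit of the label chooses left (false) or
// right (true). Depth and bit order are illustrative assumptions.
fn path_bits(label: &[u8; 32], depth: usize) -> Vec<bool> {
    (0..depth)
        .map(|i| (label[i / 8] >> (7 - (i % 8))) & 1 == 1)
        .collect()
}
```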
Deirdre: Cool. Okay. And then tell me a little bit about Parakeet, which seems to be the thing you basically tried to implement. And tell me a little bit about compaction, because I didn’t quite fully parse how you’re able to achieve compaction to make the structure even smaller.
Jasleen: So, actually, since you used the phrase ‘what you tried to implement’, to be clear: the first thing we ever tried to implement was just to re-implement SEEMless, to see if we could get it in Rust and be really efficient. And when we did that, we realized several things. Overall it was a nice construction, and there’s a lot of optimization with regard to, if you’re storing state and the state changes, how you can compress that to keep a notion of: even if I only check my key after a while, I can still make sure of when the server added version number five for me,
as opposed to just ‘version number five was added at some point’. So there was a lot of optimization, but it still just wouldn’t cut it for something at the scale we were looking at. The paper’s experiments, for example, are at the scale of tens of millions of users, which is nice, but you know, it’s,
Deirdre: That’s so cute! Tens of millions of users, how cute!
Jasleen: Exactly. It’s a little bit quaint. So anyway, when we first tried to implement this, we realized it wasn’t going to scale. And so we started modifying various components, measuring how well it was doing, finding the bottlenecks, and optimizing.
So that’s the motivation for this. And, by the way, we’ll talk later about how this still leaves a lot of open problems to solve, which is why this is an interesting area to work in. But maybe I can give one example.
So if you wanna do, say, a really large-scale deployment of a server-side protocol, you need some amount of robust infrastructure that can respond to client requests.
You can’t have one server crash and everything falls apart, right? And SEEMless has this data structure that’s fairly large, and the more users you add, the larger it gets. You can’t download the whole data structure onto one machine or one server, even if it’s a very large server.
You don’t wanna do that, and you wanna be able to respond to requests. So one of the things we solved in Parakeet is basically: how can you code up and design the data structures so that you can download only partial components of these large Merkle tree structures, and still operate on them, respond to queries, do updates, and so on,
Deirdre: Interesting.
Jasleen: and basically have separate storage layer that’s, you know, nice backup of everything so that even if all your servers fail, that are serving clients, you can still run them back up and spin them back up, without having problems with, oh, we have problems with consistency with our new servers now, or something like that.
So that’s one problem, for example, that Parakeet addresses. Now, I mentioned that the SEEMless data structure maintained a history of everything, ever. And the reason for this makes sense: what they’re going for is a very ideal-case scenario, which is, suppose I go offline for 10 years, and I come back and open my cell phone that is 10 years old.
I should still be able to check what happened to my keys back in the day, or over that whole time. But if you wanted to support an arbitrary amount of downtime on the client, you’d have to store everything forever, right?
Kevin: Say like six months, not 10 years,
Jasleen: Yeah. Even,
Kevin: six months of being offline, for
Jasleen: yeah.
Something like that, yeah. I mean, honestly, I think six months, or something like that, is very well supported in practice in our implementation. But if you wanna support something arbitrary, you have no choice but to store everything forever. Now, if you don’t wanna store everything forever: CONIKS didn’t necessarily wanna store everything forever, and so everything was constantly mutating. In that case you’re basically saying anything can be modified or deleted at any time, right?
Is there a middle ground between these two? Can you say: what if we give users some grace period, and say that after a certain amount of time, certain data is considered expired, especially if it’s not going to be used anymore? Maybe we can delete it, but let’s make sure we give you a grace period to check that everything we did was correct, and that everything marked for deletion is actually stale and valid to delete, and we’re not deleting valid values.
And that’s what this secure compaction idea in the paper is: how can you find a middle ground between completely append-only data structures that are always growing (which, by the way, it is very hard to convince a big company to let you implement: data structures that grow forever, with no recourse) and always-mutating ones?
Thomas: Is this the middle ground between having to check every epoch as the user, versus only having to know where the versions are and then check the versions? So this is not a middle ground between how much of the tree I have to check; it’s a Merkle tree, I’m just checking the path proof that I’m in the tree at that version or whatever, right? Okay. So the middle ground we’re talking about here is: the untenable thing is every user having to download hundreds and hundreds of megabytes of state and check all of it, versus getting just an index of where any version of their key ever appeared.
But the flip side of that is, if you’re doing the versioning thing, then you have to keep every version that’s ever been there; you can’t get rid of them. In the middle-ground piece we’re talking about, where you have secure compaction, do I lose security when you guys finally get rid of old information? Like, am I secure within the grace period, and then outside the grace period I’m on my own?
Jasleen: It’s also a middle ground between those, is what I would say. Suppose you’re a user who’s fairly regularly updating your keys, and your key’s at version number 10, and your first key, from when you first joined the system, was from, say, five years ago. That five-years-ago key is never actually going to be used anymore, right? And hopefully you’ve checked by now that it and all the intermediate values were updated correctly. So if the server deletes version number one for you, it’s still okay, probably. As long as you check that the only things the server is deleting for you are versions that are obsolete and old enough, it should be fine, right?
Thomas: I get it, right. The whole point of this is I’m trying to spot malicious keys that the server put in there, right? So what we’re saying is: we can cryptographically prove that whatever thing you’re deleting is really old, and I’ve been watching this whole time, right?
So unless I haven’t looked once in 10 years or whatever, I would’ve spotted the malicious key before. And all we’re saying is, there’s some rolling cutoff time, past which we assume you watched and there wasn’t another key added there. Got it. Okay.
Kevin: I just wanna clarify also: it’s not like these things will automatically be deleted in the data structure and we have no choice. It’s more that the server can choose to have some cutoff period of ‘these entries are too old’. Or, in a more practical example, say a user deletes their account and doesn’t want any WhatsApp data associated with them; how do we actually delete their existence in this tree, to honor the right to be forgotten, I guess? So it’s really about the ability of the data structure to support deletion while still having this append-only guarantee.
Thomas: That’s neat.
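The auditor-side rules for compaction that Jasleen and Kevin just walked through reduce to two checks: entries may only be marked stale once they are old enough, and only marked entries that have sat through the grace period may actually be deleted. A sketch, with the epoch arithmetic and thresholds as assumptions:

```rust
// Only sufficiently old entries may be marked stale.
fn may_mark_stale(entry_epoch: u64, current_epoch: u64, cutoff: u64) -> bool {
    current_epoch.saturating_sub(entry_epoch) >= cutoff
}

// Only entries already marked, whose grace period has elapsed, may be
// deleted; anything else an auditor flags as an invalid transition.
fn may_delete(marked_at_epoch: Option<u64>, current_epoch: u64, grace: u64) -> bool {
    matches!(marked_at_epoch, Some(m) if current_epoch.saturating_sub(m) >= grace)
}
```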
Deirdre: Good. Now tell me about the audit proofs this also supports. Are the audit proofs proofs that your transitions, your updates to the directory (the AKD, as you call it) are valid? Does that help with being able to delete things that are old, or have expired, or that you explicitly told the directory to delete, so that you can say: yeah, we may have deleted stuff, but this was a valid transition from a previous state you can’t see anymore to the current state, or a series of states? And I’m very curious about how one audits the directory, because, I don’t know, maybe you gave us some software, maybe a whole bunch of third parties will audit the WhatsApp key directory, the key transparency log. If an auditor is checking the audit proofs, does that auditor have a very large and ever-growing data structure too?
How does that work?
Jasleen: Kevin, do you want me to take this?
Kevin: Yeah, go ahead.
Jasleen: So maybe let’s step back a bit and forget about the compaction and deletion steps for now, and let’s think about, and this is maybe even stepping back to the SEEMless work, where the idea is: if you have an append-only data structure, who is checking that it’s append-only? Because the clients are only checking their own keys. So how do you save them from having to do all this extra work of checking the data structure? The only thing being checked is that the only changes to the tree are insertions. And that’s what the auditors are checking, in this plain model.
And one thing to note, actually: since in both SEEMless and Parakeet the auditors can be anybody, we don’t want them to get any actual user information at all. So they don’t learn anything about, say, how the version numbers are incrementing for a particular user. They’re only checking that this tree is only growing through insertions and no other changes are being made, in this basic version. And then, when it comes to deletion, they check that only valid values, meaning values older than some cutoff point, are marked for deletion, and then, after some grace period, that only marked values are actually deleted.
So that’s what the auditors are doing. And then, with respect to a particular user’s changes, such as: is the key marked for deletion for me as a user actually valid for deletion? Say I don’t change my key that frequently; then my most recent key should not be deleted, right? That all still falls on the user, but it’s relatively little work for the user, basically.
Deirdre: Okay, on the client, the user agent or
Jasleen: Yes, exactly.
Deirdre: Cool. All right.
Kevin: And also, sorry, just to clarify: it’s not the case that the auditors themselves have to maintain their own data structure and keep updating it as we go along with each new epoch. It’s more that each audit proof is a self-contained thing: if you wanna check, say, from epoch 700 to epoch 701, was it just a set of appends that were made? Here’s the proof.
Deirdre: Ah,
Jasleen: And as long as you know what the hashes of the tree are at the beginning and at the end, you can just check.
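A sketch of the shape of that self-contained check: given the published roots of two consecutive epochs plus the proof material, the auditor replays the claimed insertions and confirms they account for the entire root change. The proof format and the `verify_insertion` helper are hypothetical stand-ins for the real Merkle (non-)membership machinery, which is elided here.

```rust
// Hypothetical audit-proof shape: a list of inserted (label, value-hash)
// pairs, with the Merkle path material elided from this sketch.
struct AuditProof {
    inserted: Vec<([u8; 32], [u8; 32])>,
}

// Stub for the real check: prove `label` was absent under `root`, replay
// the insertion, and return the recomputed root.
fn verify_insertion(_root: [u8; 32], _label: &[u8; 32], _value: &[u8; 32]) -> Option<[u8; 32]> {
    unimplemented!("Merkle non-membership + insertion replay elided")
}

fn audit_epoch(prev_root: [u8; 32], next_root: [u8; 32], proof: &AuditProof) -> bool {
    let mut root = prev_root;
    for (label, value) in &proof.inserted {
        match verify_insertion(root, label, value) {
            Some(new_root) => root = new_root,
            None => return false, // not a pure insertion: reject
        }
    }
    // The replayed insertions must explain the entire epoch transition.
    root == next_root
}
```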
Deirdre: All right, cool. Cool, cool, cool. So you can basically pick and choose your ranges of epochs and verify your proof, and you don’t have to keep pulling audit proofs and checking them continually. I mean, you can, but you can also discard them, because you can just select your range.
That’s nice.
Jasleen: So the assumption, actually, the really nice cryptographic thing, is that you only need one honest auditor per epoch. So we could be, like, a group of four friends here on this call, and basically round-robin who’s checking every month.
And as long as we trust everybody on this podcast episode, we should be fine.
Deirdre: Cool.
Jasleen: So that means you don’t actually even have to check everything. You can, as you said, pick and choose what you check, and as long as there’s one honest auditor checking every single epoch, that’s basically enough.
Deirdre: That’s really nice. I read your blog post, which is what flagged this whole effort for me, and it mentions third-party auditors for the WhatsApp-maintained key directory service. Is auditing built in, or going to be built into, WhatsApp clients? Or is this yet another third party, not controlled by WhatsApp engineering, I guess?
Kevin: This would be yet another third party, not controlled by WhatsApp engineering. That’s the hope, at least. What we wanna do is publish these audit proofs to the public, so that anyone can download them and verify that we are doing what we say: not maliciously mutating the log, only making these append-only modifications.
So they’re not something that would actually be exposed to clients. There are a lot of open questions, actually, about how to set up that kind of public auditing proof and how we can tie it back to clients. Because, as you might guess, this is not the ideal scenario: if you’re just a regular user of WhatsApp, you kind of have to trust that the auditing was done. There could be some ways to improve this, and that’s definitely something we’re looking forward to seeing future work improve.
Thomas: So the most obvious comparison here is, again, to certificate transparency, right? And in the CT logs, misissuance is rare. It doesn’t happen a lot, and it’s a big deal when it does happen. But it can happen, right? It’s not like the world has ended if it happens; it’s just, maybe a CA has ended if it happens.
But my impression of this system is that misissuance of a WhatsApp key can never happen, right? The premise of this system is that if there’s an exception in one of these audit proofs, it seems like the world has ended. It’d be hard to explain, right?
My assumption here is that there’s a continuous feed of audit proofs showing the ledger is going the way the ledger is supposed to go, right? But it’s not like you’re monitoring them and keeping track of what’s being issued there. You’re just making sure the whole log is consistent.
Kevin: exactly.
Thomas: It’s very boring, and there’s very little to look at. Unless, well, I assume you guys would just take the service down before you’d publish an exception.
Kevin: Yeah, yeah. That’s the hope. And I guess in that case, people would see the fact that we’re not publishing any more audit proofs as a signal that something is happening, if we had that point of leverage. But also, to echo your point, these audit proofs necessarily need to be super boring, because unlike certificate transparency, they can’t actually contain individual users’ phone numbers and public keys, cuz that would just be a horrible privacy nightmare.
These audit proofs are meant to reveal nothing about users’ phone numbers, or what’s being added and when, except for the fact that entries are being included in the next epoch. So they’re really just these blind values that auditors are verifying are being added to the tree, without knowing what the values actually are.
Deirdre: Mm-hmm. Cool. I could see a bot just pulling audit proofs and verifying them, almost like a warrant canary, but just sort of: yep, day 500,000 of the WhatsApp key directory, everything verifies.
Kevin: As long as that bot is honest, yeah.
Deirdre: yeah, yeah,
Jasleen: I mean, if you wanted, you could even put it in a smart contract, right? And have someone feed the smart contract.
Deirdre: There you go. Awesome.
Okay. So you basically started with SEEMless and tried to make SEEMless work at WhatsApp scale. You got compaction, and you’ve got these audit proofs working to check that you’re doing compaction correctly, because it was too much data, ETOOMUCHDATA, at WhatsApp scale.
Are there any other, well, there are probably several, but were there any other WhatsApp-scale issues that you ran into trying to make this work? I was reading the little description of, like: we have a queue; we had to batch changes into epochs, because we can’t have an epoch per key change or key add. Stuff like that.
Kevin: Right, right, right. Yeah. So first, just to clarify, since there’s an obvious potential confusion between SEEMless, Parakeet, and the implementation: Parakeet describes compaction, but that’s not actually included in the implementation. We consider it more of a research idea; there are various trade-offs to adding compaction, and for the implementation we have, the open source one, we don’t include compaction as a feature.
Deirdre: Oh, okay. Cool. All right.
Kevin: Maybe in the future we will, but at the time we wrote it, we were like: eh, this is more of a research idea that we’ll keep, but it’s not yet ready for production.
Deirdre: All
right, cool.
Jasleen: I will mention that we did some back-of-the-envelope calculations to see how reasonable it was to expect to store this data for an extended period of time. And we have a fairly large buffer until we even need to think about implementing this, probably.
Deirdre: All right. Cool. Nice.
Kevin: Yeah. And then to go back to, I guess, the various challenges. One, which I think Jasleen touched on a little bit, but I’ll expand on, is that when we were setting out to implement key transparency, to deploy something like it, we were looking at existing papers and existing implementations. One thing that was really difficult was that all of the existing papers at the time would have benchmarks, but the benchmarks would be for, say, 10 million nodes, and it would just be one computer pulling all those 10 million nodes into memory and then mutating, doing adds, and running experiments.
Whereas in a real system, especially when you have more than 10 million nodes, you need to store those nodes somewhere, and they have to persist. And this really affects the benchmarks, because now the dominating time is not how long it takes to do your hashing; the dominating time is how long it takes to pull from storage,
Deirdre: Yep. And write back to storage.
Kevin: Exactly, and for each layer of the tree. So that’s one of the challenges we had to address, and which we’re hoping, by open sourcing this implementation, to say: hey, here’s one way in which we’ve addressed this; here’s a storage layer. That’s one of the things we’re hoping people can use. If there are other encrypted messaging apps looking at key transparency, we have a trait for the storage layer, so that you can choose whatever
Deirdre: yay.
Kevin: Like, there’s that aspect. And then, even with that storage layer, we were dealing with this issue of: okay, imagine you do a tree traversal and you start at the root. First you fetch the root from storage; oh, it had two children; you fetch the next layer. And these trees were, like, depth 32 or something. So that ended up being a super, super slow operation for any kind of tree traversal when we first experimented with this. So we built in a bunch of optimizations for batching, essentially pulling in large components of the tree all at once, so that we could do all these mutations without having to keep reading from and writing back to storage.
And that’s another little hack, I guess, that we have in our implementation, which ended up being really critical for the deployment we have at WhatsApp.
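A sketch of the batching idea Kevin describes: instead of one storage round-trip per node (root, then its children, then grandchildren, down a depth-32 tree), collect the whole frontier at each level and fetch it in a single call. The types mirror the hypothetical storage interface sketched earlier, not akd’s real one.

```rust
// One storage round-trip per *level* instead of per node; with a
// depth-32 tree this caps a preload at roughly 32 batched reads.
fn preload_subtree(
    storage: &dyn DirectoryStorage,
    root: NodeLabel,
    max_levels: usize,
) -> Result<Vec<TreeNode>, StorageError> {
    let mut loaded = Vec::new();
    let mut frontier = vec![root];
    for _ in 0..max_levels {
        if frontier.is_empty() {
            break;
        }
        // Fetch the entire frontier in one batched call.
        let nodes = storage.get_nodes(&frontier)?;
        frontier = nodes.iter().flat_map(|n| n.children.clone()).collect();
        loaded.extend(nodes);
    }
    Ok(loaded)
}
```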
And then also, you mentioned the queue aspect of things. That’s something that’s kind of outside the realm of the open source implementation.
But I will say that, for any kind of infrastructure where essentially what we’re saying is: hey, people are uploading their public keys and we want to have some public trace of them that people can verify; if there are any issues with our infra, they immediately reflect to the public.
And so we had to be really, really careful with the reliability of this, that we’re not losing any keys. Cuz if we lose a key, then it looks like we’re malicious; it looks like we purposely dropped the key, as opposed to accidentally losing it.
Deirdre: Do you do anything on that upload from a WhatsApp client that’s issuing a new key, saying: here’s my new one, I either just did a full-on rotate, or I’m logging in on a new device as this phone number? Do you do any checking beyond: cool, this connection is authenticated in some way, I trust you, it’s not just a random phone and a random phone number uploading a new key to WhatsApp? I guess I would love a daisy chain between an old key and a new key. If it’s the same device, that’s easy to do. And if it’s a new device, you can do this with encrypted messages, when you’re porting to or logging in on a new device, by doing a little handshake between devices, to build a chain of trust across a set of devices. But also, people lose their devices. So I guess the answer is probably that you can’t really do that? When someone says, I am phone number such-and-such, and here is a new key, either my single new key or a key for an additional device, you’re just sort of like: cool, thanks. You just keep a confidential, authenticated channel up to WhatsApp from the WhatsApp client, but that’s kind of all you can do? Is that right?
Kevin: Yep. That’s pretty much the case. And I think what you’re touching on is a really interesting point, which is: can we have these signature chains? Your old public key signs your new public key, and if it’s not able to sign, then it’s not a valid public key, kind of thing.
And that would be great. But you also mentioned the exact downside, which is: what happens when you lose all your devices and you wanna say, hey, this is me again? At that point, you either have to say, okay, forget the whole signature chain thing, just trust this key, please.
But then that also, seemingly inevitably, allows the server to do the same thing if it wanted to launch an attack. So we don’t do that, kind of because of that reasoning. But it would be interesting: is there a way to get around this? Because I totally agree. Right now, a lot of this is focused on the transparency of it, not necessarily on ‘is this the correct key?’. We’re more about: okay, you say this is the correct key; now we’re gonna show everyone, and everyone can verify that. Having more measures to say, this is the correct key,
this came from the correct user, that would be great, but it’s not something we tackle.
Deirdre: Yeah.
Jasleen: Yeah. I think this is also related to a deeper, interesting question for cryptographers slash applied cryptographers, which is: how do you identify people on the internet? Because any sort of authentication channel, the way they stand now, will have drawbacks. Until we can get to a very serious biometric type of authentication, maybe, you basically always have some sort of drawback.
Deirdre: Mm-hmm. And what if I lose my fingerprints? And I can’t rotate my retina scan. So,
Jasleen: And yeah, you know, when people age, for example, the way you look might change, or,
Deirdre: exactly.
Jasleen: yeah.
Thomas: So I guess I have some simple questions about what the actual cryptography looks like. My first one being: are there any polynomials involved in this?
Kevin: No, there are not. It’s actually funny, because I feel like maybe in the future there could be; there are still some unsolved problems in this space. If by polynomials you mean polynomial commitments, that would be an interesting direction to explore, I would say.
But no, at the moment, the primitives this implementation uses are hash functions; we use Blake3, I think, or you can switch it out for whatever. And we also use VRFs, which stands for verifiable random functions. This is what gives us the whole privacy thing where we don’t want to actually reveal to people what the phone numbers are; we show them the VRF of a phone number instead. This is meant to be a pseudorandom output, and you can verify that it’s associated with the phone number if you have the phone number; but if you’re not that user and you don’t have the phone number, you wouldn’t be able to.
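A sketch of how the VRF gives that privacy property: the directory is keyed by VRF outputs rather than raw phone numbers, and the server hands a client the output plus a proof that it really corresponds to the queried number. The trait below is a hypothetical interface; the open-source akd ships its own concrete ECVRF implementation.

```rust
// Hypothetical VRF interface; these types only sketch the shape of
// the real thing.
struct VrfProof(Vec<u8>);
struct VrfPublicKey(Vec<u8>);

trait Vrf {
    // Server side: deterministically map a phone number to a tree label,
    // plus a proof of correct evaluation.
    fn evaluate(&self, phone_number: &[u8]) -> ([u8; 32], VrfProof);

    // Client side: anyone who *knows* the phone number can check the
    // label; anyone who doesn't learns nothing from the label itself.
    fn verify(
        pk: &VrfPublicKey,
        phone_number: &[u8],
        label: &[u8; 32],
        proof: &VrfProof,
    ) -> bool;
}
```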
Deirdre: Spoiler, Thomas: there is a polynomial under the hood of that VRF; it is an elliptic curve. Ha ha ha.
Thomas: That that doesn’t count.
Deirdre: Okay, fine.
Thomas: As long as it’s of low degree, I’m fine.
Deirdre: There you go. Excellent. Okay. That’s really nice. I know you already said that your constraints were not really computational, and not even memory; it was literally: we need to be able to write this down to disk and read it from disk, because we need persistence, and basically some sort of distributed storage at some point.
And so you’re like: we don’t even need to care about elliptic-curve this or fancy-crypto-computation that, because we have to write to disk, and that’s going to eat our performance benchmarks anyway. But if you did care, you’ve got mostly hash functions and a little bit of ed25519 VRF happening, and those are pretty efficient.
So that’s nice for scale. Cool. Okay. So what’s next? I know this is not fully rolled out yet, and you’ve already mentioned that compaction is not deployed; it’s an idea you’d like to look at going forward. What other things are you interested in trying to roll out with key transparency, or at least what research directions are you looking at?
Kevin: There’s a lot. So I guess first: the blog post we put out was basically an announcement that we are doing this. One of the problems we have to face is that when we do eventually want to say key transparency is rolled out, we need to have indexed everyone’s keys. And this is billions of users, but also every key update they’ve ever done. So this tree just keeps growing, and there’s kind of a process of loading all that into our existing structure so that we have it ready.
Deirdre: Do you already have that information somewhere? Do you have the logs of every key that has ever been used?
Kevin: No. That’s not something we’ve recorded in the past. It’s more that, with this tree, we’re going to start recording it.
Deirdre: Start. Okay. So you just sort of start at day zero, or whatever, of the key transparency directory, and from then on you’ll have the log of every new key, or whatever.
Kevin: Yeah.
Thomas: If I understand this right, starting on day zero, any key a WhatsApp user sees, once the client enables this or whatever, is gonna have to be recorded in a log. Otherwise it’s an exception. Right?
Kevin: Yeah, exactly. Exactly. So that’s one of the challenges we’re currently still working on before we can say we’re done. And we’re also working on what the user interface of this will actually look like: what do users see when they wanna verify each other? Right now you can go scan the QR code, but we wanna show some sort of menu that says: hey, we have some automatic verification happening, and if it fails, then okay, something with our automatic system failed, please go verify the QR code. So that we’re still working on, but we’re hoping to have it soon.
And then in terms of future directions, maybe on the research side, the academic side, the repository: one kind of big challenge for us is these audit proofs. Not only the fact that we have to publish them and figure out who is gonna be auditing them, but also that the audit proofs are, like, ginormous.
Deirdre: Oh.
Kevin: And part of this is actually because of a limitation of sparse Merkle trees. In certificate transparency, when you use a chronological Merkle tree, you’re always appending on the right.
So when you wanna do these audit proofs, or update proofs, you can basically take all the updates that happened, and that’s just one little triangle on the right side of your tree. And you can give a very succinct proof no matter how many updates there are; like, logarithmic in the number of updates.
It’s very different for a sparse Merkle tree, because now the leaves are not all being appended on the right; they’re in random places. So your audit proof, at least our audit proofs, are one by one: okay, this element was added; okay, this element was added. If you add that up, it’s a Merkle inclusion proof for each element that was added, per epoch; it’s megabytes. That is something we would love to figure out a way to reduce, and it could involve a completely different construction, or just a modification of how we’re doing audit proofs today. Maybe you throw a SNARK at it. I don’t know.
We’d love ideas there.
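For a rough sense of why these proofs get big, a back-of-the-envelope under assumed parameters: one naive inclusion proof per inserted leaf, depth-32 paths, 32-byte hashes, and the roughly hundred-thousand-entry epochs Kevin mentions below:

```latex
\[
  10^{5}\ \text{leaves} \times 32\ \text{siblings} \times 32\ \text{bytes/hash}
  \approx 10^{8}\ \text{bytes} \approx 100\ \text{MB per epoch}
\]
```

Shared path prefixes between inserted leaves can be deduplicated, which is presumably how real proofs land closer to the megabytes Kevin quotes; but the per-element structure is still the thing a SNARK or a different construction would aim to collapse.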
Deirdre: I mean,
Jasleen: And then polynomials will be right there, waiting.
Thomas: I was just gonna say it. Yeah.
Deirdre: And pairings. Or, well, you don’t need pairings; we have some pairing-free SNARKs now. But sure.
Thomas: But by the way, how long is an epoch?
Kevin: Um, right now, I wanna say five minutes, or 10 minutes. So, five or 10 minutes, and it ends up being about a hundred thousand new entries every epoch, or somewhere around that order of magnitude. Yeah.
Deirdre: And that’s like a tick-tock. So if, for example, all the devices were asleep for that five minutes in the whole WhatsApp universe, it might be: all right, the next epoch had one key update in it. Okay. All right.
Thomas: And so sort of arbitrary third parties are doing the audits here. They’re witnessing what you guys are doing, right? Is that the idea?
Kevin: That is the hope. I would say we don’t quite have that set up yet. There’s kind of a chicken-and-egg problem here, where we would love to have the auditors, but in order to have auditors, we need to have something to audit. So we’re going with: here’s the thing to audit. And then it would be great if we could build an ecosystem of auditors, of people who are interested in it.
Jasleen: Or even companies that are.
Thomas: And then, effectively, the way you’re describing how the proof works, the auditors can also essentially count the number of keys in WhatsApp.
Kevin: Yes. So what is leaked by these proofs is the size of each epoch: how many new updates were added to the tree. So this isn’t exactly the number of keys; it’s the number of times some user has updated their key within some period of time. Yeah.
Deirdre: Is it some user, or is it all users?
Kevin: You add up all the users. So for this five-minute period, add up all the people who updated their keys on WhatsApp; that number will be leaked by the audit proofs, because the size of the audit proof reflects it.
Deirdre: So it could be one person trying over and over and over again to update their key, or it could be that number of people each doing it once, or
Kevin: right,
Thomas: If there was anything sensitive about that data set, it could be the provider just deliberately adding entries that no one’s ever gonna object to, because it just wants the number to be weird. So, CONIKS, in the same kind of setting, right: CONIKS leaks potentially the number of key changes of, like, your sibling node.
Jasleen: Yes.
Thomas: Yeah. You guys call that a tracing attack, and I think I have my head around it, but I can’t, like, pick a user at random to do that to.
Jasleen: So basically, the way you would think of this is: there is one leaf node in the Merkle tree attached to a user, and once it’s fixed, it’s fixed forever. So now, if Deirdre blocked me and didn’t want me to know what key changes she was making anymore, but I already know where Deirdre’s key was getting logged, and I, for example, share a certain subtree with Deirdre, then if parts of that subtree change, I can know that it was possibly you that changed your key.
Thomas: That makes sense. So it’s not the end of the world, but it’s still not something you wanna leak.
Jasleen: Yeah. And the way I think of it, at least, is that when you’re talking about actually secure cryptography, you want to at least be able to prove exactly what you leak. And in this case, you can’t exactly prove that, and in the worst case, you leak every single change.
Thomas: Gotcha. I have one other question about this whole system, right? I kind of wanna put my annoying Hacker News commenter hat on. I’m imagining myself talking to somebody on Hacker News, saying WhatsApp is actually a really strong secure messenger, and they’re gonna come back and say: it’s WhatsApp, it’s controlled by a huge whatever.
I’m making fun of an actual valid point, but whatever. So, you have this whole system built, and now we have a system where, ostensibly, if I wanna know who I’m talking to, and know that WhatsApp hasn’t interposed another key, that specific attack, I can just go literally query and see if that key is the key we expected to be talking with, right? Which is great. But that still sort of depends on me trusting the client, obviously. And the paper basically says that directly: you can’t really directly solve that problem. But if I wanted to, is it possible to build systems alongside the WhatsApp client to do these checks?
Like, if I didn’t wanna trust the client to actually do the check, but rather periodically check on my own with just a piece of software. Is your key directory exposed to clients?
Kevin: Yeah, that’s a really interesting idea. So, no; at the moment it’s not, because we don’t want anybody to be able to look up anyone’s keys, since then there might be scraping attacks you could do, at the very least. So there is still, inherently, trust in the client to at least be doing these verification checks.
But yeah, I guess a lot of the goal here is to really move away from trusting the server to trusting the client. And there are certainly a lot of issues with, you know, what happens if you get some sort of malicious binary, a fake WhatsApp app that isn’t actually doing these key transparency checks.
But it would be great to solve that in general, because what happens if you get a fake WhatsApp app that isn’t doing end-to-end encryption, or isn’t doing Diffie-Hellman? It would be great to have a good story around, like, binary transparency almost, or some way to check that you have the real WhatsApp.
Deirdre: So when the WhatsApp client is doing its normal thing, and its peer is saying, here’s a new key, or you’re connecting to someone for the first time or something like that: is the client doing a lookup request, getting a lookup proof from the directory, and then all it’s doing is saying yes or no, basically?
Kevin: Yeah, exactly. So the client asks for a proof from the directory that WhatsApp’s servers manage, it gets the proof, and the client is supposed to verify these, like, Merkle paths.
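A minimal sketch of the Merkle path check a client runs on such a lookup proof: hash the leaf, fold in each sibling according to the path direction, and compare the result against the published epoch root. Hash layout and domain separation are illustrative assumptions, not akd’s actual node-hashing scheme.

```rust
// Verify a leaf against a published root. Each path step supplies the
// sibling hash and whether our running hash sits on the right there.
fn verify_lookup_path(
    leaf_bytes: &[u8],
    path: &[([u8; 32], bool)], // (sibling hash, current node is right child)
    expected_root: &[u8; 32],
) -> bool {
    let mut acc = *blake3::hash(leaf_bytes).as_bytes();
    for (sibling, acc_is_right) in path {
        let mut hasher = blake3::Hasher::new();
        if *acc_is_right {
            hasher.update(sibling);
            hasher.update(&acc);
        } else {
            hasher.update(&acc);
            hasher.update(sibling);
        }
        acc = *hasher.finalize().as_bytes();
    }
    &acc == expected_root
}
```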
Deirdre: Right. Okay. So in theory, a WhatsApp client could just go: look up, look up, look up, look up. It could try as many things as it could possibly compute, just asking for lookups: is this included, does this make sense? But you have to be authenticated; WhatsApp already has to know who you are before you’re able to make that request.
Kevin: Yeah, you could also just type random phone numbers into the app and see what you get.
Deirdre: Yeah. Okay.
Kevin: That kind of already is a problem we have to deal with.
Deirdre: Yeah. But you can rate limit, and you can, yeah. Okay. All right. Cool.
Thomas, do we have anything else?
Thomas: No, my brain hurts already. This is very cool stuff, right? So you can imagine other messaging systems building something like this. I don’t think you guys are gonna do this, but you can imagine running a key directory for multiple messaging services as well, which also sort of neatly solves the
Jasleen: Well, I’m unaffiliated, so I could maybe do it.
Deirdre: Yes.
Kevin: Yeah, I mean, I think that would be great, right? That’s kind of one of the hopes we have with doing this. First of all, with this launch of key transparency, we’re almost certain there are gonna be improvements to it; this is kind of our first attempt at what we want to provide, and better designs can definitely come along in the future.
And actually, that would be really great, I think, not just for us, but for the community. And the second motivation is, it’d be great if this was kind of a thing that everyone did, not just WhatsApp, because the problems we’re trying to address are not specific to WhatsApp in any way. They apply to all end-to-end encrypted messaging apps.
Deirdre: And we just did an episode on Messaging Layer Security, and that has all these same problems. Every client, every device, every user has a public key, and they rotate. You can do stuff with the continuous key agreement, but transparency is just completely out of scope for them, a new end-to-end messaging IETF standard. If this solution at scale works for anyone trying to deploy that, all the better for users. It’s kind of interesting, because your code base is very small. It’s just a nice pile of Rust, and it’s well documented.
And I enjoy that you put in something called a tombstone value when something has been removed. I wanna make all these jokes about tombstone piledrivers, but that’s just me. I think that’s very adorable. If it works at WhatsApp scale, it’ll probably work for basically anyone else.
And it doesn’t look like it’s hugely complex and complicated and heavy, even functioning at WhatsApp scale, which is lovely. You see all these systems that are like: well, to operate at, I don’t know, Google scale, you need Stubby and you need Spanner and you need this and you need that. And it’s like, no, this will work at two and a half billion users, and however many billions of keys rotating per day, and it doesn’t need to be heavyweight and huge and complex to do it.
Jasleen: I mean, just so we’re clear, we did abstract away the storage layer entirely in the way this code is written, which is also nice, I think, because now anybody can technically use this open source code if they want to. Also, by the way, I would like to mention a couple of other related open questions.
So, my friend Julia Len, who you should have on here, by the way, recently gave a talk at my university about some work they also presented at RWC a couple months ago, or I guess earlier this month. They basically talk about the new regulation for interoperability in the EU.
And so key transparency in that context is one interesting research question. Another thing I wanna mention that I think we haven’t really touched on: we talked about certificate transparency, we talked about any sort of star-transparency, but one thing we didn’t talk about is the FTX debacle recently.
And actually, proofs of solvency are a kind of transparency solution too, for these cryptocurrency settings, where you’re asking: do the accounts of this centralized exchange have enough liquidity to support the users they’re claiming to support?
And in some sense it’s a very analogous problem, right? I think the nice thing is that if this works at WhatsApp scale for key transparency, this very related problem might also work at scale for proofs of solvency. Yes, those are more complex statements, about amounts of money and so on,
and not just about equality, that this public key I believe is my public key is what you believe it is,
Deirdre: But it’s not at billions and billions.
Jasleen: Yeah. So I think that’s also something interesting to think about, for anybody interested in this type of transparency problem: not only is it good for users in this encrypted messaging setting, but there are all kinds of dimensions where you can generalize this problem that still require a lot of exploration before they can be deployed at scale.
Deirdre: Yeah. That is very cool. If someone wants a research problem, just ask Jasleen; she’ll give you research problems. Awesome. I’m looking forward to the updates to the WhatsApp security whitepaper, with more details as this gets deployed and rolled out. I have full confidence that the UX solutions will just, they’ll be fine.
In my opinion, they’re not a big deal, but I know I am not a UX person. They’ll do a good job, I bet.
Thomas: famously not a hard UX problem.
Deirdre: You know? Yeah. If you already have the compare-your-QR-codes flow, you just tweak it a little bit to be: maybe something bad happened; no, really, you should compare your QR codes.
Kevin: yeah.
Jasleen: Are you sure you’re not a UI person?
Deirdre: Once upon a time I wrote a bunch of JavaScript. I do not do that anymore. Thomas, anything else?
Thomas: No, this has been awesome. Thank you guys so much for doing this.
Deirdre: Kevin, Jasleen, thank you very much for coming on.
Kevin: Thank you for having us. This is
Deirdre: Awesome.
Jasleen: Thank you so much for having us.
Deirdre: Totally.
David: Security Cryptography Whatever is a side project from Deirdre Connolly, Thomas Ptacek and David Adrian. Our editor is Netty Smith. You can find the podcast on Twitter @scwpod, and the hosts on Twitter @durumcrustulum, @tqbf, and @davidcadrian. You can buy merchandise at merch.securitycryptographywhatever.com Thank you for listening.