Hertzbleed


Side channels! Frequency scaling! Key encapsulation, oh my! We’re talking about the new Hertzbleed paper, but also cryptography conferences, ‘passkeys’, and end-to-end encrypting yer twitter.com DMs.


This rough transcript has not been edited and may have errors.

Deirdre: Are you starting or am I starting?

David: Ah, that’s a great question. Welcome to Security Cryptography Whatever. My name is David, I’m here with Deirdre, who’s, uh, who’s been having a rough day. We’ll leave it at that. Um, we are unfortunately, uh, Thomas-free this week, so you’re stuck with just the two of us, and we’re gonna do our usual game where Deirdre understands math and then I ask questions, and we will be playing the Hertzbleed version of that game

Deirdre: yeah. Uh,

David: talking about some other stuff too. So

Deirdre: yes,

David: what is Hertzbleed?

Deirdre: Hertzbleed is a new, uh, side channel attack, a physical side channel attack paper, against one of my favorite things: an implementation of SIKE, which is supersingular isogeny key encapsulation. Um, and TL;DR: if you have turbo mode activated on your processor, AKA frequency scaling, depending on how much power it’s using, it can cause, uh, variation in the processing time of your crypto.

Uh, and it varies according to the value of your secret. Therefore, if you can measure this, uh, you can extract the secret, and they were able to, uh, leverage this against, um, my favorite isogeny-based cryptosystem, which is not making it to the, uh, final round of the NIST post-quantum competition. But it is still, uh, you know, everyone’s favorite redheaded stepchild of the post-quantum crypto world.

Um,

David: uh, I feel like this is like a targeted Deirdre bug because

Deirdre: a little,

David: SIKE and then it’s an architecture, uh, side channel. Um, so your partner who works in architecture, um,

Deirdre: Oh God.

David: it’s not as problem.

Deirdre: Yeah. And I literally was like, there’s a new paper, it has to do with turbo scaling, and he’s like, I already know what the attack is, stop talking to me about it. Um, and so there is already a known attack. Uh, so the way that SIKE basically works is that instead of doing this ephemeral key exchange, kind of like elliptic curve Diffie-Hellman, but instead of exchanging points on curves, you exchange curves, but also some points on those curves, so that you can do the final thingy and get to a shared secret.

Uh, it’s kind of doing, um, it’s almost, it’s solving the answer beforehand and then encrypting with it and then giving you your key. And this is the whole KEM, this key encapsulation mechanism. So it’s

David: So they, they tweaked the API to fit into the NIST KEM, uh, format,

Deirdre: yeah. And so,

David: and addressed, if I, if I recall correctly, some of the “oh no, it’s a static key, and therefore the out-of-group attacks apply” problem.

Deirdre: Yes. Uh, with SIDH, uh, you should not use it in a static-ephemeral setting. There is a, uh, you know, active, adaptive attack that you can use against that, but SIKE turns it into something that would, uh, basically, in theory, allow you to use it in static, um, settings. So for example, if you have a long-term identity key, like in the Signal protocol, or classic Signal protocol, um, that would live for a long time, um, you would not want to use pure SIDH for that, because you could just run this attack against that static key over and over again, and you would be able to get the secret out.

In theory, you should have been able to do this with SIKE instead; the API would be different and not as pretty, but you could, in theory, do it. This attack, uh, if it works, does a thing that would let you extract the secret key from your SIKE key pair, but it’s a side channel power analysis attack.
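
To make the KEM shape concrete, here is a minimal TypeScript sketch of the generic keygen/encapsulate/decapsulate interface that SIKE and the other NIST candidates expose. The `Kem` interface and the `sikeKem` object are illustrative assumptions, not a real library API.

```ts
// Minimal sketch of the generic KEM interface; names are illustrative, not a real SIKE library.
interface Kem {
  // Receiver generates a (possibly long-lived, static) key pair.
  keygen(): { publicKey: Uint8Array; secretKey: Uint8Array };
  // Sender derives a fresh shared secret plus a ciphertext ("encapsulation")
  // using only the receiver's public key.
  encapsulate(publicKey: Uint8Array): { ciphertext: Uint8Array; sharedSecret: Uint8Array };
  // Receiver recovers the same shared secret. In SIKE this step re-encrypts and
  // compares (a Fujisaki-Okamoto-style transform), which is what makes reusing a
  // static key safe where raw SIDH is not.
  decapsulate(ciphertext: Uint8Array, secretKey: Uint8Array): Uint8Array;
}

// Usage sketch with a hypothetical implementation:
declare const sikeKem: Kem;
const receiver = sikeKem.keygen();                                     // static, reusable key pair
const { ciphertext, sharedSecret } = sikeKem.encapsulate(receiver.publicKey);
const recovered = sikeKem.decapsulate(ciphertext, receiver.secretKey); // equals sharedSecret
```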

The other thing that was, you know, novel about it is that you can do it remotely. So you would make queries, either like over some, you know, web app request on your local network. Um

David: rather than, like, break out the multimeter and plug it into the processor and

Deirdre: Yeah.

David: going on.

Deirdre: And I think, I think the way that they actually leverage that is, so we’ll, we’ll get into my quibbles about this paper, uh, next, uh, they’re able to measure it in the timing. So, kind of, I won’t get super detailed into the, the physical thing, but basically certain processors, out of the box, are configured to, uh, turn on, like, turbo mode.

Sometimes, if your compute is taking too long or taking too much power, the processor can decide to just go into, like, another gear and tweak the processor cycle rate, the frequency. Um, and this attack is leveraging that change in signal, uh, to extract information about the secret, and they get enough information, and they’re able to measure that in the timing of the computation.

So in theory, you could also measure this by, like, slapping a multimeter on the machine and measuring the power, or slapping an EM, uh, radio, uh, on the wall next to the computer. But you can measure this stuff over, like, network latency, if it’s large enough, and this is in the milliseconds, not microseconds, but milliseconds, um, that’s something you can measure over the network to a degree.

So that’s why it’s remote, and that’s, that’s kind of scary for people and stuff like that.
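
As a rough illustration of what “measure it over the network” means, here is a hedged TypeScript sketch of the kind of harness you might use to look for that signal: send the same request many times and compare latency distributions for different chosen inputs. The URL and payloads are hypothetical placeholders, and the real Hertzbleed attack needs far more samples and more careful statistics than this.

```ts
// Sketch: collect response-time samples for a chosen input against a target service.
// The endpoint and payloads are placeholders; this only shows the measurement idea.
async function sampleLatency(url: string, payload: string, n: number): Promise<number[]> {
  const samples: number[] = [];
  for (let i = 0; i < n; i++) {
    const start = performance.now();
    await fetch(url, { method: "POST", body: payload });
    samples.push(performance.now() - start); // milliseconds, the scale the paper reports
  }
  return samples;
}

const median = (xs: number[]) =>
  [...xs].sort((a, b) => a - b)[Math.floor(xs.length / 2)];

// Compare medians for two attacker-chosen ciphertexts; a consistent millisecond-level
// gap is the frequency-scaling signal being keyed off of.
const a = median(await sampleLatency("http://target.local/decap", "ciphertextA", 500));
const b = median(await sampleLatency("http://target.local/decap", "ciphertextB", 500));
console.log(`median A=${a.toFixed(2)}ms, B=${b.toFixed(2)}ms, delta=${(a - b).toFixed(2)}ms`);
```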

David: That’s one of the reasons that they’re, they’re targeting, uh, SIKE, right, is because all the post-quantum stuff is bigger and slower than where we’ve gotten elliptic curves to these days, by virtue of constantly redesigning them for performance, and then applying the Bitcoin community to performance improvements in exchange for money.

Deirdre: Yes. This is where I get into sort of like, cool, this is interesting and novel, you know, this is a cool, interesting thing. Uh, good, good job, you’re able to mount this attack. However, your target is the slowest computation in all of the post-quantum cryptosystems, with one of the largest parameter sets of that cryptosystem that is available.

This is using the P751 parameter set, that is a 751-bit prime base field, uh, with an extension field on top. So the SIDHs and the SIKEs are not just doing, you know, finite field arithmetic from zero to a 751-bit number, they’re doing quadratic extension arithmetic. So whatever the scaling number of computations that you have to do, you’re doing more. Um, that used to be a good middle-of-the-road parameter set… eh, let’s say six years ago, for SIDH and SIKE. Since then, the security analysis, cryptanalysis, uh, and everything else about SIDH and SIKE, um, at like the fundamental computational level, not this kind of protocol-level vulnerability stuff, has brought the security level of the protocol up.

And that lets us bring the parameter sets down and make the protocol faster without losing security. So they should… I asked one of the co-authors online: did you try this against the parameter set 434? So that’s a 434-bit base prime field, and that parameter set is, I don’t know, like a hundred percent faster, usually, just to do the computation, than the one that they targeted.

Um, the theory being, this would go fast enough that this turbo mode that’s on by default on these processors would not kick in as quickly, and you would not be able to get as good of a signal out of the target that you’re trying to attack as you do for the P751 that they did in their paper. They said they’re gonna go try it.

They haven’t done it yet. They haven’t done it for pure classical elliptic curves yet either; I look forward to their results.

David: Well, for, for elliptic curves, it still, I think, seems to be an open question as to, like, if it’s even possible, because of the speed. But, um, the turbo scaling: do you, uh, how does that, when does that kick in, and why does making something faster or shorter make a difference?

Deirdre: Oh, so the idea being, um, like, how would you mitigate this? Basically, if your compute is faster and more efficient, it’s less likely that your processor is going to think, uh, or learn, or use a heuristic, or whatever, like, oh shit, you’re using a ton of power, you’re taking a long time, I gotta tweak my clock cycles and, you know, change how I’m allocating my compute in order to, like, switch it over here, or, you know, reallocate it from this process over there, or whatever it’s doing.

If you’re faster and more efficient, it’s less likely to get into this side-channel-eliciting behavior. That’s the theory.

David: Versus the usual state of a processor, which is just blocked on IO or blocked on memory, um, or both, uh, blocked on JavaScript.

Deirdre: Oh, God damn JavaScript. Unless it’s, unless you’re doing this cr… like, God, if people are doing 751-bit SIKE in JavaScript, I’m not gonna give anyone ideas. Someone’s probably done it already. Uh, yeah. It’s interesting. It’s interesting. But I would like to see them try to mount it against a less vulnerable target.

How about that? Hmm, love that side

David: I’ve been thinking about, you know, Spectre a little bit recently, um, because, in my day job at Chrome,

which I’m not speaking on behalf of, um, you know, the Site Isolation feature was a big deal. Um, and the basic idea behind Site Isolation is: when Chrome launched, it was one process per tab, and everyone thought that was cool because websites and browsers used to crash all the time.

And now just the tab would crash and not the whole browser, and you would get the sad tab, uh, little face. And that was cool, and that had security benefits as well, um, in that, you know, the stuff in one tab couldn’t leak into another. But the fact of the matter remained that, like, if you somehow got code execution in a tab, in the renderer process, you could then just be like, let me load up Facebook.

And then suddenly Facebook and all of its data is in that tab, and, uh, you can get data out that way. And so Site Isolation was developed as an idea of, like, you know, let’s actually do one process per site rather than one process per tab, to mitigate some of those things. And it had a side effect with respect to Spectre: Spectre gave you, basically, if you have, like, JavaScript access, because of the timing side channels, the ability to read any other memory in the process, without a bug, simply, uh, by virtue of controlling the JavaScript, even if, like, in the normal course of execution that data was guarded by, like, if statements and so on, like, you aren’t supposed to be able to read it.
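
For flavor, the classic Spectre variant 1 “bounds check bypass” pattern being described looks roughly like the TypeScript sketch below. This is conceptual, not a working exploit; a real one also needs a high-resolution timer and careful cache manipulation, which browsers have since restricted.

```ts
// Conceptual sketch of the Spectre v1 "bounds check bypass" pattern. Not a working
// exploit: browsers clamped timers and isolated sites specifically to break this.
const data = new Uint8Array(16);          // the in-bounds array the code is "allowed" to read
const probe = new Uint8Array(256 * 4096); // one cache line per possible secret byte value

function victim(i: number) {
  if (i < data.length) {     // the branch predictor gets trained that this is true...
    const value = data[i];   // ...so an out-of-bounds i can be read speculatively
    // Touching probe leaves a cache footprint indexed by the secret value, even though
    // the architectural result of the speculative out-of-bounds read is thrown away.
    void probe[value * 4096];
  }
}

// The attacker later times reads of each probe[k * 4096] slot; the one that comes back
// fast reveals the byte that was read speculatively. That timing step is exactly what
// needs a high-resolution timer, which is why browsers coarsened theirs.
```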

Um, and so you could leak data out. And so that kind of like accelerated some of the deployment of Site Isolation, but without doing a whole history of that, of which many other people are more qualified to discuss, like the big fear with Spectre in terms of the web was like that you would take one of these side channel based exploits that gives you some form of memory read and then like, uh, weaponize and, or, you know, like one click the exploit so that it works in more situations.

And so at the time, like, there was a chance that, like, this could have been really existential, uh, to the web, and then, you know, Site Isolation more or less mitigated it; uh, that took some time. But also, on the flip side, um, like, that just didn’t happen. Like, I don’t know of anybody that used Spectre to actually exploit anything.

It’s not like something in your standard Metasploit toolkit and so

on. You don’t see like N-day Spectre exploits

Deirdre: it’s like so targeted. Uh, it’s like hard to have a generic exploit using that. I think.

David: And so, as a result, like, uh, not to say, like, it was overblown or anything, but just, like, it didn’t happen, um, and it could have been much, much worse

and, you know, every time you see one of these, there’s, like, the spectrum of, like, “this is cool,” but then I see, like, “oh, reading out secrets.” Then, like, the real trick for these is, the closer you can get to just, like, arbitrary memory read, um, the higher and higher and higher the impact is.

Um, and if you’re like writing or finding these exploits, the like goal for the mega exploit is more or less arbitrary memory read.

Deirdre: Yeah, but, like, Spectre and Meltdown had a lot of vendor coordination, back in 2017, to deploy stuff, like retpolines and all that, before it was disclosed. Right?

David: Yeah. Some of the retpolines are gone now. Like, for example, V8 in Chrome, the JavaScript engine, had some of those types of mitigations in its code generation that have since been removed, um, because of Site Isolation.

Deirdre: Awesome.

David: now, like what your operating system chooses to do about this, you know, that’s a different question.

That’s like Spectre in the web. Um, versus, uh,

Deirdre: the

David: you know, Spectre and your hypervisor, uh, which is like the same problem, different shape.

Um, I’m not sure what they’re doing there.

Deirdre: Yeah. I wonder, I would ask this to Thomas, but what do you think: do you think, like, enough mitigations getting deployed at announcement time kind of blunted the, the drive to try to exploit this? Or is it just too hard for a ’sploit kit to leverage?

David: I don’t know. I suspect the, uh, the mitigations stopped some of the, like, Heartbleed-esque exploitation, in the sense of just, like, “let’s see who can read what” type of things that happened after Heartbleed happened. Um, but I, I don’t know. Uh, I mean, you know, uh, hindsight is 20/20. Um, I don’t know how much we were… um, I think in retrospect, and I’ve heard even other people say, that, like, Site Isolation wasn’t worth it relative to Spectre. Well, you know, in my personal opinion, Site Isolation was worth it just for the UXSS improvements.

Deirdre: Yes.

David: But, um, Site Isolation’s definitely worth it, you know? Um, right. Like, had you been wrong,

had we been wrong about, uh, Spectre, had it been mass exploited, like, that would’ve been really bad, um,

an extinction level event, if you will

Deirdre: Yeah, Jesus Christ. Um, that was another vague annoyance, when I, when I saw the name and branding of this, uh, exploit, this paper, as Hertzbleed. And I was like, what? It’s not related to Heartbleed. Heartbleed was, like, memory safety, right? It was a bug, but also memory safety, that resulted in Heartbleed.

This is…

David: There were memory reads

Deirdre: There were memory reads, but, like, in a very different way. And so, yeah, that’s the “bleed” of, like, you’re leaking some information that you’re not supposed to, but in a very different manner. And it was like, Heartbleed was just like, you ask it to do something, and it’s like, here, I’m gonna read these…

David: I would— hi, I’d like 65 K of memory, please

Deirdre: Yeah. Like, and it’s like, here you go, here’s the private key of this RSA cert, or whatever the fuck. Um, but, uh, yeah. God, like, Heartbleed… my running theory is that there’s enough just dumb bugs in memory-unsafe code in the world that the exploit kits, and even a lot of adversaries that are trying to exploit you, don’t need to turn to attacks like this.

Like, there are probably people out there in the world who are like, I’m keeping this proof of concept in my back pocket just for funsies, but I think there’s enough low-hanging fruit that they can get a lot of bang for their buck without turning to microarchitectural attacks like Spectre and Meltdown, or, uh, side channel attacks that leverage physical things like Hertzbleed, or any of these other things.

They’re interesting, whiz-bang, and cool proofs of concept. But people who work with those sorts of people could probably tell me otherwise; I have a feeling that it’s like a, you know, return-on-investment sort of thing. There’s plenty of other software bugs that they could leverage instead of these

David: I mean, I think that’s even broadly true of software bugs. Um, my, my role on this podcast is to basically recapitulate other people’s opinions that I’ve talked to recently.

Um, and so I’ll say I was talking to, uh, uh, someone else recently and he was like, look, my priorities are like single sign on, malware, bunch of other stuff.

Like, zero days in memory safety bugs, and use-after-frees in software that I’m using, right? Um, and then, you know, you probably have another giant gap, and then, like, hard, hard-to-exploit side channels. And if you’re, if you’re a platform, um, right, these matter a lot more. Um, if you’re a cloud platform or an operating system, right, you need, you need to be thinking about these things ’cause you’re responsible for, like, everybody.

Um, but

uh, if you are simply, you know, using elliptic curves or, or running TLS, you probably, um, have other things to worry about.

Deirdre: Yeah. Um, I think Mike Hamburg was talking about this on a mailing list, and he works on, I think, crypto co-processor stuff or something like that. And so he both has a vested interest but also domain expertise, and was basically being like: if you can, and you care, just have a crypto processing unit, like a, you know, trusted enclave sort of dealie, or, you know, a crypto chip, in your, you know, your iPhone or your cloud server rack.

If that’s what you do. And it’s just very well constrained, very well specified, you know, performance characteristics. It does not do variable scaling. It does not do you know, all these like side channels that you really, really care about. It does not do Spectre. It does not do speculative execution. It doesn’t do all this stuff.

And then you can just trust it to either execute a very narrow set of cryptographic operations. Or you can just trust it to compute over secret data. And it’s not gonna leak its ass out and then you’re done ish,

or at

David: That’s a great way for, for doing the actual crypto operations. I would caveat it a little bit with, like, um… like, last year I was doing some work with the Keychain and Secure Enclave APIs on iOS, um, for an app, um, in the course of my day job. You know, we had some keys in there, and they were elliptic curve keys, and we were signing stuff.

And let me tell you, those APIs are like… the device might be very well specified; the software APIs for interacting with it are not. Um, and, uh, they also, in my experience, haven’t been particularly reliable, and I’ve heard this from other people, where, like, you make a Keychain call and sometimes it just doesn’t work.

Um, it doesn’t necessarily return an error code. It just, like, doesn’t work. You get, like, a 404, “you don’t have a key” or something, even though you do. Or you have, like, multiple seconds of delay asking, you know, what are the root certificates, like, Keychain, that type of stuff. Um,

Deirdre: Like, it just doesn’t return?

David: you know, you just have to try again or like you know, um,

so

Deirdre: fun.

David: All that is not to, like, specifically knock on Apple or anything, but just to say that, like, there’s two aspects of that: there’s specifying the hardware well so that it does these crypto things well in hardware, which I think we’re pretty good at; you don’t see that many, like, attacks against the hardware in Yubikeys and so on.

Um, but what you do see is, like, oh, this HSM has an API that, like, lets you do something dumb, or, like, is hard to use, or… I

Deirdre: HSMs! Someone needs to disrupt that market. They suck. Their APIs suck. I’m gonna blame FIPS for a lot of that trouble. Like, there’s iOS APIs in one bucket, and HSMs, how they behave, and their APIs, and using them, in a whole other bucket on the other side of the ocean. Uh, they could be so much better.

David: I think the, the market for, like, you know, secure enclaves in phones and so on is clearly very large. However, I would posit that the market for actual, like, HSMs that you would put in a rack is roughly what you would call a zero billion dollar market.

Deirdre: yeah.

David: so you might not, you might be waiting a while on that

Deirdre: There’s, like, two companies. And, like, the people who need them are like, they need it to be FIPS certified, they need this, they need that, whatever. Like, this was part of the reason that, uh, Zcash, uh, shielded transaction adoption was so difficult, is that the places that we wanted to deploy these things are like, cool.

Uh, we do all these things to be, like, conformant with these regulations about storing stuff. Like, give me an HSM that supports your weirdo bespoke elliptic curve so that I can do elliptic curve signatures for these shielded transactions, let alone the proofs. And we’re like, ah, here’s smart card software.

Like, there’s enough friction in getting stuff like that deployed that it definitely, like, slowed the adoption of shielded Zcash for ages. And it’s just, like, the things that you don’t think about until the rubber meets the road, of like, yeah, I need a, a box

that can answer an API call that’s like, “sign this,” and it needs to… like, “do you have a box?” And you’re just like, no. And they’re like, “can you do a custom thing to support it?” And it’s like, uh, I don’t care enough. And I blame the HSMs.

David: Mm-hmm right.

Deirdre: don’t really

David: Right. It’s like databases for scientists and researchers. It’s a zero billion dollar market.

Deirdre: Yeah. Yeah. Unless Andy Pavlo disrupts the databases. He’s a database researcher guy.

David: I, I believe it was Stonebraker that first called scientific databases a zero billion dollar market, or at least that’s where I first heard the term.

Deirdre: Was that before or after he won the Turing award?

David: uh, around that time, I believe I saw him actually give, he gave a talk at, uh, at Michigan somewhere around

Deirdre: cool.

David: and that was when I first heard the term zero billion dollar market.

And I’m like, I’m gonna store that one for the future.

So you were gallivanting around Europe, doing cryptography, in the past two months, is my understanding,

Deirdre: I

David: um, living the dream and twice,

um, twice in no COVID either time.

Deirdre: Twice, and no COVID, uh, you know, crossing myself. Um, I was there for Real World Crypto in Amsterdam. Super awesome. We did our episode live from Amsterdam, um, you got the gist from, from that. And then I went back to Europe a couple weeks ago to go to Eurocrypt in Norway, which was very much like, I finally got to go to, like, you know, an IACR flagship

academic crypto conference, ’cause Real World Crypto is like a symposium: they do a lot of talks, there’s a lot of industry folks. Eurocrypt is like, proper. Like, do you want three hours on universal composability and security proof simulation? Here’s the conference for you. Um, do you care about the Fiat-Shamir transform?

I’ll talk to you for, you know, three talks in a row about it, today, tomorrow, and the day after. Uh, no, it was good. Um, it was in Trondheim, Norway. If you have a good excuse to go to Norway in the spring or the summertime, highly recommend, it’s lovely, uh, beautiful weather. The sun rises at 2:30 and sets at 10:30 at night.

It’s weird. Um, but it was beautiful. Um, yeah. Lots of universal composability, lots of multiparty protocols, which is, uh, fun for someone who’s doing a very small-scale multiparty protocol, in terms of threshold signatures, which is FROST. Shout out to Chelsea Komlo, who came up with FROST. Um, I implement everything that she tells me is secure, based on all these security proofs that she comes up with. Um, and yeah, Fiat-Shamir stuff, uh, zero-knowledge proof stuff.

Um, a little bit of isogeny stuff. I think I missed the last day, but there was a nice talk about, like, okay, how do all of these supersingular isogeny assumptions relate to each other: stuff about endomorphism rings, and stuff about computing, um, you know, ideals in quaternion algebras and crap like that,

which I only barely understand, from Benjamin Wesolowski, I think. Um, cool stuff like that. Yes,

David: This is, uh, one of those conferences where the submissions are all peer-reviewed papers, right, with a program committee and, you know, academic papers and so on. That is, you know, not gonna be the case at, uh, RWC, which is, like, people submitting talks, or, or at Black Hat, which you and I have both reviewed for. Those still have review boards that pick what’s coming in, but it’s not the same type of peer review that you might see at an academic conference.

Deirdre: Yes. Uh, Eurocrypt and other, uh, academic cryptography conferences are very much: you submit a paper, the program committee figures out multiple people to blind review your paper, um, and then they will either, uh, tell you just flat out no, or “we want you to make some changes,” uh, or just straight up, “yes, we accept.”

Um, and usually, if you make changes, uh, you get like a camera-ready version, um, and then you make the very fancy paper show up, also on a website. It used to be published in a, in a volume called the proceedings, and you can still, you know, pay $20 to get the proceedings of a conference.

But it’s mostly just to, like, get the final version of the thing. And it gets posted on the conference website, and that is the one that got published at the conference. And then they pick a best paper, and they pick, you know, test of time awards. There were some cool test of time awards at, uh, Eurocrypt this year.

I think one of them was literally the Curve25519 paper from, uh, DJB. Uh, and I don’t remember if it was co-authored, maybe with Tanja Lange, um, but I’m pretty sure it was that paper. It was basically how to do fast elliptic curve math, um, from DJB, and it kind of, like, kickstarted the, like, not-just-incomplete-short-Weierstrass prime order curve rage that we’ve been on for at least, uh, 12 years, or 15 years, now.

David: just an absolute bender

Deirdre: Just an absolute bender of co-factor curves that are fast as hell, but they just bite you in the ass.

Yes. Um, yeah, DJB, uh, got one of those test of time awards, which, I think, you know, has definitely stood the test of time. And I don’t remember what the other two were; there were two other test of time awards.

Um,

David: I would say the big difference between, like, those academic conferences and other conferences is, like, the point, like point number one, is to publish the papers, and then the conference is like a side effect of that.

Deirdre: Yes. Um, the conference is very much an opportunity for the grad student who helped work on the paper, but isn’t, like, the big name, the advisor, the professor, the first author, whatever, to go to the conference and present their work, explain it, you know, present it to colleagues and people in their area, um, and get their face out there, so that they can get known as they come up through, you know, research and academia.

And then if they want to go on and become a professor or go tenure track or do research, like, that’s the start of them getting their face out there. Um, there’s been a lot of debate, especially around COVID, especially for the big one, which is called Crypto and has always been held in Santa Barbara, California, for, like, 25 years or something like that.

There’s Crypto, Eurocrypt, Asiacrypt. Those are, like, the big three. Um, and they are held approximately where they’re named for. Um, you are often required to send someone to go present your paper, for, like, a 20-minute, half-hour presentation, uh, as, like, terms of your paper getting accepted.

And this has been a point of contention, because, like, if you’re from the other side of the world, to get your paper accepted at the top-tier cryptography academic conference (they’re not big on journals, they’re big on conferences), you have to pay for a round-trip international ticket.

You have to pay for somewhere to stay. You have to deal with visas. You have to deal with room and board. You have to deal with meals. You have to deal with all that stuff. That can be exclusionary to very good research and very good students. Um, and this is one of the reasons that, kind of, like, remote conferences have been appealing to a lot of people, because they can present remotely.

They can present online, they can be present online, uh, in a way that they might have been excluded from before. So now there’s been a bit of debate about, like, can we change this up a little bit? And, like, there’s an old guard that don’t wanna change things; there’s decent reasons for how this sort of stuff has been done in the past, but also, like, yeah.

Anyway, um, part of this also brought in a proposal, that was brought up at Real World Crypto, to introduce a new proceedings for crypto papers. There’s the three major conferences, there’s some other IACR conferences, um, there’s Real World Crypto as a symposium, um, and there’s, like, a journal for, sort of, the best-of-the-best papers.

They kind of get elevated and elected to, like, this journal. Um, but this other thing is basically trying to be something to give more access, and more of a venue, than just, like, the big three, or some other conferences, or, like, nothing, or just a PDF on ePrint. They’re trying to have something that’s a little bit more accessible, and that’s also been, uh, an area of debate.

It seems like it’s going forward, but, uh, we’ll see how it goes.

David: The rest of, like, the academic world is on the journal model instead of the conference model, which has its pros and cons. But as the conferences have grown, they’re all kind of starting to approach journals. Like, the idea behind journals is you submit over time, and slowly, eventually, you get in, versus the strict deadlines associated with conferences.

And now more and more conferences are having 2, 3, 4 submission periods per year. And it’s like, well,

Deirdre: yeah.

David: um,

as, as you approach the limit there, the, the sum turns into the integral, like.

Deirdre: Oh God, uh, algebra jokes. Um, Security and Oakland and these other things, they’re still conferences, right?

David: Correct. So the, the big four in, like, computer security, like, non-cryptography, or, uh, maybe closer to the applied side of cryptography, are Usenix Security, IEEE Security and Privacy, which is also called Oakland,

um, CCS, uh, which is the ACM’s conference on computer and communications security, I think, I don’t know. Um, and

Deirdre: dunno. Mm-hmm

David: NDSS, which is always in San Diego and run by the Internet Society.

Deirdre: Hmm.

David: Um,

Deirdre: And they’re, they’re conference conferences,

David: Yes. Um, but at least, uh, Usenix and CCS, um, have multiple submission deadlines, and Oakland, I think,

might as well.

Deirdre: right. Cool. Yeah.

David: So, historically, Oakland had been, like, the one that everybody wanted to stop going to, because for years IEEE wasn’t open access, and IEEE is still not open access, but S&P is, um, and so, uh, people have started going to that again.

And now, like, I think CCS is kind of starting to go out of fashion, just ’cause it has gotten very, very large, and, uh, for lack of a better term, some of the cool kids have been like, well, if I can now submit to Usenix at the time of year that I previously had to submit to CCS, I may as well just submit to Usenix.

Um, depending on your thoughts on, like, how quickly you want the conference to be after the submission.

Deirdre: Yeah,

David: Um,

Deirdre: For, for people: so the, the crypto conferences I was talking about, besides Real World Crypto, which is talks, those are very academic, a lot of theoretical cryptography stuff. If you’re more interested in applied cryptography, computer security, implementation stuff, attack stuff, uh, all those conferences David mentioned are much more likely to publish that sort of stuff.

So, like, attacks on the Signal protocol, or things like that. Those things tend to show up in conferences like that. So those have some cool publications as well.

David: I’ve always enjoyed, like, Usenix the most. Um, but there’s definitely a level of personal preference. Usenix is usually the week, like, after Black Hat; it’s, like, the second or third week in August. And it moves around where it is. It’s usually in the US, or at least in North America. And…

Deirdre: No. Um, so recently we had Apple’s worldwide developer thingy. And we had, I don’t remember if it was before or after that, I think it was before that, uh, Google’s Google I/O, whatever their equivalent of the worldwide developer thingy that Apple does is.

And I’m pretty sure both of them announced that they are supporting and working on an industry-interoperable collaboration on passwordless authentication. And for those of us nerds who love, uh, unphishable, uh, authentication mechanisms, all of our ears perked up. And if you could see me, I’m making big ears right now.

Um, because we’re all pretty sure that this is, uh, Apple and Google, you know, the two biggest maintainers and developers of browsers, of devices, of authentication things, period, saying they’re going to interoperably support, um, a new kind of FIDO credential. Uh, that’s very similar to U2F, the kind of old thing, which, uh, is now supported in the web API called WebAuthn.

That Apple has branded “passkeys,” uh, but it has a much more convoluted, official, kind of generic, non-Apple-branded name. But it’s basically: instead of a password, which is just a character string that you type into just a random fucking field, or, you know, a query param in a URL, um, you have a key pair. And usually what it is, uh, for WebAuthn, or for, uh, FIDO2 or whatever, or old-school U2F, um, is you have a key pair and it lives on your Yubikey, or it lives on your authenticator device,

like the, you know, the trusted enclave in my MacBook or something like that. The private key stays there. The website issues you a challenge when you’re trying to log in, uh, you do, like, basically what amounts to a signature with your private key, that you previously registered, over the thing that you’re being challenged on, and you send it back, and it verifies you.

And part of the thing that’s happening in this protocol is you’re bound to the website origin, that’s the technical term, the origin that the request is coming from. You have to verify the response, uh, against your previously registered credential, which means you can’t phish it. Um, and, like, it comes back with all of these things bound together, and this was great

uh, for second factor, um, so that you cannot just copy-paste this thing into a, you know, a field or a query param, because it wouldn’t work. If it was a phishing website, it wouldn’t work against google.com. If someone is at, you know, like, g00gl3.com, but with all the Os as zeros and the Es as threes and things like that, it just wouldn’t work, because it would not compute correctly.

That was great when you just had it on your Yubikey or your phone or whatever, but what if you need to back them up and the answer has been for a long time that you just register a ton of keys. So like I have six Yubikeys and I have, you know, multiple devices and things like that. And the answer is you don’t migrate the key pair.

You just have lots of key pairs, and they all work, and, okay, fine, binders full of Yubikeys.
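
Here is roughly what that challenge-response looks like from the web page’s side: a minimal TypeScript sketch using the browser’s WebAuthn API (`navigator.credentials.get`). The relying party ID and challenge handling are placeholders; in a real deployment the challenge comes from the server and the assertion goes back to it for verification.

```ts
// Minimal WebAuthn login (assertion) sketch. The challenge would really come from the
// server; "example.com" stands in for the relying party ID.
async function login(challengeFromServer: ArrayBuffer) {
  const credential = (await navigator.credentials.get({
    publicKey: {
      challenge: challengeFromServer, // signed by the authenticator
      rpId: "example.com",            // the browser refuses to use this credential on any other origin
      userVerification: "preferred",
      allowCredentials: [],           // empty: let the authenticator pick a discoverable credential
    },
  })) as PublicKeyCredential;

  // The response bundles the signature plus clientDataJSON, which embeds the origin and
  // the challenge. The server verifies all of it against the registered public key,
  // which is why pasting anything into g00gl3.com gets an attacker nothing useful.
  const response = credential.response as AuthenticatorAssertionResponse;
  return {
    credentialId: credential.rawId,
    signature: response.signature,
    authenticatorData: response.authenticatorData,
    clientDataJSON: response.clientDataJSON,
  };
}
```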

David: I, uh, I wanna, I, I wanna make one kind of comment about the phishing, because I think there’s, there’s a subtle point that people miss that also makes the like whole phone thing a lot more complicated. So if you think about, uh, right, the Yubikey and when you’re signing in with WebAuthn, right, you’re more or less signing over the website that you’re trying to, um, log into and the challenge that they give you on your computer.

And then it gets sent back up and everything is great, but there’s like another component that we’re, that happens auto automatically when you’re using say the Yubikey or using the, uh, touch bar, um, on your MacBook, which is that like, the thing that you are using to authenticate to is physically connected to the same computer that is running the web browser.

And this is, like, enforced by virtue of, like, the browser only knows how to talk to the Yubikey that’s currently plugged into it, or, or the enclave, or whatever. Um, when you take that, and you’re like, okay, you know, my phone has a secure enclave and it can sign things, or, like, I have a Go program and I implemented the same spec, whatever.

Right. Um, like, it’s just a signature; it doesn’t need to be in hardware. Um, as soon as you move that to a different device, or, like, the communications channel to something that isn’t obviously directly connected, um, you end up with this problem where, when you go to, like, approve on your device in some way, to click “yes,

I wanna log in” on your phone… like, let’s imagine a crypto game: you’ve got two laptops, um, you can’t see the screen on either, um, and you have a phone and you get a login prompt. And you win the game if you can have over 50% accuracy when saying which laptop is logging in, right?

You don’t actually know that the laptop, or the computer that you’re next to, is the one that you’re logging into. So while it prevents you from being phished on, like, you know, g00gle.com, um, it doesn’t prevent you from being phished in the case where the attacker is trying to log in at the same time as you and sends you a

prompt,

Deirdre: Yeah, yeah. Um, and, uh,

David: unless you go through the effort of doing a preregistration step between the browser that you’re using.

Um, and, uh, and then like the phone, you need some sort of out of band thing to pair them together. And there’s a whole host of ways to do this. People have proof of concepted it with like relay servers, a QR code, and a Chrome extension. Um, you could do it through, uh, like Chrome sync, um, you or iCloud sync, which is more or less, I think what these proposals are doing.

Um, you can do stuff over Bluetooth, combined usually with a sync mechanism as well, to add even more. Um, uh, but the world gets a lot gnarlier, um, and the user experience gets slightly worse, for some definition of worse, when you switch from a Yubikey to a phone. The benefit is you don’t need to, to, like, uh, explain to somebody, um, uh, or buy them, a Yubikey, right?

It’s feasible for an enterprise to buy all their employees Yubikeys. It’s not feasible for, like, the IRS to buy everybody that needs to file taxes a Yubikey and expect them not to lose it.

Deirdre: And all of this is for, uh, using FIDO credentials as a second factor or multi-factor, not your quote-unquote primary factor, which has been, and probably will be for a long time, passwords. Right?

David: Yeah, but I mean, fundamentally like these end up being the same thing.

Deirdre: Right. So the new thing, the passwordless authentication stuff, is taking the same challenge-response sort of protocol, and using that in addition to these, uh, device-bound credentials, the classic FIDO2/U2F, um, WebAuthn multi-factor, uh, thing. And instead of a password, you have this key pair that can be synced, because it’s not device-bound, such as through iCloud Keychain.

And that’s, like, the big proposal, is that Apple has kind of, like, created this end-to-end implementation of how you do this, and everyone else is like, oh yeah, we were thinking that, and then, like, you’ve basically done the proof of concept. And they’re trying to get everyone on board, because if just Safari supports this, and just iOS supports this, it’s gonna be…

You need websites to support this API for it to work at all. It doesn’t matter if you’re Apple and you own the entire stack top to bottom: if you go to google.com and it doesn’t work, if Apple passkeys don’t work, yeah, it’s supposed to be magic. So Google’s on board, uh, Apple’s on board. This is a W3C WebAuthn update that’s coming.
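
The registration side of a synced passkey is the same WebAuthn API with a couple of options set. Below is a minimal TypeScript sketch; the relying party ID, user info, and what the server does with the result are all placeholder assumptions.

```ts
// Minimal passkey registration sketch using navigator.credentials.create.
// The authenticatorSelection options are what steer platforms toward a synced,
// discoverable credential rather than a plain second-factor key.
async function registerPasskey(challengeFromServer: ArrayBuffer, userId: ArrayBuffer) {
  const credential = (await navigator.credentials.create({
    publicKey: {
      challenge: challengeFromServer,
      rp: { id: "example.com", name: "Example" },           // placeholder relying party
      user: { id: userId, name: "alice", displayName: "Alice" },
      pubKeyCredParams: [{ type: "public-key", alg: -7 }],  // -7 is ES256 (ECDSA P-256)
      authenticatorSelection: {
        authenticatorAttachment: "platform", // the device's own authenticator
        residentKey: "required",             // discoverable credential, i.e. a passkey
        userVerification: "required",
      },
    },
  })) as PublicKeyCredential;

  // The server stores the credential ID and public key. The private key stays with the
  // platform authenticator, though as a passkey it may sync through the vendor's cloud
  // (e.g. iCloud Keychain) rather than being bound to one device.
  const response = credential.response as AuthenticatorAttestationResponse;
  return {
    credentialId: credential.rawId,
    clientDataJSON: response.clientDataJSON,
    attestationObject: response.attestationObject,
  };
}
```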

But, uh, Apple’s kind of gotten out of the gate and started with the great branding. Like, there’s, like, a four-word official name for these things, and instead Apple’s like, let’s just call them, instead of passwords, they’re passkeys. It’s

David: We’re just calling ’em passkeys.

Deirdre: It’s like, that’s great. Dammit, Apple. Like, so.

David: and I think in the future, we’ll, we’ll try and get one of the authors onto like go in more depth on this, but we just wanted to, to call out that this was in fact happening.

Um, the authors know who they are and may or may not be surprised by me saying this without emailing them first.

Deirdre: Uh, yes. Uh, we want to go into a lot of depth about this, because I have more, like, very specific questions. Like, I was looking on the FIDO website, and I was trying to get very specific technical details about this, and then people were like, ah, they’re not there, they’re, like, over in this other place.

And also these people have their names all over it, so we should go talk to them. So that’s a thing that’s happening. That’s very exciting for security. It’s we really hope to see this getting rolled out well and nicely, and in a way that, you know, human users can understand the benefits to their security.

Um, so early days, very exciting. Uh, God speed.

David: Um, and then one last thing we wanted to touch on, um, or mostly I just wanted to touch on, and then for Deirdre to talk about, was, uh, the discourse from, like, a month ago or so, when, um, Elon Musk, in the process of buying Twitter,

um, which we’ll just move past, um, was like, Twitter DMs should be E2E, end-to-end encrypted.

Um, and then the end-to-end encryption on the web discourse started again, um, of like, is it even possible to have a web app that does end-to-end encryption and has the same, like, threat model? Um,

Deirdre: good.

David: Uh, and so, uh, Deirdre, you’ve thought about this a lot, I think. And so maybe you could explain some of, uh, why this is a little harder than just, like, generate a key and, you know, call encrypt, and, like, Signal does this, we know how to do this.

We know a lot about, like, messaging protocols, just do it in a web browser. JavaScript’s Turing complete.

Deirdre: Oh God. Unfortunately, just slapping the Signal protocol on it does not necessarily give you the same, uh, security guarantees in every deployed software environment. Um, and by every, I mean: there’s the web, and there’s everything else. Or there’s everything else, and then there’s the web. Um, I’m being pulled into this because people are like, yeah, they should totally do that.

They should end-to-end encrypt Twitter DMs. And if Twitter was only supported on, um, iOS, Android, and, you know, a desktop application, that could be very doable. Unfortunately, twitter.com is a web app. That’s what the .com stands for, right? Um, yeah, the guy who works on the web browser tells me I got that one right. Awesome.

Um, it was primarily a web service for a very long time. And then it got mobile apps, and then it got, you know, like, TweetDeck, or, you know, Tweetbot, or, you know, whatever the fuck; it had an API, and then it shut it down. Um, for a long time, like, the reason it used to be only 160 characters is because you could SMS a, a code and it would tweet it on the internet.

So it was, like, an SMS-based-plus-web service for most of its lifetime, and then it slapped on a bunch of mobile clients. Um, end-to-end encrypting anything where you have to support that end-to-end encryption with a web client is a fundamentally harder thing to do than if you just have mobile clients, and the, the primary, uh, comparison is WhatsApp.

Uh, WhatsApp started as mobile only, and eventually added a, uh, a mirror of a web client to a mobile app that it was paired to, and now it has a more evolved kind of web service to go along with it. But WhatsApp’s deployment for almost all its history was iOS and, and Android clients. It was able to slap the Signal protocol onto that service because all of its clients were mobile clients, and they were compiled apps in very constrained environments that didn’t auto-load, uh, content and scripting from the internet

Every time that you open the app.

David: that, that’s the thing that makes it hard, right. Is the fact that like you have, uh, you have the security boundary of an app store, which like

Deirdre: Yeah.

David: the security boundary of, or even if you take away the app store, you have like, you know, a piece of software. And if you make the assumption that like, people are able to get the piece of software right.

Once, and then it more, less stays, stays like that. Um, now you’re talking about like a malicious update through the store, whereas on the web, it’s like, you know, you can just load resources from other sites. That’s like the whole value proposition of web. Um, that’s why it’s a web and not like an app, right.

Connected like a web

Deirdre: Like, you can pull in images and they get rendered, and you hope that your browser’s renderer, uh, you know, decoder, is great. Or you just literally pull in a whole ’nother website, uh, and you can either do that in an iframe, and you hope that the, the isolation is good, or you just literally do an HTTP GET and you get a blob, and then you can do whatever the fuck you want with that blob.

Like, the dynamic nature of the web, a web page, a web app, is, like, the point. And it’s just fundamentally different, in terms of the security of the application platform that you write software for, than a mobile app, than a desktop app, even. Um, and it therefore makes the traditional design of end-to-end encrypted protocols, especially Signal, which relies on long-term identity keys being stored and secure, that you, uh, tie all of your, um, security back to in this, like, you know, double ratchet way, um, fundamentally harder, and just a different, difficult proposition.

If Twitter only had mobile clients, and, you know, pieces of software that they signed and released and only changed when they pushed a new update, and that didn’t get reloaded from the server every single time that you loaded the app, like you basically do on twitter.com or TweetDeck or whatever, it would be easier.

And, and this is not even touching, like, the APIs that you get from iOS, or the, the hardware-backed credentials that you get from an Android device, or any of that stuff. It’s just the fact that, like, you could just store your long-term keys in your app memory with the guarantee that, like, it’s just me, it’s just me and my, you know, sandboxed namespace, memory space, for my, you know, end-to-end encrypted twitter.com or, you know, Twitter app.

It’s just so fundamentally different for the web
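
For what it’s worth, the closest the web platform gets today is WebCrypto’s non-extractable keys, which at least keep raw key bytes out of reach of whatever JavaScript happens to be running. A minimal sketch follows; it does nothing about the deeper problem described above, that the page’s code itself is re-fetched from the server on every load, and the IndexedDB store name is a placeholder.

```ts
// Sketch: a long-term ECDH key pair the page's JavaScript can use but never export.
// This limits "read the key bytes" attacks, not "the served JS was swapped out" attacks.
async function makeIdentityKey(): Promise<CryptoKeyPair> {
  return crypto.subtle.generateKey(
    { name: "ECDH", namedCurve: "P-256" },
    /* extractable */ false,            // exportKey() on the private key will throw
    ["deriveKey", "deriveBits"],
  );
}

// CryptoKey objects can be persisted in IndexedDB via structured clone, so the key pair
// survives reloads without its raw bytes ever being visible to script. Assumes an object
// store named "keys" already exists in the open database.
async function storeKey(db: IDBDatabase, pair: CryptoKeyPair): Promise<void> {
  const tx = db.transaction("keys", "readwrite");
  tx.objectStore("keys").put(pair, "identity");
  await new Promise<void>((resolve, reject) => {
    tx.oncomplete = () => resolve();
    tx.onerror = () => reject(tx.error);
  });
}
```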

David: hell is other people’s JavaScript.

Deirdre: Apps. The fucking… so, you know, my, my basic argument is, like, people are like, you should end-to-end encrypt Twitter DMs. I’m like, okay, you’re probably going to achieve that for all these mobile clients, and web clients would not get end-to-end encryption. And the only secure way to make sure of that, for everybody who can log in to the Twitter service at twitter.com,

is that web users don’t get DMs at all, because otherwise you, you open yourself up to downgrade attacks. Yeah. Yes,

David: Yeah. Now, if you back up the threat model some: so, like, what you’re describing is what you would need to be close to Signal, in a world where you really don’t want to trust the provider at all. If you weaken the threat model to something like, “I don’t want the engineers or the ops team or whoever to just be able to, like, run ‘select star from DMs’ on their SQL database,” um, you can do pretty well right now. Um, but then you run the risk of, you know, all the code could just be swapped out, or they could leak it in other ways. Um, and you can have a healthy and vigorous debate about how different that is from, from native apps that could also just post the plaintext, you know, Signal or WhatsApp could just post the plaintext, um, over HTTP to a server.

But as far as we know, they don’t, and we have no reason to believe that they do. And presumably somebody would notice if that happened.

Deirdre: Yeah, you can watch, uh, all the connections that your app makes out, and you can measure how much space there is in there, and, and frequency, and see if there’s anything going on. And, you know, you can inspect both the decompiled binary and the behavior of a compiled binary, um, to see if it’s doing something funny.

Um, and we have no evidence that any of these apps are doing anything funny. Although, you know, Apple and, and iOS were saying, oh, we’re going to, we’re gonna run these, like, image detection models on your phone, and we’re gonna snitch on you when you upload them to iCloud, um, so that we could say that we deployed end-to-end encryption on iCloud, but we’re gonna detect if you, uh, if you have child porn on your computer.

Um, but

David: For more context, see our episode with Matt Green.

Deirdre: Yes. Uh, but they say they haven’t rolled that out, and I believe that they haven’t rolled that out, because everyone was very mad at them when they said they were gonna roll that out. Um, you can detect if, if these apps are doing something that you think they shouldn’t be doing, and we have no evidence that Signal, WhatsApp, end-to-end encrypted Facebook Messenger,

any of these other end-to-end encrypted messengers, uh, are doing anything like that. Um, they’re much more likely to literally have a report button, like WhatsApp, or, uh, you know, use Facebook’s message franking or whatever it is, um, to explicitly get information willingly and intentionally from users. Um, which is a whole ’nother discussion about, like, how do you deploy end-to-end encryption on systems and still run those systems, and run those

communities, that are made up of humans that may or may not treat each other well, or may or may not abuse these report capabilities. Um, how do you do that in an end-to-end encrypted context? And, like, it’s not necessarily harder, it’s just different; and see <a href="https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3920031">Riana Pfefferkorn’s paper about content-oblivious moderation tools</a>, uh, for more on that.

Um,

David: I think that’s about all we got for today.

Um, I think we’ve got a few plugs. Um, the first one is we continue to have merch

available at merch.securitycryptographywhatever.com.

Deirdre: Mm-hmm we

David: and we now have merch that isn’t even all black.

Deirdre: Yes.

David: so we think that’s.

Deirdre: You’re welcome. It’s got these cute little guys that you might have seen on some of our, you know, headers and images and stuff. Um, if you like things that aren’t all black, we have them, um, and also mugs and stickers with cool stuff. We just like, we wanted merch. We have merch.

Now you can have merch too. So that’s what that is.

David: Um, several, or maybe even all of us, uh, will be around Black Hat, and we are tossing around the idea of doing some sort of event. So if that sounds appealing, please let us know, because that will motivate us to actually think about it. Um,

and you can figure out the best channel with which to do so.

Deirdre: I will be in Vegas for Zcon, which is a Zcash Foundation event about privacy. Uh, so I’ll be there for that. I probably will be around Black Hat, but not at a lot of Black Hat. And then I will be trying to go to DEF CON, but I’ll be there that whole week of Vegas. Let us know if you wanna come to a thing.

David: And now, Deirdre, I’m gonna make us a real podcast by doing something that, um, all podcasts must do eventually, which is plug another podcast.

Deirdre: oh my God.

David: Uh, on, on, uh, behalf of my partner, I have to plug the podcast A Star is Trek. Becca Lynch, who has never seen any Star Trek, is taken on a tour through every single series, with two hand-picked episodes by her friend, Jess,

who has seen all of them.

And they are doing a whirlwind tour of all of Star Trek. Um, I’ll correct myself.

Deirdre: were doing this.

David: I’ll correct myself slightly. For some reason, she’d seen this season one finale of Picard. That was the first

star Trek that she saw.

Deirdre: Oh no. God help her.

David: um, a star is trek available wherever you get your podcasts.

Deirdre: I’m subscribing right now, actually. Um, I watched a ton of TNG reruns back in the day when I would get home from school, so I’m looking forward to this. Subscribed. Cool.

David: Right. And find us on Twitter at @scwpod. And I think that’s it.