Live from Amsterdam, it’s cancellable crypto hot takes! A fun little meme, plus a preview of the Real World Crypto program!
Oops. All zeroes!
This rough transcript has not been edited and may have errors.
Deirdre: Hello! Welcome to Security, Cryptography, Whatever! I’m Deirdre.
David: I’m David. And, uh, nowadays I actually work for Google, so that obliges me to say that this podcast is not the opinions of Google.
David: Thank you. Um, if you don’t pay attention to my Twitter, which I don’t recommend, you probably missed that I started as a product manager on Chrome security a little under a month ago. And so, you know, I’m excited to be adding a lot of value.
Deirdre: to your organization. Well, we consumers of the most popular browser on the planet, mostly because of Android, wish you the best of luck, because our security is in your hands. As it were.
David: Um, thank you. And I don’t know what numbers are public about that usage, so.
Deirdre: Cool. Well, today we saw a cool meme going around on Twitter, and we decided to just jump on that and talk about it on the pod, because we think these cancelable crypto takes range from spicy to milquetoast.
David: Yeah. So Tony Arcieri, who is bascule on Twitter, B-A-S-C-U-L-E, which is probably a reference to something that I don’t know, posted: "we’re canceling each other over cryptography takes today. Post your cancelable cryptography takes." And we’ll link the tweet in the show notes. And this was on
April 8th, 2022.
And we are recording on April 10th, 2022. And hopefully this podcast comes out soon
Deirdre: We’ll see when it comes out! One of the ones that we’ll go with first is one that I was amused by as well. So this came out and I was like, cool, cool, let’s see what people come up with. And a lot of the replies that I saw were not cancelable takes, they were good takes, and they should not be canceled.
And I was like, wait, am I getting the meme backwards? Is this sort of meta, where it’s like, post your cancelable take, and you post the good take, and everyone gives you credit and shit? But finally, David Adrian posted what I would say might be a cancelable take, which is that "agility is necessary for TLS".
And my first instinct is no, the only agility you need in your TLS negotiation is in the protocol version number. So yeah, canceled.
David: So this take actually goes back to my PhD defense, with our previous guest Chris Peikert; he was actually the person that kind of forced me backwards on this during the Q&A. When I was defending my PhD (and I guess we could even link the slides in the show notes), one of the things I argued was versioning the protocol rather than having agility. Because the connection to measurement there, which is what my PhD thesis was about, is that basically anything that’s measurable about a protocol is something that can go wrong, right? If you have agility, that means you can measure what ciphers or algorithms a host supports. And usually, within a protocol, anything that you can measure is in fact a knob, and any knob is something that can go wrong. So I was making the argument that there’s a loose correlation between being able to measure a protocol in practice and the chance that it would go wrong. Because if you look at Wireguard, there’s not really anything to measure. It’s like, ope! That’s a Wireguard host, and that’s it.
Deirdre: And then, because you need to know the key in advance anyway, you won’t even find them if you’re scanning.
David: So that was one of the things that I said was you know, versioning: good, agility: bad. And then Chris was like, "There’s absolutely no way that you’re going to be able to get everyone to upgrade on the internet all at once."
So you’re going to have to support at least two versions.
David: What do you do then? And so I think an even better version of that take is to just admit that within the public internet, there are going to be at least two classes of devices. Let’s say there’s phones and there’s desktops. On phones, we generally like cryptography that can be done fast on an ARM CPU, and on desktops, we tend to like cryptography that can be done fast on an Intel CPU. And in practice, that tends to turn into things done in software on phones and things done with AES instructions on Intel CPUs. Which is why you see ChaCha-Poly on mobile phones, and then you’ll see AES-GCM, or whatever the current hot variant of AES is, on desktops, because all the Intel desktops have AES-NI and phones mostly don’t.
I will say ARMv8 has some really cool instructions that you can use to do, like, XORs of multiple words at once, in the same way that SHA-3 and other sponge constructions tend to do. So that’ll be really cool when ARMv8 is actually in small devices.
Deirdre: And now that we have more laptop devices that have an ARM core inside them, maybe it’ll just kind of smooth out across multiple device classes. But anyway, yes, up until recently we’ve had dual classes, and we’ve had to dual-wield our ciphers, mostly because of microarchitecture.
David: I mean, it’s microarchitecture, but I would say it’s an implementation detail in this case,
right? You have multiple classes of devices, and you’re not going to get everyone to upgrade all at once. And so I think in the public internet world, you need to have a small amount of agility. And I would say you want, like, TLS 1.3 levels of agility, where there’s basically two options for ciphers and two options for signatures (well, a bit more than that). You go AES or you go ChaCha-Poly, plus or minus some key lengths. You do an RSA signature or you do an ECDSA signature.
And then there’s a couple of groups I think that people use for key exchange.
David: And so.
Deirdre: So that presupposes that within a single version, within TLS 1.3, you have agility for your key exchange, your signature, your cipher suite in total. And the agility here is just, like, two things, maybe. For your cipher you might have AES, you might have ChaCha-Poly. For your key exchange you might have...
I think they left in, uh, was it... did they leave in finite field? I think they got rid of finite field.
David: They got rid of finite field, I think.
Deirdre: Okay, good.
David: or maybe it’s specified, but like it’s not considered to be supported.
Deirdre: The only reason I forget is, I know the banks were using it for auditing or something like that, and they wanted to have a hard-coded finite-field key or something like that, I forget.
But anyway, this is, like, agility within a single version, up until TLS 1.3. And even still, it’s basically been that almost everyone supports TLS 1.0 and TLS 1.2, because if you can support 1.2, you’re basically left with 1.0, because 1.1 is somewhere in the middle, and it’s this bimodal set of device classes: if you support one, you don’t support the other, so you have to support the two of them. Does that kind of turn it from crypto agility of having multiple versions of TLS into one version of TLS, but agility within that one version? Is it just kind of moving the ball?
David: Well, so the version negotiation was always worse than, like, the extension or cipher negotiation, for some definition of worse. Actually, I might even walk that back, because there are probably more vulnerabilities, in the sense of, oops, now there’s plaintext, with the cipher negotiation. But the version negotiation has never been, like, secure. There are so many ways to downgrade it. And so the workaround for that is the specific implementation behavior, and then moving the version negotiation to an extension, which then makes the cipher negotiation and the version negotiation all work more or less the same way.
Deirdre: Because now it’s pinned to the last TLS 1.2 version, right, in the top-level field where you used to do version negotiation. And now it’s literally: what versions do you support, as a set? And you find the intersection of the sets between the server and the client, right?
David: Yeah, plus specific things that you add in if you’ve done a downgrade, to signal that you’ve been downgraded. And when you’re doing 1.3, you set the version number in the packet to 1.2, and you set your actual supported version in an extension.
David: And then there’s been a bunch of really, really clever byte mangling to make it all work. So that was in the ClientHello; the ClientHello is basically the same, right? We’ve just added an extension. In the ServerHello, if it’s speaking TLS 1.3, people did some really, really clever byte stuff to make a TLS 1.3 one still look like a TLS 1.2 one, if you didn’t know that 1.3 existed, but with kind of everything meaning different things. Well, not quite meaning different things, but some stuff zeroed out, and you think about it with your 1.3 hat on rather than your 1.2 hat.
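A rough sketch of the shape being described here, assuming the constants from RFC 8446; this is illustrative Python, not a real TLS stack:

```python
# Sketch of TLS 1.3 version negotiation (RFC 8446). The legacy_version
# field is pinned to 0x0303 (TLS 1.2) for middlebox compatibility; the
# real offer lives in the supported_versions extension, and the server
# picks from the intersection of the two sets.
TLS12, TLS13 = 0x0303, 0x0304

client_hello = {
    "legacy_version": TLS12,              # frozen forever, never negotiated
    "supported_versions": [TLS13, TLS12],  # the actual offer
}

def server_select(server_supports, hello):
    # Highest mutually supported version wins; None means no overlap.
    common = [v for v in hello["supported_versions"] if v in server_supports]
    return max(common) if common else None

assert server_select({TLS12, TLS13}, client_hello) == TLS13
assert server_select({TLS12}, client_hello) == TLS12

# The downgrade signal mentioned above: a TLS-1.3-capable server that
# ends up negotiating 1.2 sets the last 8 bytes of ServerHello.random
# to this sentinel, so a 1.3-capable client can detect the downgrade.
DOWNGRADE_TLS12 = bytes.fromhex("444f574e47524401")  # "DOWNGRD" + 0x01
assert DOWNGRADE_TLS12[:7] == b"DOWNGRD"
```

The point of the intersection-plus-sentinel design is that an attacker who strips TLS 1.3 from the offer still can’t forge the sentinel check, because the ServerHello.random is covered by the handshake signature.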
David: So I don’t know that I would consider all of that to be agility, whether it’s version negotiation or cipher negotiation, because the version and the ciphers are kind of the same thing.
And the way you do it with no agility is: you pick the version that you support, and that’s it. Or you have separate endpoints for each version, and you maintain both for some amount of time, and then you turn off the old one when you can. Which is totally reasonable to do for your VPN.
Deirdre: I would say you are not canceled today
David: I’ve been canceled, but it’s unrelated.
Deirdre: for this, uh, for this take. Okay. Yeah. At least in the way this is done in TLS 1.3, I prefer this quote-unquote agility outside of the version level to what TLS had before, which was trying to do agility at both the version level and the extension level. I don’t know, it’s fine. Cryptographic agility in other places is just so fraught.
David: yeah, if you’re not TLS, I don’t think you should have any agility.
Cool. All right, well, we’ve got another one. This is from Nick Sullivan from Cloudflare, and he says, "cryptography projects driven by a single highly opinionated person are cult objects, not foundations for an industry." And I don’t even know if this is a cancelable take.
It’s just sort of true.
David: Unsurprisingly, I think there’s a lot of nuance, or missing information, in that tweet. I mean, if Thomas were here, he would be like, well, I don’t think the IETF CFRG is a foundation for an industry either.
Deirdre: Fair. It’s kind of tough, because... here’s my interpretation. There’s a whole family of cryptography that is considered good, but it’s also all affiliated with DJB, Dan Bernstein. Dan Bernstein has introduced a lot of good new cryptography, where new is, like, the past 10 to 15 years.
And it’s new and better compared to, kind of, AES-GCM, and compared to the original elliptic curves like P-256 and the NIST curves, which were short Weierstrass curves with incomplete formulas. They have various footguns, and they’re slower, and all that sort of stuff. Let’s leave aside the sorts of things we’ve talked about before, about Edwards curves, curves with cofactors, that sort of stuff.
They’re generally considered an improvement. But it’s all DJB crypto. It’s all crypto from, like, one guy. And even if it’s all pretty good, it generally has a little bit of a vibe that’s sort of: let us do everything that this one guy says and we will be secure. And it’s like, okay,
is that what we want to do?
David: How did that play out? Like, I know some of how that played out, right? Because X25519 was, like, oh, somewhere in the ’06 to ’08 range when that was published, I want to say?
Deirdre: Yes. ’06, ’07.
David: On the latest side.
David: Which, like, if you think about the state of the internet, right? 2010 was when that one Firefox extension came out that let you... I think it had sheep in the name or something.
I don’t know, but it let you pick up, uh, yeah, Firesheep. It let you pick up cookies off of wifi. Basically it was for logging into other people’s Facebooks.
Deirdre: Because no one was using TLS.
David: It’s kind of what caused Facebook to switch to HTTPS.
And, you know, around the same time... I don’t know when Google switched to HTTPS, and a lot of people were certainly involved; I don’t know how much of it was driven by Adam Langley. But we know that Adam wrote the curve25519-donna library, to use at Google, and then open sourced that, and that was the basis for a lot of other implementations.
Deirdre: There’s a ton of, uh, curve25519 implementations in many languages that are kind of downstream descendants of the Donna library.
David: I don’t know why they would’ve done that for TLS at the time. Like, that doesn’t make sense, right? Because 25519 wasn’t a...
Deirdre: No. Yeah.
David: ...part of TLS for a while.
Deirdre: The IETF spec for that came out late, after 2016 or something like that.
David: So, but I think that action is one of the things that kind of popularized 25519. And then at the same time, people were worried about the patent situation around some of the
NIST curves. I think even though
David: they either don’t apply, or they’ve expired, or they’ve been said not to apply, or some combination thereof, the patents had to do with point compression, not the curves themselves.
Deirdre: Oh, yeah, you’re right. There was a Blackberry patent on stupid point compression. And it’s literally just having a bit set to one or zero for... if you have an elliptic curve point in affine coordinates, it’s X and Y.
If the point is on the curve, you can just say whether Y is on the plus or the minus side, and eliminate, you know, 255 bits of data and just have the plus or minus. And someone at Blackberry who was doing elliptic curve stuff in the, I don’t know, early two-thousands or late nineties patented this technique, and you couldn’t use it for a long time.
something like that.
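The compression trick itself is tiny. Here’s a toy sketch in Python, using secp256k1 as the example curve (its prime is congruent to 3 mod 4, so a modular square root is a single exponentiation); this is illustrative only, not a hardened implementation:

```python
# Toy elliptic curve point compression on secp256k1 (y^2 = x^3 + 7 mod p).
# A compressed point is just x plus one parity bit for y.
p = 2**256 - 2**32 - 977
Gx = 0x79BE667EF9DCBBAC55A06295CE870B07029BFCDB2DCE28D959F2815B16F81798
Gy = 0x483ADA7726A3C4655DA4FBFC0E1108A8FD17B448A68554199C47D08FFB10D4B8

def compress(x, y):
    # 33 bytes: prefix 0x02 for even y, 0x03 for odd y, then x.
    return bytes([2 + (y & 1)]) + x.to_bytes(32, "big")

def decompress(data):
    x = int.from_bytes(data[1:], "big")
    rhs = (pow(x, 3, p) + 7) % p        # y^2 = x^3 + 7
    y = pow(rhs, (p + 1) // 4, p)       # sqrt mod p, valid since p % 4 == 3
    if y & 1 != data[0] & 1:            # pick the root with matching parity
        y = p - y
    return x, y

# Round-trips the generator: 65 bytes of (x, y) down to 33 and back.
assert decompress(compress(Gx, Gy)) == (Gx, Gy)
```

Since the two square roots of y² are y and p − y, and p is odd, exactly one root is even, so one parity bit is enough to disambiguate.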
David: Yeah. So, I mean, I don’t know the details there and I don’t want to speculate, but right, I think that trend kind of pushed people towards some of the DJB stuff. And then thinking back to, like, 2014-ish, it was all the rage by then.
And you were starting to see... I don’t know when ChaCha-Poly was published, but that was going on Android, when it was like, oh, what if we use these on mobile phones?
Especially because RC4, like...
David: RC4 kind of went out of favor in the 2012 to 2014 range. People knew it had issues, but people didn’t really care about that until 2012, 2014. And then CBC mode is getting its shit kicked out of it by, uh, by everybody. There’s all the BEAST stuff from Thai.
And so I think people were just looking for alternatives.
And what you had was AES-GCM and ChaCha, which are both effectively stream ciphers.
Deirdre: I do have to say that a lot of these original publications, like the original curve25519, and then later Ed25519, and I forget when the ChaCha-Poly stuff started getting published... it was ready and it was fast. And you could do the ChaCha-Poly stuff in software, as opposed to needing AES instructions to make it fast.
And then Snowden happened, and people were already a little suspicious. They were a little bit like: these NSA curves, or these NIST curves... They’re not really NSA curves, but some of these NIST curves have randomly hashed some value to get a coefficient, and people were always sort of like, we would like more information about how these curve parameters were determined. And then Snowden, and, you know, all this stuff with Dual EC, which was another NIST standard where we have strong evidence that the NSA was fiddling around with this random number generator that used elliptic curves.
David: It was a one-two punch on Dual EC. Because there was the circa 2013, maybe early 2014 thing, where someone directly connected this thing from the Snowden documents to this action at RSA, the company, and Dual EC; like, they paid RSA the company to put Dual EC in something, I don’t recall. And then in late 2015 there was the Juniper "oops! Our Dual EC parameters changed,
David: and we don’t know why."
Deirdre: In our, er, VPN software. And, you know, you can speculate as to who, but they were able to go in, make a change to your software, and, oh look, they just changed pieces of your internal Dual EC parameters, which theoretically would let them backdoor all of your VPN traffic, because they’re able to predict your nonces and all your random numbers.
David: The conspiracy theory there is that that, plus BGP hijacks, is what got people into, uh, OPM.
Deirdre: Shit. Really? I didn’t even think...
David: There’s a crew of people that think that.
Deirdre: I didn’t even think you needed to be that hardcore to get into OPM.
But, uh, so this is just kind of all context for why people were ready to distrust the NIST stuff, like the NIST curves, and move to something that came from not-NIST, not NSA-influenced, not possibly patent-encumbered, not all the stuff that was basically the only option in town up until all of this started coming to a head.
And then this guy has a historical record of going to war with the government, so he’s definitely not in league with NIST or the NSA or anything like that.
David: Yeah, he sued the government about the export restrictions back in the mid-nineties.
Deirdre: The result of Bernstein versus whoever is basically what says that, you know, code is speech, and so it’s regulated like you would regulate speech in the United States, which is pretty cool.
Anyway, his cryptography, considered good and fast and performant and not encumbered by all of these sorts of forces, becomes very, very attractive.
Unfortunately, it’s kind of just him doing a lot of great research and publicizing a ton of great stuff, and then it just becomes kind of a one-man show. His research has led people to come up with all sorts of new curves and to iterate on stuff like ChaCha-Poly, and we have new hash functions and all this.
David: Yeah. And at the same time, from, we’ll say, the early 2010s through, like, 2016-ish, I think the cryptocurrency people got very interested in speeding up a subset of these standard curves, and I believe some of those speedups applied to the ones that you see in TLS.
David: For Bitcoin.
Deirdre: But yes, secp256k1 is a standard curve from the same SEC family as the NIST curves. It just wasn’t a very popular curve until some dude picked it for the signing algorithm in Bitcoin, and then it became very, very popular.
David: Yeah, but as a result now... and I remember, I want to say, Matt Green at some point tweeting about this: in 2010, you never would have expected there to be a solid NIST P-curve library in every language on every platform. And by, like, 2014, 2016, there was one everywhere.
David: Even Java.
Deirdre: Yup. And Java.
Thanks, Bitcoin. I mean, people give shit to cryptocurrency, but it has actually produced lots of interest in good cryptographic implementations, and it’s just overflowed to all the other elliptic curve cryptography that we’ve been using. So that’s a boon.
For this take, I’m not saying he’s wrong. I don’t think it’s a very cancelable opinion: not foundations for an industry. I mean, I don’t know? A lot of good analysis has been done for the Edwards curves and Montgomery curves; I’m less familiar with the ChaCha-Poly cryptanalysis. But I think we’ve kind of realized that yes, these complete formulas with really fast speeds give us, you know, x-only arithmetic.
And we get a lot of benefits from the curves that DJB came up with. But I think we’re realizing the cofactors are also very sharp edges, and we’re learning from that too. So,
David: Academia is great for enabling the foundation of the industry, in the sense that it enables the industry to exist. "New Directions in Cryptography," or whatever the name of the paper was, the one that introduced Diffie-Hellman, for sure.
That type of stuff is amazing, right? But it’s not necessarily clear to me, especially as someone who’s done implementation-focused academic work in the past, that you really want what the academics are doing to be foundational, in the engineering sense, to what industry is doing. And then the flip side of that, I would say for academics, is to be very wary of pitching your thing as the real-world or foundational engineering thing unless you’re really sure it is. I think that’s been a common criticism across a lot of this stuff. Well, with regard to Ed25519:
it gets pitched as, here’s the super fast, super usable elliptic curve algorithm. But actually the spec is the code, and good luck running it, and it uses a custom assembler.
Deirdre: And, you know, you look at the code and it’s slightly different from the paper, and there is no other technical specification, and you’re like, which one am I supposed to do? And there’s no security analysis of the thing that’s actually implemented.
David: And so I don’t think it should be on academia to necessarily provide all those things, but you probably shouldn’t be framing your thing as, like, foundational engineering if it isn’t.
Deirdre: Yeah, it’s sort of like: my research results solve all of your problems, and you don’t need to do anything else with them other than take them and slot them into your, you know, TLS library or whatever it is.
One thing that I want people to approach, when they’re doing anything with cryptography, in terms of, hey, this is a new curve or a new primitive or a new version, a new thing: you know the application, what you’re trying to do in the context of your system, whether it’s TLS or a messaging library or whatever your software is. It’s kind of on you to do the security analysis of the new thing within the full context of the system, because you have to analyze above the abstraction boundary and below the abstraction boundary, because that’s where the security bugs lie. And the author of a paper, who’s just trying to get a good result and get it accepted at some conference or journal, is not able to do that for you, because they just do not have the same context. So there’s only so much that the author of a new whizzbang crypto primitive can do for you. They can’t do your entire job for you.
So, be careful.
Touching on this take, since we were talking about the cofactor curves: our recent guest Sophie Schmieg also has a small spicy take.
David: Is her episode out?
Deirdre: No, uh, it’s literally on my laptop, and I swear it’ll get out, probably after Real World Crypto.
I don’t know. We’ll see. Uh, spicy take: "P256 is fine, and co-factor problems are way more severe than having to check for a point on the curve." And I’m willing to say this is not a cancelable take, although some people may think that. And if she’d told me that in 2016,
I might’ve agreed that this was a cancelable take. But I do think it’s kind of hard to compare P-256 with incomplete formulas, which is what we had when Edwards curves and cofactor curves were basically introduced, against Edwards curves and cofactor curves that do have complete formulas but make you deal with cofactors in your code.
And wherever you’re slotting your curve in, whether it’s a signature or a key exchange or whatever you’re doing with it, it’s kind of apples and oranges. It’s hard to really do a good analysis of: incomplete formulas mean you have to check that points are on the curve and do all these checks, versus cofactors, where you have to do other checks, not necessarily that the point is on the curve, but that you’re multiplying out the cofactor exactly when you need to. I’m doing a library right now where you have to handle the cofactor for only some of your operations, or else your verification is bad in slightly different ways.
David: If you limit it to just signature verification... I haven’t dealt much at the level at which the cofactors matter that much. If you’re just doing signature verification, is the effective result that you end up with more than one valid signature for a given input?
Deirdre: One thing that is annoying is that you can verify a signature fine if you’re verifying it one at a time, even if you don’t multiply out the cofactor. But if you’re verifying signatures in batches, you will get a different result if you don’t handle the cofactor. This is for signatures like EdDSA.
And when you are in a blockchain setting, where verifying signatures is crucial to consensus, this is very bad. And this is just one of them. There are other things where, when you write protocols that assume a prime-order group, and you’re using a non-prime-order curve while trying to target a prime-order subgroup...
if you factor the prime-order subgroup out of the non-prime-order curve, you’re left with a small non-prime-order cofactor, and the abstraction mismatch just tends to bite you in very subtle ways. So that’s the kind of thing that pops up.
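As a sketch of where the single-versus-batch disagreement comes from, assuming the usual Ed25519 notation (basepoint $B$, public key $A$, signature $(R, S)$, challenge $k = H(R, A, M)$, cofactor 8), a cofactorless verifier checks the first equation below, while a cofactored verifier, which is what batch verification effectively enforces, checks the second:

```latex
[S]B = R + [k]A            % cofactorless, one-at-a-time check
[8][S]B = [8]R + [8][k]A   % cofactored check
```

Multiplying through by 8 annihilates any low-order component an attacker smuggles into $R$ or $A$, so the two checks can disagree on crafted signatures, which is exactly how a batch verifier and a one-at-a-time verifier can split on the same input.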
David: If you’re not building it as part of some other protocol where these things happen... like, let’s put our 2010 hats on, when Bitcoin existed but no one cared about it. Were people doing batch signature verification? What are the applications of batch signature verification that aren’t blockchains?
Was anyone doing anything with these curves besides just individual signature verification?
Deirdre: With cofactor curves?
David: With, like, an Edwards curve.
Deirdre: Oh, um...
David: At 25519.
Deirdre: Key exchanges. The one that I think of first is key exchange long-term keys, for, like, Signal protocol or something like that: things that aren’t necessarily batching. Having the X25519 byte-based "this is how you do Diffie-Hellman with a Montgomery curve" thing that DJB published as part of curve25519 was very good, because it got rid of the cofactor for you explicitly. It has to do with clamping, kind of this ugly way of just shoving off some of the bits, but it got rid of the cofactor for you, and it explicitly described how to do Diffie-Hellman with this curve in a way that handled the cofactor.
That was very good, because you could very easily see a scenario of just slotting in this curve, with a cofactor of 8, into regular schmegular Diffie-Hellman like you used to do with, I dunno, P-256 or whatever, and not handling the cofactor. Someone gives you a point of low order as their public key, and then you’re multiplying your secret scalar over the identity or some low-order point or something like that, and it just falls out for you. But he didn’t leave that to chance. He’s like, here’s how you do it, and it has all of these, you know, ponies that go with it. It’s very fast.
You don’t have to do all these extra checks, because it’s only over the X coordinate; you don’t have to worry about all these other things, and it’s complete. But I could definitely see a world, in, like, 2010 or whatever, where someone tries to use these new whizzbang curves in Diffie-Hellman, and these low-order point attacks just kind of fall out of it.
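A sketch of both halves of that: the RFC 7748 clamping bit-twiddles, and a toy low-order-point leak in a stand-in group (integers mod 41, whose multiplicative group has order 8 × 5, mimicking Curve25519's cofactor-8 structure). Illustrative only, not a real X25519 implementation:

```python
# 1) X25519 "clamping" (RFC 7748): clear the low 3 bits so the scalar is
# a multiple of the cofactor 8, clear the top bit, and set bit 254.
def clamp(scalar32: bytes) -> int:
    k = bytearray(scalar32)
    k[0] &= 248    # scalar = 8 * k', kills any order-8 component
    k[31] &= 127   # clear the top bit
    k[31] |= 64    # set bit 254 for a fixed scalar length
    return int.from_bytes(k, "little")

# 2) Stand-in group: Z_41^* has order 40 = 8 * 5, so it has a cofactor-8
# structure like Curve25519's curve group. 27 generates the order-8 subgroup.
p, low_order = 41, 27
assert pow(low_order, 8, p) == 1 and pow(low_order, 4, p) != 1

# A victim doing naive Diffie-Hellman with an unclamped secret: against an
# attacker-supplied low-order "public key", the shared secret depends only
# on (secret mod 8), leaking those bits of the secret.
leak = {pow(low_order, s, p) for s in range(100)}
assert len(leak) == 8

# With a clamped secret (a multiple of 8), the low-order point collapses
# to the identity, and there is nothing to learn.
assert all(pow(low_order, 8 * s, p) == 1 for s in range(100))
```

The clamping means every X25519 secret scalar is already a multiple of 8, so any low-order component in a malicious public key is multiplied away for free, with no runtime point-validation checks.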
And I wish Thomas was here, cause he would have something much more concrete to say about that. But anyway, I mostly agree with Sophie’s take, especially now, because we have better complete formulas for these NIST curves like P-256. They are still slower than the Edwards and Montgomery curves that have cofactors, but they’re complete and they’re nice and they’re still very fast, and you can have prime-order curves.
You can do all this stuff; these curves are built into basically every single library, and you can use them, or the Bitcoin curve. But I really would like kind of a holistic apples-to-oranges analysis: say you have prime-order curves with formulas that are maybe incomplete, or complete and slower, versus the kinds of vulnerabilities you get with these cofactor curves that are complete but make you deal with cofactors. Kind of a systematization of knowledge: yes, you have different vulnerability classes, but how game-over are they? How commonly do they show up? How much work is it to mitigate them, and that sort of stuff? I think I would like to see that, and if that exists, someone tweet me or ping me with it; I would like to read it.
David: I would just add: my point is that the world changed, in that we got the complete formulas for the NIST curves. But I think also, at the same time, the world changed in that now we have way more use cases, way more drive for the use cases where you would use these Montgomery curves and then shoot yourself in the foot.
David: So I think there are two effects happening.
Deirdre: That’s a very good point. It used to be very much: you have a certificate signature and you have key exchange in TLS, and that’s all, for TLS or a VPN, or an app install where you’re verifying a developer certificate or something like that. And now we have a lot more things happening with both kinds of curves than we originally thought.
Okay, do we want to go on to the next one?
David: Yeah. I think Andrew Whalley, also from Chrome, makes a great point that "nonces develop an attractive patina over time, similar to copper, so they should in fact be reused as often as possible." And, you know, I agree with this take, and I think Chip and Joanna do as well. We all love an attractive patina and some shiplap.
Deirdre: Oh my God. I know he’s trolling, but yeah, no. A nonce is a number you use once: n-once, nonce. And there’s also "nonce" as a slang term in the UK, which is just, like, a dumb person or whatever, but no.
David: A number you use once... as many times as you want.
Deirdre: Okay, yeah. They should not, in fact, be reused as often as possible. Although, it’s funny, is it really reuse? I mean, it technically is, but is it really being reused if you’re generating a "new" nonce every time and you’re just generating the same nonce every time, because you’re bad at generating nonces? Like all zeros or something like that.
David: Oops. All zeroes.
Deirdre: All zeros.
David: Or always four.
Deirdre: Is that an XKCD?
David: Yeah, that’s the XKCD getRandomNumber, "chosen by fair dice roll."
Deirdre: "Guaranteed to be random": four.
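The "oops, all zeroes" failure mode is easy to demonstrate. Here's a toy sketch where a fake SHA-256-based keystream stands in for a real stream cipher like ChaCha20 or AES in CTR/GCM mode; the function names are invented for illustration, and this is emphatically not a real cipher, just the XOR structure that makes nonce reuse fatal:

```python
# Why nonce reuse is catastrophic for stream-cipher-style AEADs:
# ciphertext = plaintext XOR keystream(key, nonce). Repeat the
# (key, nonce) pair and the keystreams cancel.
import hashlib

def keystream(key: bytes, nonce: bytes, n: int) -> bytes:
    # Fake keystream: hash(key || nonce || counter) blocks. Toy only.
    out = b""
    counter = 0
    while len(out) < n:
        out += hashlib.sha256(key + nonce + counter.to_bytes(4, "big")).digest()
        counter += 1
    return out[:n]

def encrypt(key, nonce, pt):
    return bytes(a ^ b for a, b in zip(pt, keystream(key, nonce, len(pt))))

key = b"k" * 32
nonce = bytes(12)                       # oops, all zeroes, both times
c1 = encrypt(key, nonce, b"attack at dawn")
c2 = encrypt(key, nonce, b"defend at dusk")

# c1 XOR c2 == p1 XOR p2: known plaintext on one message decrypts
# the other, with no key recovery needed at all.
xor = bytes(a ^ b for a, b in zip(c1, c2))
recovered = bytes(a ^ b for a, b in zip(xor, b"attack at dawn"))
assert recovered == b"defend at dusk"
```

With AES-GCM specifically it's even worse than plaintext recovery: a repeated nonce also leaks the GHASH authentication key, so the attacker can forge tags.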
David: We also have Lucas Meier saying, "timing attacks don’t really matter." My amendment to that is that timing attacks are either so incredibly trivial and obvious that they poop the key out immediately, or they don’t really matter. That’s kind of been my take, but, you know, I never learned how Spectre and Meltdown actually work.
David: Chris Palmer would yell at me.
Deirdre: There's the microarchitectural speculation attacks — that's one battle, one bag of bits, literally. And then there's the literal: do you have a constant-time algorithm? Can I just listen to your laptop, or look at how fast you respond, and use that information as part of a timing attack?
And if you look at power analysis — one of the coolest things, and usually if you slap some AI on a thing you're just, you know, slapping AI on it — one of the cool things from some other Google researchers is that they literally trained an AI on power recordings of, I think, AES in one case and ECDSA in another.
And they could literally feed a new recording to the trained model, and it would say: oh yeah, here's 200 of the 256 bits of this curve's key.
David: They did this?
Deirdre: Yes, yeah. And it's just — especially when you're able to train an AI to do it for you. It used to be that you would get a power analysis recording, and you could just look at it and be like: oh, look, that's bit one, bit two, or block one, block two, or something like that.
And you could see the bits right on the spectrogram, or an oscilloscope trace or whatever, and be like: look, those are the bits — each one a zero or a one, 256 of them, depending on your secret key. It just falls out. So for that sort of stuff I'm like, yes, it kind of does matter. But it does depend on how they're measuring: if it's so blatant that they can measure it over the network, you're in a pretty shit position.
Sometimes it has to be over the local network — and if your attacker is on your local network, like, what are you doing? But you don't necessarily need a constant-time algorithm to protect against that; it might be a different layer of your system where you have to defend against that timing. Whereas when it comes to: someone has to slap a sensor on the wall next to your laptop, in the office next to yours, to record a power analysis, and then either look at it with human eyes or feed it into a trained AI to get the AES key out — you have to...
David: Get literal locks.
Deirdre: Yeah, Like you need to look at your threat model and see, if that is within your threat model.
And I think when it comes to that sort of stuff, which is like, if you have an implementation of a crypto library and you don’t know. Where someone’s going to deploy it. And it’s possible that they’re going to deploy in a scenario where yes, those sort of attackers are within your threat model. Then you just sort of want to do a little bit of belt and suspenders, if you can afford it and you probably can afford it.
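A minimal illustration of the "incredibly trivial" end of that spectrum — an early-exit byte comparison, whose running time leaks where the first mismatch is, versus Python's constant-time `hmac.compare_digest` (the `mac`/`guess` values here are just made-up demo bytes):

```python
import hmac

def variable_time_eq(a: bytes, b: bytes) -> bool:
    # Early-exit comparison: running time depends on the position of the
    # first differing byte -- exactly the signal a timing attack measures.
    if len(a) != len(b):
        return False
    for x, y in zip(a, b):
        if x != y:
            return False
    return True

def constant_time_eq(a: bytes, b: bytes) -> bool:
    # hmac.compare_digest examines every byte regardless of mismatches.
    return hmac.compare_digest(a, b)

mac = bytes.fromhex("deadbeef" * 4)
guess = bytes.fromhex("deadbeee" + "deadbeef" * 3)

assert variable_time_eq(mac, mac) and constant_time_eq(mac, mac)
assert not variable_time_eq(mac, guess) and not constant_time_eq(mac, guess)
```

Both functions return the same answers; only the timing behavior differs, which is the whole point of the belt-and-suspenders argument above.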
David: Yeah, you can afford constant-time, like, ladder algorithms and curve operations and so on.
Deirdre: At some cost of performance — we can do that.
David: But beyond that, I would be worried.
Deirdre: But then there's the speculative execution stuff, all these microarchitectural leaks, like what Spectre and Meltdown exploit.
David: Multitenancy is just hard
Deirdre: Yes. If you're worried about that in the cloud, there's a lot of other things you need to be worried about.
David: Yeah — if you're at the point where you're worried about Spectre because of multitenancy, and you're not someone who can actually worry about Spectre and do something about it, you probably just shouldn't be doing multitenant things.
Deirdre: Yeah. Yeah. If you are actually worried about that sort of attack, like you should be racking your own hardware
David: Yeah. Unless you're someone who can deal with that —
David: you need to be spinning up a lot more machines.
Deirdre: And, uh, yeah.
for crypto implementations, you can probably belt-and-suspenders it. For the other sorts of timing attacks — my spouse, who is a computer architect, rants about every yet-another-microarchitectural-leak paper. When Spectre and Meltdown came out, he said: oh no, these are going to be a dime a dozen.
And he was right. Because it's microarchitects talking to other microarchitects: did you know there's a leak in the lookaside buffer? And they're like, yes, we designed the lookaside buffer. We know they all leak like sieves, because we designed the fucking thing to
give you a fast computer.
David: Yeah. On the plus side, all of those extra ones got a couple of people I know through their PhDs.
Deirdre: Pluses and minuses. I was watching a talk by Alex Stamos ages ago where he was basically saying: here is the pyramid of bad things that can happen to people on the internet. Phishing, abuse, spam — tons and tons of stuff that uses systems as they are designed,
just uses them badly. And at the very tippy-top of the pyramid are side-channel attacks, speculative execution, zero-days: things that are extremely expensive to run, extremely fragile, and require you to be extremely targeted to pull off. And so I was like: hey, Nathan, someone agrees with you.
And he's like, yes — he's been showing that slide about speculative execution in his own classes ever since. And I mostly agree: these are things we should not completely discount, but they are very expensive to actually leverage against a target, and we should keep all of that in mind when we're spending our risk budget.
David: I think the more interesting question in that space is what you do to address that kind of stuff generally, and less what the next flavor of it is and that specific mitigation.
Deirdre: Because they kind of look and smell alike. Unfortunately, the last person I asked about this was on the attack side and was not at all interested in answering that side of the question.
David: People are working on it, though —
David: I believe it's the microarchitects, for the most part.
Deirdre: I have to give a shout-out to CHERI — I forget what the acronym stands for — which is basically trying to rethink, from the ground up, how these architecture and microarchitecture implementations are done, in a more secure, less leaky way. Everyone I've talked to, including my microarchitect spouse, says yes: CHERI is trying to do a holistic re-examination of how to do these systems in a less leaky way.
It's not just one-off fixes — "let's just fix the lookaside buffer" — where, okay, you fixed one leak out of a hundred.
David: I would probably get slapped if I didn't call out that I know people at Michigan are working on it, and that Matthew Hicks, who's now at Virginia Tech, has been working on microarchitectures that are more secure against timing attacks, I believe,
and hardware mitigations for control-flow stuff,
for a while. So people are working on it.
Deirdre: Cool. Very good. I mean, this is 40 or 50 years of how we've been building processors and designing the implementations in hardware, and now we have to say: no, no, no, you can't do that anymore. And if we stopped doing all of that, you'd cut your processor performance roughly in half.
So now we have to figure out how to get that performance back without leaking all over the place. It's a very fundamental change, so Godspeed to all those researchers trying to chart a new path.
Another take, from Eleanor — that's their display name:
"Additive notation is clearly correct."
I don't think this is a cancellable take; I think this is a correct take. This is coming back—
David: I think it depends on whether or not you need to do exponentiation — like, how many layers of operations are you doing? If it's just one or two, fine;
once you're at three or four, maybe you want to leave additive behind and break out the exponents.
Deirdre: Actually I was, I would be going in the opposite direction. Like if you’re, so just for a little context, like if you’re doing group operations in an abstract group, abstract algebra group, you can either represent the group operation as a plus. So like you add group elements together as the group operation, uh, this is additive notation, or you can represent it as you multiply, uh, elements together.
And this is multiplicative notation. Multiplicative notation is very common when you're doing classic finite-field Diffie-Hellman, where you have g^a, and you take (g^a)^b to get g^(ab), and that's your shared secret — stuff like that.
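For reference, the g^a / g^(ab) exchange described here can be sketched with toy parameters — the modulus, generator, and exponents below are tiny demo values, nothing like a secure group:

```python
# Classic finite-field Diffie-Hellman in multiplicative notation.
# Toy parameters for illustration -- real deployments use 2048-bit-plus groups.
p, g = 23, 5          # tiny demo modulus and generator; NOT secure
a, b = 6, 15          # Alice's and Bob's secret exponents

A = pow(g, a, p)      # Alice sends g^a
B = pow(g, b, p)      # Bob sends g^b

# Both sides compute g^(ab) mod p and arrive at the same shared secret.
assert pow(B, a, p) == pow(A, b, p)
```

Written additively, the same exchange would be a·G, b·G, and ab·G; the protocol is identical, only the notation changes.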
David: And it's popular because that's the literal operation you're doing. You'll see both in intro group theory, too, because there are groups like the integers mod p under addition, where the group operation literally is addition. And then it's frustrating when you write an add
and the operation is actually just a normal multiply, or vice versa.
Deirdre: Yes. It's also funny because additive notation is very common with plain elliptic curve groups — the elliptic curve groups we were talking about earlier, where you do point plus point plus point. And we talk about scalar multiplication because, if you are adding a point to itself n times, you just represent that as little n times big P, and that little n is the scalar — so we call it scalar multiplication.
So that's additive notation. Then, if you do pairing-based cryptography with elliptic curves, you get all these things called Miller loops — I've used a Miller loop exactly one time.
And I was like, am I doing this right? It's the Tate pairing and this, that, and the other thing — a map from one group and another group into a third group — and they kind of need multiplicative notation, because if you start with additive notation the math gets unwieldy so fast.
So I understand why they use multiplicative. but I lean towards additive for just regular schmegular, elliptic curves, but I, yeah, I’ve seen both. It’s a, it’s a very spaces versus tabs, Emacs versus Vim sort of thing, I think.
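The scalar multiplication described above is the same double-and-add loop in either notation — here sketched in the additive group of integers mod p (a stand-in for a curve group, chosen so the answer can be checked directly; all values are made up for the demo):

```python
def scalar_mul(n: int, P: int, p: int) -> int:
    # Compute n*P in the additive group of integers mod p via double-and-add.
    # Written multiplicatively, this exact loop is square-and-multiply:
    # the notation changes, the algorithm does not.
    result, addend = 0, P
    while n:
        if n & 1:
            result = (result + addend) % p
        addend = (addend + addend) % p   # "doubling" (squaring, multiplicatively)
        n >>= 1
    return result

# In this toy group, n*P is just modular multiplication, so we can check it:
assert scalar_mul(13, 7, 101) == (13 * 7) % 101   # 91
```

On a real curve, `result + addend` would be point addition and the doubling line would be point doubling, but the control flow is identical.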
David: I lived in finite-field world a lot when I was doing this, so it was all multiplicative.
Deirdre: We have a follow-up from Sophie, who says that tensor product notation is clearly correct. And I don't even know what tensor product notation looks like, but Sophie has explained multiple times how elliptic curve math is the obvious math that falls out of using category theory to do abstract algebra, or something like that.
And she makes a very convincing argument that like, this is the like mathematically pure correct way to represent these operations. And I’m just like, yes, I nodding. I don’t really know what you’re saying, but it sounds really mathematically elegant.
David: all I know is that when the tensors flow, people get really excited,
Deirdre: All right. Uh, we’ve got, "11 is the best prime"?
David: That’s false. Nine is the best prime. Moving on.
Deirdre: Um, no, nine is not a prime. And I think we've got one more take we'll go along with: "my hottest take: it's more likely that elliptic curve crypto/RSA are classically broken than that we will ever have quantum computers capable of breaking them." This is a good hot take.
David: my first reaction is no up to the elliptic curves and yes, to the RSA.
Deirdre: Oh, wow.
David: I think that, like, I don't know that we'd get a polynomial-time algorithm for factoring, but I could see us getting enough practical—
Deirdre: Like classical speedups for it.
David: —that even RSA-2048 could be
broken in the next, I don't know, 10 or 20 years. If it hasn't been secretly broken already.
Deirdre: Especially because a lot of the, like, field-sieve algorithms use lattices or something like that, right? And there's so much research into lattices right now, spurred by trying to get post-quantum cryptography — which uses lattices — ready to go. And I think we talked to
Deirdre: a little bit about that, but—
David: But we also know that there are some people who are prohibited from publishing further research on factoring and lattices, because they did work with the government.
David: We know that this exists — whether it actually turned into anything, who knows. And that was a while ago.
Deirdre: Uh huh. huh.
David: So I'm not necessarily speculating that it's already broken or something, just that I totally believe there are speedups on factoring.
David: elliptic curves, I’m more skeptical of.
In terms of "more likely" — let's say, at least for RSA, more likely classically broken than that we will ever have quantum computers capable of breaking it. One thing is that there's so much pure materials physics and engineering that has to be ironed out for real quantum computers — sufficiently large, sufficiently fault-tolerant, sufficiently error-correcting (probably a lot), maintaining coherence long enough to run your quantum circuit however many times you need to get your full distribution, blahdy blahdy blah.
David: Um, I was talking with Nadia Heninger about this recently, because she's on, I want to say, an NSF something about quantum computers. There was an NSF report, I believe a year or so ago, about the feasibility of quantum computers, and the main takeaway was that they don't believe it's possible
David: to develop one without the US knowing.
So that's the key takeaway, but the point Nadia drew out was that it's not clear current methods are going to scale — like you mentioned. And the other thing was: if you look back at the development of regular computers, you had a very nice virtuous cycle. Not only could you use the current computer to make better computers, but there were clear applications for early-stage — what we would now consider very bad — computers, in scientific research, HR stuff, taxes, all the things punch cards were used for. And then up through Unix, multi-user systems, the big rooms getting smaller: all of these things fed into each other.
There were problems in the world that could not be solved, and then could be solved once you created one. The problem quantum computers have is that all of the small ones are worse at solving problems than regular computers already are. So there's no virtuous cycle at all, because you can't do anything with a quantum computer until it's the one that lets you do everything.
Deirdre: I will, uh, play, I hate saying play devil’s advocate, I will say that for the very beginning of, , regular digital computers, they were competing with humans doing math manually, and, you know, with, you know, a calculator and an, you know, an Abacus and, you know, people doing algebra and calculus and stuff like that, with their minds and pencils and rooms of them.
So, like, the Manhattan Project had rooms of ladies who would do calculations: after a day of dudes doing physics in a room, it'd be like, ladies — hello, calculators — please do some math for me. And they would spit out some math. Up to a point, those women were better at doing math than the computers — until they weren't.
So I see your point. I see the general point of like, well, they can’t even like, what can they factor? It’s like, oh, the 55 qubit, quantum computer can factor 15. oh man. Like, that’s so exciting. It’s like, yeah, well, you know, the early, early digital computers couldn’t really do much of anything comparable to, either a very good human computer or, um, or a room full of them for a while.
The thing I want to get to is what a lot of people don't really see as the third horse: we have very large classical data centers that simulate quantum algorithms.
They are used to simulate quantum computers running quantum algorithms, and they are quite competitive. I think it was the Sycamore paper from Google where they said: we're able to do something with our physical quantum computer that a comparable classical data center trying to do the same thing can no longer match.
And we have tons of classical compute, and we can get good results out of simulating quantum algorithms with huge amounts of it. That may stay true for a while.
David: Yeah, but it should be pretty clear what speedup you get there, right? You take your quantum algorithm and apply a standard transformation — at some cost in algorithmic complexity — to turn it into a classical algorithm. Some algorithms fare better or worse, and the constants are going to be sneaky, but it should be pretty obvious
David: what's a good candidate for that and what's not.
David: No way is it going to be everything.
Deirdre: Yeah. And some of the target problems people want quantum computers for — protein folding, complex weather simulations, nuclear simulations, things we have traditionally used major supercomputers or huge data-center compute for — may not translate well to a quantum algorithm simulated on a classical data center.
I don't know enough about that to say. But I will say that for the near term, while we still have these NISQ machines — noisy intermediate-scale quantum computers, which is what we're going to have for the next, I dunno, 20-plus years — we still have these massive amounts of classical compute that can do things very, very quickly when trained on a problem we so far consider resistant to some of these attacks.
David: Deirdre, I'm being told that Real World Crypto is in fact not a really bad MTV reality show, and that it's happening soon. And in fact, you're in another country for it.
Deirdre: I am, and I will, I will clarify that there is, real-world crypto. There’s also the rugby world cup. I am not in the country for the rugby world cup. So when you hashtag RWC, you have to be careful. You have to caveat it and spell it all out. Real world crypto. Yes, I am on my first international trip since COVID times.
I am stoked. I am in Amsterdam for Real World Crypto. I have live-tweeted Real World Crypto for several years running. So, for the first time in a while: live from Amsterdam, it'll be Real World Crypto in a couple of days.
David: Yeah, so everyone should follow Deirdre
Deirdre: yeah. , I am stoked. We have, I’m stoked to see people in real life, , that I’ve only seen online for several years.
Um, and it looks like we've got a pretty cool program. There will be online components to Real World Crypto, which is nice — not just an in-person component — for the people following along from not-Amsterdam.
All right, so there's three days of stuff. We already talked about side-channel attacks; there's a session on side-channel attacks, including "Spectre Declassified," so we'll see if there are any updates on Spectre — I'm not really sure what's going to be declassified about that. ""They're not that hard to mitigate": what cryptographic library developers think about timing attacks" —
I wonder what that's about. "Lend me your ear: passive remote physical side channels on PCs," which sounds almost exactly like what I was saying: you literally slap an EM sensor on the wall near your computer,
and you have a physical side channel.
A session on symmetric cryptography, uh, heavyweight versus lightweight,
Okay. "Rugged pseudorandom permutations and their applications." I don't know what that means.
We'll find out. Yeah, I don't know what "rugged" is.
David: on a Panasonic Toughbook.
Deirdre: I see — cool. We'll learn some tips and tricks from that session, or from that talk. "Cryptographic attacks on privacy."
David: The first one is a legal
Deirdre: oh boy.
Deirdre: Yeah, that'll be fun — and it will be fun because it plays into the threat model of what you're trying to deploy. "All about that data: towards a practical assessment of attacks on encrypted search." Cool — Seny Kamara does a lot of research on this; I'm interested. "Privacy attack on the Swiss Post e-voting system." All these e-voting systems — you know, it's hard to do.
David: You know who else has an e-voting system?
David: The IACR.
Deirdre: Isn't it Helios?
David: Uh, I don't know. I think it's just, like, HotCRP — just a website, right?
David: you can make fun of it, but at the end of the day, it just doesn’t really matter.
Deirdre: Yeah. "Exposure notification private analytics" — I am very interested in this, because this is the Apple and Google exposure notification protocol that they deployed, which you may be using if you installed your local health department's COVID notification app. How do you deploy that and have observability into how it is performing, in a private way?
David: I think the question of whether or not that ended up being helpful is interesting. I wonder if they have anything to show about that?
Deirdre: I think I've seen—
David: I got a couple of those exposure notifications, and there was not enough information on them to make them actionable.
Deirdre: Yeah. I think where they were deployed in countries that had their outbreaks under control to a certain level — not like the US — they were somewhat helpful,
I think. I think a lot of us—
David: had access to testing.
David: By the time I started getting them, we had enough rapid tests.
Deirdre: Yeah, that's when it actually — because it's a tool in the toolbox, and if the rest of your pandemic response is just sort of shrug emoji, even the best-intentioned, best-designed tools can only go so far.
"Standing up MPC for privacy measurement." Yup. "Oblivious message retrieval" — shout-out to the Zcash researchers; Eran Tromer worked on Zcash, and I work on some Zcash stuff now. This is a cool technique that tries to use fully homomorphic encryption to let you figure out when you have encrypted payments on something like a blockchain, without having to trial-decrypt literally every single encrypted payment with your private key. Because it's private,
there's no other way to figure out whether you've got payments on the blockchain without leaking some data about who you are and whether you've gotten payments. On Thursday, TLS stuff: "Justifying standard parameters in the TLS 1.3 handshake." Okay. "ALPACA: application layer protocol confusion." I feel like we've heard about ALPACA.
Oh, is that new? No. Hmm.
David: Maybe this was at Black Hat at some point.
Deirdre: Oh yeah, that sounds familiar. Compression, from Mike Hamburg — cool. "Useful primitives: commit acts of steganography before it's too late." This is Matt Green and one of his students, Gabriel. I'm interested in this; I don't know what it's about, but I am interested. "Password-based key exchange: storage hardening in the client-server setting." Okay — I know a lot of people are into PAKEs and want nicer, better PAKEs, so this is CHIP and CRISP. I am interested to hear more.
Huh. "Rebuilding Meta's ad stack with multi-party computation," from Meta, uh,
AKA Facebook. Interesting.
Ooh, secure messaging — secure messaging authentication against active man-in-the-middle attacks.
Okay: "Continuous authentication in secure messaging." This is interesting because, in your Signal setting, you verify your long-term public keys with each other, maybe the first time. And then the way the Signal protocol does this, they do a triple Diffie-Hellman: you mix in key material from your long-term keys and your ephemeral keys, and it all gets chained together with hashing and things like that.
David: And we all know three times the Diffie Hellman, three times the security
Deirdre: Sure. But, you know, you rarely rekey in that setting — you just have a long, long-lived thing, with new ephemeral keys and stuff like that.
So it's interesting — whether they're analyzing if you're updating continuously, or whatever they're actually analyzing. Oh yeah.
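The key-mixing idea behind that triple Diffie-Hellman can be sketched like this — a toy illustration, not the real X3DH construction: the group parameters are demo values, the fixed "secret keys" are made up, and real implementations use X25519 with an HKDF:

```python
import hashlib

# Toy sketch of the X3DH idea: mix several Diffie-Hellman outputs
# (long-term x ephemeral, ephemeral x long-term, ephemeral x ephemeral)
# into one session key, so no single compromised key pair is enough.
p, g = 2**127 - 1, 3          # toy group; NOT secure parameters

def dh(priv: int, pub: int) -> int:
    return pow(pub, priv, p)

# Long-term identity keys and one-shot ephemeral keys (made-up demo values).
a_id, b_id = 1234567, 7654321
a_eph, b_eph = 1111111, 2222222
A_id, B_id = pow(g, a_id, p), pow(g, b_id, p)
A_eph, B_eph = pow(g, a_eph, p), pow(g, b_eph, p)

def derive(*shared: int) -> str:
    # Chain all DH outputs through one hash, standing in for a real KDF.
    h = hashlib.sha256()
    for s in shared:
        h.update(s.to_bytes(16, "big"))
    return h.hexdigest()

alice_key = derive(dh(a_id, B_eph), dh(a_eph, B_id), dh(a_eph, B_eph))
bob_key = derive(dh(b_eph, A_id), dh(b_id, A_eph), dh(b_eph, A_eph))
assert alice_key == bob_key   # both sides derive the same session key
```

The point of mixing three (or four) DH results is exactly the "contributory" property discussed above: long-term and ephemeral material both feed the chain.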
"An evaluation of the risks of client side scanning". We did an episode on Apple’s client side scanning proposal,
David: with Matt Green, who might be one of the people giving this.
Deirdre: Yes. , so this may be more about that sort of thing.
Not just the apple one, but probably, , strongly informed by Apple’s, , technical proposal that they have shelved. And we have not seen any evidence that they are, , taking it off the shelf, uh, in the past, I dunno, six months,
"Four attacks and a proof for Telegram." Oh boy.
David: Is one of the attacks that it's in plaintext mode?
Deirdre: One of the attacks is that literally everything is plaintext, except when you click five times into the send-a-private-message channel. So — "Making Signal post-quantum secure." I am always interested in this. I think I looked at this paper when it first came out, but I have not read the whole thing.
So I'm interested in this. Let's see. Ha: "Trust dies in darkness: shedding light on TrustZone cryptographic design."
It always does.
David: What did we learn? More microarchitectural side channels, we can only assume.
Deirdre: But it's not just microarchitectural side channels — it's: can you do SGX, can you do TrustZone, in an actually secure manner? And it seems very, very hard. So, one more chink in that armor, beyond the speculative-execution or lookaside-buffer side channels. "On the (in)security of ElGamal in OpenPGP" — like, stop, stop using OpenPGP?
You know, this might be an interesting analysis, but I'm just sort of like: why? Who is still using it? Please stop using it. Can we help you? Can I offer you a Signal in these trying times? And the answer is that Signal does not always fit the scenario where you want to send, say, an encrypted email. But, you know, can we talk to you? Can we help you?
"Don't break the web: APIs for Chrome's privacy sandbox." Oh boy. Discussions, other things, a concert by a band I'm not familiar with. And then the third day: "Quantum-resistant security for software updates on low-power network embedded devices."
Cool — we need secure update channels. "Drive (quantum) safe! Post-quantum security for vehicle-to-vehicle communications."
David: Vehicle-to-vehicle communication is really weird cryptography. They use, quote-unquote, short-lived certs — I mean, they're all using curves and so on, but the design of the protocols is just very odd. And they're all IEEE standards,
the kind you have to pay money to access. So most of the people you'd stereotypically think of as the cryptography community aren't really paying much attention.
David: The EFF has an article about this somewhere.
Deirdre: Cool. Do car people just not think about the internet? What does IETF stand for?
David: I mean, no — the car people have been doing IEEE standards for
Deirdre: Oh, they have.
David: longer than the IETF has existed. So—
Deirdre: So they just sort of were like, oh, we’ve been doing this for a while and they just ignore IETF crypto stuff?
David: Well, they're doing standards for communicating between cars, so
it's not clear that you'd even want to bring the IETF in.
Deirdre: Ooh, I'm interested in this: "Surviving the FO-calypse" — I'm pretty sure FO here is Fujisaki-Okamoto, and I'm probably saying that wrong — "securing PQC implementations in practice." So, for the NIST post-quantum competition, a lot of the possible candidates for replacing key exchange didn't fit into a Diffie-Hellman-esque key-exchange API; they fit more into something called a key encapsulation mechanism, or KEM.
One way to make your scheme a secure KEM is to apply this FO transform, and so almost all of these KEM candidates applied it — and apparently almost all of them did it wrong, or badly, or insecurely.
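The keygen/encapsulate/decapsulate shape of the KEM API can be sketched with a toy Diffie-Hellman-based KEM — demo parameters, nothing about this is secure, and this is the API shape only, not the FO transform itself:

```python
import hashlib
import secrets

# Toy KEM illustrating the keygen/encapsulate/decapsulate API that the
# NIST candidates target, built here from finite-field Diffie-Hellman.
# Demo-only parameters and construction; NOT secure.
P = 2**127 - 1   # a Mersenne prime, used as a toy group modulus
G = 3

def keygen():
    sk = secrets.randbelow(P - 2) + 1
    return sk, pow(G, sk, P)

def encapsulate(pk):
    # The "ciphertext" is an ephemeral DH share; the shared secret is a
    # hash of the DH result. The sender never needs the recipient online.
    r = secrets.randbelow(P - 2) + 1
    ct = pow(G, r, P)
    ss = hashlib.sha256(pow(pk, r, P).to_bytes(16, "big")).digest()
    return ct, ss

def decapsulate(sk, ct):
    return hashlib.sha256(pow(ct, sk, P).to_bytes(16, "big")).digest()

sk, pk = keygen()
ct, ss_sender = encapsulate(pk)
assert decapsulate(sk, ct) == ss_sender
```

Note the asymmetry with interactive Diffie-Hellman: only the encapsulator contributes randomness per session, which is part of why lattice and code-based schemes fit this API more naturally than a DH-style one.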
Deirdre: That's why it's the FO-calypse.
David: What's also kind of funny about this talk is that the people listed are all from NXP —
David: and NXP also makes a bunch of the chips for vehicle-to-vehicle communications.
Deirdre: How convenient
All right: cryptographic translation, threshold crypto, and zero-knowledge stuff. Cool — a review of arkworks, which a lot of zero-knowledge-proof Rust programmers use.
It’s a very nice ecosystem if you’re just trying to prototype your thing and you need some, , fields, some groups, some curves, stuff like that.
That's where you go — arkworks. "Decentralized private computation on Aleo." Cool. "SnarkPack: practical SNARK aggregation," coauthored by Mary Maller, who I've worked with on the Zcash stuff. "Zero-knowledge middleboxes," and puncturable encryption,
"A fine-grained approach to forward-secure encryption," and more. Cool.
I'm looking forward to it. Yeah — looking forward to an in-person component for Real World Crypto. And, uh,
Deirdre: I'll be on the internet tweeting about it, perhaps. And maybe I'll actually remember some of it.
David: Real World Crypto is great; I strongly recommend attending. It's probably the most approachable cryptography-focused conference that I know of, whether you're approaching it from industry or from the non-cryptography security community. That was the one I would try to go to back when I was doing academic work, and I'd probably be there this year too if it weren't in Europe.
Deirdre: Yes. I don't know where the next one is going to be, but it might come back to the S— it rotates.
David: It has rotated west coast, east coast, Europe, every three years.
Deirdre: Yeah. So I think the next one, in 2023, if in person, will be in North America. Yay.
David: Probably the Bay Area.
Deirdre: Nice. I know they record stuff and put it online. You can attend remotely and log in — that's how we did it last year, and that was fun too. So you can do that; I think you have to go to the IACR website and sign up. It starts on Wednesday.
David: Time zones make that a little difficult, but
David: you can hop in for a session or two.
Deirdre: yes. yay. okay. That’s it. That’s all I got.
David: All right. And then we have merch available, so don’t forget to like subscribe and buy some merch.
Deirdre: Well, the merch, right? merch.securitycryptographywhatever.com is up — you can buy hoodies, mugs, stickers, other things.
David: I'm drinking out of one of the mugs right now.
David: I can testify that it's great.
Deirdre: Yes. And I have a fresh Security Cryptography Whatever black zip hoodie to add to my not-at-all-large collection of black zip hoodies. So if you would like some merch, you can get some nice merch, and we're vaguely working on more different types of merch besides black things with purple writing and cool art that evokes the hell we find ourselves in trying to create software.
but yeah, merch.securitycryptographywhatever.com. thank thank you. thank you. Thank you.
David: use fly.io for all your hosting needs and pinboard.in for all your bookmarking.
Deirdre: We’re not sponsored. We just like them. Okay. Bye. I’m hanging up now.