Cruel Summer: hybrid signatures, Downfall, Zenbleed, 2G downgrades

We’re back from our summer vacation! We’re covering a bunch of stuff we saw and did.

This rough transcript has not been edited and may have errors.

Deirdre: Hello, welcome to Security Cryptography Whatever. I’m Deirdre.

David: I am David.

Thomas: I am… Thomas? I’m Thomas.

Deirdre: You are Thomas,

David: We sure hope.

Thomas: Pretty sure.

Deirdre: Coming live from your new studio.

Thomas: It’s true. I’m in my new place. I’m very happy.

Deirdre: Cool. Uh, hi. We’re back. Back from our summer vacation, and we’re gonna tell you what we did on our summer vacation. It involved things like Black Hat, and DEF CON, and CRYPTO, and other things. So this is mostly us talking about all of those, and maybe some other stuff that caught our eye since we last spoke. Things like Pixel attacks, why you should deprecate 2G, why you should write your modem firmware in a memory-safe language, and why it’s annoying to call everything that uses classical crypto and post quantum crypto ‘hybrid’, even if that means a different thing in every single setting that you use it in. Anyway!

So we went to Black Hat, and by we, I mean myself and David. What did you think of Black Hat this year? David?

David: Mostly I just missed Thomas.

Deirdre: Yeah.

David: Thomas is distinguished enough that he doesn’t need to come to Black Hat. People come to him instead.

Deirdre: Mm-hmm. He almost lured us all to come to Chicago instead of Las Vegas in August.

Thomas: I have a giant new front porch. We’re gonna have front porch con. People will show up and be on my front porch. It’s gonna be great.

David: A good chunk of my Black Hat was spent doing Cabana Con, at one point showing up to the wrong cabana. It’s very easy to tell which cabana’s associated with security people.

Deirdre: Yeah, maybe, maybe next year it’ll be Porch Con instead of Black Hat.

Thomas: You guys were suckers, and you guys both went to Disease Con.

Deirdre: Did not get a disease.

David: Did not get sick, because we wore masks in the situations where it made sense to wear masks.

Deirdre: I was also just not around a lot. I was going to and fro a lot and, uh, but yeah. I got, I got Covid last year at Vegas. Uh, I did not get it this year. Magic. Magical.

David: Also, in early June, I made up a pre-existing condition and went and got another booster.

Deirdre: Oh, hell yeah. You’re smart. I wanted that and well, I’m waiting for the new, new one, the updated one. Anyway. But yeah, we did not get diseased at Disease Con. It was nice.

Thomas: My, my, the question I was building towards was, did you see any talks that were worth seeing?

Deirdre: I liked a couple of Black Hat talks. So one of the headliners was yet another processor microarchitectural vulnerability, specific to Intel processors, called Downfall.

Thomas: Okay. Before you go there, before you go to Downfall,

I have the chronology for this stuff vividly. This is all what we did on our summer vacation, and I’m gonna keep the chronology here. So, you’re talking about Downfall.

Deirdre: yes,

Thomas: Downfall was, it was like a, it was a registered Black Hat talk.

Like, the reviewers reviewed it and all that. I think we knew it was coming, right. Um, but Downfall hits Intel processors.

We have some Intel processors where I work, but we have, um, quite a few more AMD processors,

like most of our fleet is AMD EPYC.

uh, because of what I think, I’m not sure, but I think was an embargo snafu, um, Tavis Ormandy found probably, credibly, the all-time greatest microarchitecture—

it’s certainly, in my opinion, the best microarchitecture attack ever,

Deirdre: Uh huh

Thomas: on AMD hosts. They called it Zenbleed, for the Zen architecture, which of course happens to be most of my server fleet. Um, so this is, like, I think two weeks before the Downfall thing came out at Black Hat or whatever, there’s Zenbleed. And Zenbleed was amazing. So you’ve got the Downfall thing; we should talk about Downfall, although I sort of barely understand Downfall. Um, but Zenbleed is freaking awesome, and also horrible, but

Deirdre: Yeah. Alright, we’re gonna cover Downfall really quick, because I think Zenbleed is probably more interesting. Um, Downfall is basically: on Intel processors, they have these gather instructions, and it reminds me a lot of, like, garbage collection, but, like, speculative execution garbage collection.

So, like, you did some speculative execution, and then the processor under the hood is giving itself an instruction to clean up some shit it allocated in registers that it needs to get rid of. And it turns out that you can run a co-thread, uh, or hyperthread, on one of these cores, and you can maliciously leak information based on what is being gathered up by this sort of cleanup instruction. That is only on Intel processors; it’s only used for its vector instructions, its SIMD instructions, but also for its AES-NI and SHA-NI instruction sets. Which, yes, those are, like, built in, supposedly. I don’t know. I think most people think those AES-NI instructions are constant time and not very leaky. Turns out they are leaky on Intel.

Um, they released some microcode patches. It’s only Intel. It’s annoying if you have any, like, super-optimized finite field arithmetic implementations targeting specific backends, that are really fucking fast on Intel with vector instructions, or, you know, some weird intrinsics you have. You might have to be careful about it.

If you’re a cloud provider, or if you’re a VM provider, you turn off hyperthreading, you turn off that sort of stuff, if you have Intel processors. But that’s it?

Thomas: So I’m gonna say this just so somebody, after they eventually hear this, can correct me or set me straight. But one thing about this is, um, Zenbleed was released, I think maybe by accident, a couple weeks before Downfall was published, and both of them were released with exploit code, or with proof of concept code, or whatever you wanna call it.

Right. Um, well, in Zenbleed’s case, let’s properly call it exploit code, ‘cause it was exploit code, right? It really worked. You just ran the Zenbleed exploit, and everything that was hitting strlen or strcpy anywhere in your system was showing up in plain text on your screen.

Right? So Downfall also had proof of concept code, but all of the code that I read (and in the paper they make reference to this) relies on PTEditor, which is like a kernel module, um, which you would not be running.

Deirdre: Right. Okay. Hmm.

Thomas: So I, I’m not totally clear on which— I assume that this is just to set up the test environment and make things, you know, obvious.

But, like, they released a proof of concept thing that involved installing a kernel module. Um, which is like, okay, I’m just not gonna install that kernel module. Which, obviously, that’s not the whole attack, right? But I don’t fully understand how exploitable Downfall was? In particular, just how situational Downfall was?

So if you read the Downfall paper, they talk about finding gadgets in the kernel, which is like a classic Spectre-type spec-ex thing, where, um, you know, there are particular places in the kernel where you can pick up leaked data from, right? Because of, you know, vulnerability to speculative execution.

It, it seemed to me, reading the Downfall paper, that Downfall kind of followed the same pattern, like you’d need to know what you were targeting. Um, and

again, Zenbleed, you just ran Zenbleed and you can see passwords on the screen.

Deirdre: Yeah, Downfall, you definitely have to target a victim process or thread, and you want to try to be co-located, or, you know, co-hyperthreaded, on the same core, to try and get the leaked stuff from your victim co-thread, basically. Yeah, you have to target something specifically to get some bang for your buck, as far as I understand it.

So yeah, Zenbleed sounds a lot worse.

David: Zenbleed is leaking a wide register, whereas Spectre and Meltdown, and I think Downfall, are leaking, like, bytes in memory, sometimes, right?

Deirdre: I don’t even know if they’re leaking in memory. I think they might be leaking in registers on the…

Thomas: For, for Zenbleed? Yes. It’s directly off of the registers, so.

David: Yeah. But for, like, Spectre and Meltdown, you’re effectively leaking from memory. And Downfall… is Downfall a register, or is it memory?

Thomas: No. My understanding is that Downfall, in the gather instructions, like the micro-ops that implement gather,

or however gather is implemented under the hood of the microarchitecture: what’s happening is the CPU, um, uses, like, a temporary buffer. So what you’re doing is, you’re doing, like, a non-contiguous read, so you’re picking up bytes from random places in memory or whatever, right, and then assembling them into a single read. The CPU is allocating a buffer to do that, and that buffer is aliasable, is my understanding. Um, so I’m really fuzzy on Downfall. So again, somebody just call me an idiot online and tell me the right way to think of this.

But that’s what I understand to be happening, right? Is

David: Mm-hmm.

Thomas: the, the leak here is in a temporal buffer that’s being used, to kind of add up the bytes that you’re trying to read, non-contiguously.

Deirdre: Yeah,

Thomas: Zenbleed is hilarious.

David: Well, this is, I think, somewhat counterintuitive then, but I think leaking a register is just way worse than leaking memory. At least if it’s, like, a big register, right? Because everything is going through it; you don’t have to, like, find stuff. Whereas with, like, Spectre and Meltdown, you do this whole process to find specific parts of memory and recombine things.

Whereas if you leak a 24 byte register that’s primarily used for, like, comparing contiguous chunks of memory, yeah, you’re just gonna get useful stuff for free, whereas you had to put effort in for

Thomas: So

David: this.

Thomas: Zenbleed is, yeah, Zenbleed is a register file use-after-free. Like, literally, that’s what it is. So, you know, if we’re thinking in terms of writing assembly instructions, there’s no such thing as freeing, right? The analogous thing that we’re doing with registers is clearing them, like, we’re writing zeros to them.

And so the ‘freeing’ here is, um, as an optimization: when you zero out a range of registers, the architecture, like the AMD architecture, will set a flag saying these registers are now zeroed out. They’ve been freed, right? Which means that something else can just write to them, because, you know, the architecture’s just gonna assume from that point on that whatever’s there might as well have been zeros. Right?

So, the registers are freed. They get used somewhere else. Something else aliases those same underlying physical registers to, you know, some logical register somewhere, right? They get written to, just like any use-after-free attack, right? But the thing that freed them was executed speculatively.

Right.

Deirdre: Yeah.

Thomas: you mispredict the branch, you roll it back, and they didn’t catch all the corner cases for it. So there are places where you can roll it back, and whatever data was written into the XMM registers is still there after the, uh, mispredict rolls back the zeroing out. Um, so basically, you’re tricking other processes into writing

into registers that you’re gonna get to see. Which is awesome. Just a really awesome bug. The point was made on the orange site, that OpenBSD for instance… so, the huge problem with Zenbleed is that modern libcs all vectorize their string operations, right? Because these are all operations where you’re doing things with a bunch of bytes at the same time, and most of them vectorize pretty nicely, right?

David: That is a good thing to do.

Deirdre: Yeah.

Thomas: So it makes a lot of sense, if you’re dealing with a string and you have, like, you know, registers that you can load 16 bytes at a time into, you just load the string into them and then do vector operations on it, right? So all of the libc string and buffer operations for Microsoft and glibc go through these, uh, via the AVX registers, whatever they’re called.

I always forget what they’re called, right? But, like, the XMM and YMM registers. All of the string data that you’re dealing with in a program is gonna go through those registers. You basically get to sniff every string that’s going through the CPU. Um, which, again: you type make and you get a little Zenbleed executable, and then you type ./zenbleed and there’s, like, passwords on your screen.

It was amazing.
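The shape of that bug is easier to see in miniature. Here’s a toy Python model of the register-file “free on zero” pattern Thomas is describing; it’s purely illustrative (real CPUs aren’t Python, and the actual bug lives in speculative handling of vzeroupper-style zeroing), but it shows why a rolled-back “free” exposes someone else’s data:

```python
# Toy model of the Zenbleed pattern: a "zeroed" flag on a physical
# register acts like free(), and a mispredicted rollback forgets that
# the slot was already handed out. Illustrative only.

class PhysRegFile:
    def __init__(self):
        self.data = {i: b"\x00" * 16 for i in range(4)}  # physical XMM-ish slots
        self.zeroed = {i: True for i in range(4)}        # the "z-bit" optimization

    def speculative_zero(self, i):
        # Instead of writing zeros, just flag the slot as zeroed/"freed".
        self.zeroed[i] = True

    def reuse_by_other_context(self, i, payload):
        # The slot looks free, so another context's vectorized strlen()
        # stages its string bytes in the same physical register.
        if self.zeroed[i]:
            self.data[i] = payload

    def rollback_zero(self, i):
        # The buggy corner case: the zeroing was speculative and gets
        # rolled back, but nothing restores or clears the slot.
        self.zeroed[i] = False

    def read(self, i):
        return b"\x00" * 16 if self.zeroed[i] else self.data[i]

rf = PhysRegFile()
rf.speculative_zero(0)                        # attacker "frees" the register
rf.reuse_by_other_context(0, b"hunter2-pass")  # victim's libc stages a string
rf.rollback_zero(0)                           # mispredict rolls the free back
print(rf.read(0))                             # attacker now reads victim bytes
```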

David: Yeah, it seems like, like way worse than like Spectre and Meltdown were.

Deirdre: I

David: in terms of like actual security impact

Deirdre: well.

David: and, and I think that’s because it’s hitting these registers and not hitting, like, memory. You don’t have to target anything. Everything’s coming to you. You just sit there and you wait.

Deirdre: I think it’s fixable though. Like, you literally just have to recompile your software, in theory, and you just say, no vector instructions, and you’re done.

David: Well, that’s not really a fix.

Deirdre: No.

David: my computer off.

Deirdre: Yeah, but like as opposed to Spectre Meltdown, where it was like, what do we do? It’s like, well, you could deploy process isolation for all of your software.

It’s like, that’s really hard. It’s like, okay, well we have to fix all the processors, or you know, deploy. Yeah.

David: I mean, AMD did a microcode update, right? Like, it,

Deirdre: Yeah, so I think, you know, yes, it’s very bad, but if you can do a quote software fix, which is recompile by telling your compiler, do not use vector instructions, and you get a little bit of a performance hit, your mileage may vary what little bit means to you, but that’s tractable as a remediation.

Whereas Spectre and Meltdown, at least, you know, for the first year of it, it was just like, what the fuck do we do? And it’s like, uh, you know, Chrome: deploy process isolation, uh, you know, add retpolines, or, you know, whatever; have this very bespoke little gadget to protect your, uh, your compilation. That’s not as easy.

So

David: Well, we did manage to deploy, like, process isolation too, right? Like, Chrome did this. Chrome had already been working on it; site isolation was in flight, and then it got accelerated and released, with perhaps a larger performance impact than it might’ve had, uh, if it just went at the regular pace of things.

But, but all of that has been like since clawed back, like,

Deirdre: Clawed back.

David: Yeah, I, I, I mean, uh, uh, all of the performance impact of site isolation has been mitigated by future performance improvements. However, every time, uh, Chrome gets like 10% faster, users open 10% more tabs and websites get 10% slower. And so everything just feels like it gets worse over time.

Deirdre: But yeah, I’m just making the point that that’s not an easy lift. That was like a major architectural endeavor for your large software applications.

David: Yeah, but also the only software that it applies to is like web browsers, and they did it like, even like Firefox and, and Safari have done various forms of this as well. Uh, I would say like that’s largely mitigated. I can, I can feel like three platform security and web platform security people staring at me sideways while I say that.

But like, I.

Deirdre: Yeah. But yeah, it’s a lot.

Thomas: So what else? What else was at Black Hat?

Deirdre: What else did we do? Um, okay, so the other one that, uh, really tickled several of my favorite things: I think it was the Android Red Team that was like, oh, we found that the Pixel modem firmware, I think it was the Pixel 6 modem firmware, had an out-of-bounds memory error, or two, uh, two CVEs.

And they were able to leverage these two with a malicious, uh, mobile base station, to force a downgrade to 2G mobile security. And using these memory safety vulnerabilities in the modem firmware, with the insecurity of 2G as a protocol, they were able to, like, shove several, I think, like, 200 bytes, 256 bytes, arbitrarily into the heap, and overwrite, uh, stuff in the heap, and just, like, get a full-on foothold in your Pixel.

And so then when you switched off of your malicious base station to a, you know, a reliable, trusted base station, they still had a foothold on you. And so in their demo they were able to be like, oh, you’re trying to reset your Twitter password, or X Twitter password: we’re able to intercept all of your, you know, security codes and everything.

We’re able to overtake your account. Or, you know, that was a very, in my opinion, low-stakes demo of what they would be able to do when they have a full-on foothold in your Pixel device.

Thomas: So this is like, this is a vulnerability in the baseband.

Deirdre: Yes, yes. By setting up a, uh, a malicious base station, they were able to force you to downgrade to 2G, and that’s how they were able to exploit it.

There were two, like, two CVEs, and they were out-of-bounds or whatever. And at the very end, like the whole talk I was like: so, one, yeah, 2G’s bad, but you wouldn’t have been able to pull this off if you didn’t have these memory safety issues in the firmware implementation of the modem.

And at the very last slide of their presentation, they’re like, yep, we’re experimenting with rewriting this modem firmware in Rust. I’m like, yay.

Thomas: So hold, hold on a second. I’m just, I’m just trying to get my

head around it. Right.

Deirdre: yeah. Yeah.

Thomas: they’ve got code execution in the baseband on the Pixel phone. What does that directly get you?

Deirdre: Um, it just

Thomas: How do you go from there to Twitter is, I guess my question.

Deirdre: Yeah, yeah, yeah. I’m

David: Uh, I think you mean x.

Deirdre: Yeah. X Twitter, or, yes. Uh, the attacker fully…

Thomas: from there to Twitter?

Deirdre: The attacker fully controls up to 255 bytes written into a one-byte buffer on the heap. That allows them to overwrite the heap header of the next adjacent chunk with fully controlled data, uh, allows them to write a limited number of controlled bytes in the heap and corrupt adjacent heap objects.

Thomas: In the baseband, yes. But from there they’re getting, what? Are they just like, can they just watch 2G SMS messages or something? Or

Deirdre: I think so, because, I forget if it was SMS or if it was, uh, TOTP. I guess it must’ve been SMS, I forget from the demo. Um, but yeah, they were just getting an authentication challenge that was not FIDO, and, uh, they were able to intercept it, and they’re like, ha, I have your Twitter account now.

David: If there’s, like, any memory-mapped IO between the baseband and the operating system, you probably can effectively create a use-after-free in the kernel. But I don’t know what the interface is there, but

Deirdre: I don’t

think they went that

David: doing evil things with it, like, I don’t know.

Deirdre: Yeah.

Thomas: My antenna went up because it occurred to me that we’re talking about a code execution vulnerability in the baseband, and this is just a classic message board trope, which might be more true than I thought it was, right? But the idea of the baseband being compromised is part of the design threat model for both the Pixel and, you know, like an iPhone or whatever.

Right? Like they assume the base band can get popped.

Deirdre: Do they?

David: I don’t know.

Thomas: Yeah, so on an iPhone, the baseband is like a USB peripheral. It’s not USB, it’s HSIC, but HSIC is just on-chip USB, right. So there isn’t any shared memory there at all. It’s a peripheral. So in theory, if you pop the baseband on an iPhone, all your… I mean, you’ll get control of the baseband, which is why I’m wondering if that’s why the target is Twitter, or anything that does, like, you know, phone-system-based authentication.

‘cause sure if you do that, you can, like, you’ve got control over its connection to the phone system, which is very powerful, right?

But you can’t go into like, you know, Twitter’s process memory and go read things out of it.

Deirdre: Yes. It didn’t seem like it was that. It was very explicitly, we’re doing an authentication challenge over a, well, phishable challenge credential, not something coming in a completely different way. And it might’ve been, yes, it was SMS.

David: I think you’re right. Uh, and I think Android does the same thing too. And I think I can think of specifically who, um, is responsible for that, and who is probably, in between the time that I said I was wrong and when I said that originally, um, actively mad at me.

Thomas: But this is just in keeping with our theme of being aficionados of really effective downgrade attacks,

Deirdre: Yeah.

Thomas: from 3G to 2G, and then using that to, you know, tickle a memory corruption vulnerability, is high quality. Not as high quality as if they somehow managed to tunnel a 3G secret through 2G and then get the 2G thing to use that secret somehow that exposed it to everything else, but a close second to DROWN.

Deirdre: yeah.

David: Yeah, I was gonna say, we all know the best downgrade attack is going from TLS 1.3, to 1.2, to SSLv3, to SSL

Deirdre: v2. I mean, you’re laughing, but yes.

Thomas: I’m not laughing ironically, I’m laughing appreciatively

Deirdre: Okay. But yeah, I like that one. It was fun. It,

Thomas: Cryptography talks at Black Hat this year. So somebody, uh, extracted keys by looking at fluctuations in your power lights.

Deirdre: oh yeah.

Thomas: Neither of you saw that talk.

Deirdre: I did not

David: I did not see that one. Um, so people in grad school, when I was still a grad student, tried that, and, um, just absolutely failed. And then also might have accidentally DoS’d the internet connection for, for Michigan. Um, but I’m glad that someone figured it out.

Deirdre: Yeah.

David: They were, they were kind of trying to do the reverse thing.

They were like, if we scan the internet and then we point a camera at, uh, an ethernet port, can we figure out what this thing’s IP address is?

Thomas: I like that paper too. That’s very good. But we’re all, just from now on, going to assume that if anybody can see our power LEDs, they can also read every string that’s going through our, you know, XMM registers.

Deirdre: I mean, the only thing that blinks on any of my computers is the tiny YubiKey nano that’s sticking outta my computer, and I don’t even know what those, those lights mean. So,

David: the one I have blinks like blue and red when you need to tap it. Otherwise, I don’t think it blinks.

Deirdre: Uh, it just, it’s doing something and it, yeah, it aggressively blinks at me when they’re like, ‘TAP ME’.

David: Mm-hmm.

Deirdre: Confirm your proximity, human.

Thomas: And then we had two crypto talks at Black Hat that were about wallets,

Deirdre: yeah.

Thomas: MPC wallets. TSSHOCK.

Deirdre: Yeah, there was a threshold attack on, I think it was threshold ECDSA, which is a much more complicated threshold signing scheme than the ones that I have worked on, which are Schnorr, and very simple and short, and they have different properties. So it was funny, ‘cause, um, they gave this result, this research, at a, uh… oh gosh, what’s it called?

A workshop at CRYPTO, uh, CRYPTO in Santa Barbara, which was like a week after Black Hat and all that. And it was very funny because, uh, I remember seeing it for the second time and being like, oh yeah, I know these guys. I didn’t know they were gonna be here. Um, and then all the cryptographers in the, like, crypto attacks workshop walked up and they’re like, so what do you recommend we do to protect against these attacks? And they said, oh, I have no fucking clue.

Like, I just found the attacks. I don’t have any suggestions of what to tell you to fix your crypto protocol. If I recall, it was just sort of like, we were able to observe somewhere between, like, a dozen and a hundred threshold signatures, and that was enough to put together a forgery that would validate, um, or something like that.

And, yep, that sounds about right. A lot of the attacks on threshold signature schemes are basically like that, uh, the most naive ones especially. Not necessarily the one that was presented, but other ones: if you try to do just a naive approach to threshold signatures, especially with Schnorr, and especially deterministic nonces, which people like for signatures, for reasons:

you just do more than one, and you do a little bit of arithmetic, and you can solve for the private key, the signing key. It’s really ridiculous: you just get more than one threshold signature from honest parties, and a slightly not-well-built-enough threshold signature scheme will just either spit out a forgery you can compute, or spit out the private key, uh, from a naive implementation.

So it was kind of like, yep, this sucks, but this is how they classically fail. So that was fun.
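For the simplest version of the failure mode Deirdre is gesturing at, nonce reuse, the arithmetic really is just a couple of lines. Here’s a toy sketch in Python, over a small Schnorr group with made-up numbers; this is the textbook single-signer break, not the actual TSSHOCK attack:

```python
# Toy Schnorr over a tiny Schnorr group, showing the classic failure:
# reuse a nonce across two messages (as a naive deterministic or
# threshold scheme can end up doing) and a little arithmetic recovers
# the signing key. All numbers are small and illustrative only.
import hashlib

p, q, g = 2039, 1019, 4        # g = 2^2 generates the order-q subgroup mod p

def challenge(R, msg):
    h = hashlib.sha256(f"{R}|{msg}".encode()).digest()
    return int.from_bytes(h, "big") % q

def sign(x, k, msg):
    R = pow(g, k, p)
    e = challenge(R, msg)
    return e, (k + e * x) % q  # s = k + e*x (mod q)

x = 777                        # long-term signing key (the secret)
k = 424                        # nonce, wrongly reused for both messages
e1, s1 = sign(x, k, "pay mallory $5")
e2, s2 = sign(x, k, "pay mallory $5,000,000")
assert e1 != e2, "freak hash collision in the toy group; vary a message"

# s1 - s2 = (e1 - e2) * x (mod q), so solve for x directly:
recovered = ((s1 - s2) * pow(e1 - e2, -1, q)) % q
print(recovered == x, recovered)   # True 777
```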

David: Did you prefer hearing it at Black Hat or at CRYPTO slash did you actually hear it at Black Hat?

Deirdre: I think I saw it very briefly at Black Hat, and then I saw the whole thing, their whole presentation, at CRYPTO. I really wish they went into a little more detail cryptographically; they went into detail about their attack at CRYPTO. But it was just very funny: it was in a session with a bunch of attacks against cryptography, and I think they were better suited for a Black Hat audience than a CRYPTO audience, because they literally didn’t have any sort of, like, ‘you should probably tweak it like so, to make this harder for me’. They didn’t have any suggestions like that, because they’re much more of an attacker than a crypto builder, I guess.

I don’t know.

David: And what was it like being in Santa Barbara during a hurricane for CRYPTO?

Deirdre: Fucking awesome, because there was also an earthquake. Um, the hurricane was lame, because we were far west enough that I was like, you know, is this your king? Is this your fucking hurricane? It was lame. It was a little bit of wind and a little bit of rain, and I was talking to my parents, and I was like, this is an average-day-in-Ireland sort of storm. And then I was, like, waiting for the wind to start, or exist at all, and I felt a little wobble, and I was like, oh, I wonder if the wind is blowing. And then I looked outside and I was like, oh, it’s not wind, it’s not really blowing right now.

And then two minutes later it was like, ‘earthquake!’ And I was like, oh, great. In the middle of my lame hurricane I get a lame earthquake. Cool. I’ve checked all the boxes for my trip to California. It was nice. It was nice to see people, it was nice to eat the strawberries, it was nice to be, uh, living in a dorm for a week.

That was fun. I got to meet some old timers.

Thomas: You were talking about threshold ECDSA and I tuned out, and you went all the way off the end of threshold ECDSA, and somehow you guys were talking about hurricanes while I was reading the last crypto thing from Black Hat I wanted to talk about. Did you guys see the JWT thing?

Deirdre: Oh, I, no, no, no.

David: I had not. I’m really bad about going to talks for the podcast.

Thomas: Oh, okay. So, in fairness, last time I went to Black Hat, which was the one before the pandemic, so, wow, it was a while ago. But last time, I didn’t leave the hotel bar once. My room was like an elevator above the hotel bar, so I would get up in the morning, go downstairs to the bar, hang out in the bar.

I think we went out for sushi once, and then I would just, like, you know, spend the day there. My equivalent of a beach vacation is being in a nice hotel room above a bar, right? But I didn’t once set foot on the floor, the actual conference floor. So I completely endorse your strategy of not seeing any of these talks.

But, um, the last one here was three new attacks on JWTs, which is a subject near and dear to all of our hearts. I remember being a little skeptical about this when it was announced, um, just because pretty much every obvious iteration of JWT attacks… like, the verdict among people like us, let’s say (I was gonna say the verdict among crypto-literate people, but instead I’m gonna narrow that down to people like us), is that the problem with JWT is JWT is bad, right?

But, um, in the service of enumerating badness, we’ve got three new attacks here. One of them is confusion between RSA-signed and RSA-encrypted JWTs.

Um, this is a wrinkle I haven’t thought about because the idea of RSA encrypting a

David: Wait

Thomas: JWT seems ridiculous to me. Um, but apparently people do it, right?

Um, I’m, I’m outing my own ignorance here, right? But like, so it’s obvious to me. It’s, it’s obvious to me why people would RSA sign. Why, why people would use, you know, RSA signed JWTs. ‘cause it’s way more convenient than doing key management, which is a topic I will get into when we talk about macaroons at fly.io, right?

So I, I have a newfound appreciation for why people use public key signatures and tokens. Um, but people also, and jwt, JWT will let you do this. People also do RSA encrypted tokens where the validation of the token is ‘does it decrypt properly?’ and so you. You can get situations apparently where like the developer will, like you have an endpoint that will accept this is, I’m saying these words out loud and I’m trying the, the talk comes with actual exploits.

Like, he found these attacks, right? This is all real stuff, right? But there’s an endpoint that will take a token that is either RSA signed or RSA encrypted. Either of those two things could be true. And so the vulnerability there is, you can get stuff signed… or, you could encrypt something with the key for a signature, and then, by decrypting it, it’ll verify correctly.

This is all, if you just pieced together what you would do, if you had a thing that arbitrarily used signatures and encryption with the same key pair, like it’s the obvious set of

David: You get something signed and then you feed it to something expecting to decrypt it, and then suddenly, anything that you can get signed is also something you could construct so that it decrypts as valid, and I think vice versa.

Thomas: Yeah, it’s the other way around in this case: you get it encrypted, and that verifies it, so.

David: That’s interesting, ‘cause the usual, like, textbook ‘oh, you screwed up your algorithms in a JWT’ is, like, HMAC confused with RSA public key, where you end up using the public key as the secret for the HMAC, and then, well, it’s a public key, so it’s not secret.

Thomas: Yeah, exactly. So when I was thinking about that, I was assuming it was gonna be some small wrinkle on that, and it’s conceptually a very similar attack to that. Um, the precondition for that, of having an endpoint that is confused about whether a token should be signed or encrypted, sounds crazy to me, but, okay, it’s JWT, so whatever.

Um, sure.
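To make the shape of that bug concrete, here’s a minimal Python sketch (using the pyca/cryptography package) of a hypothetical endpoint that accepts a token if it either verifies as an RSA signature or decrypts under the same RSA key pair. It isn’t the actual JOSE wire format or the exploit from the talk, just the broken dispatch logic:

```python
# Minimal sketch of the sign/encrypt confusion: one RSA key pair, one
# endpoint that accepts EITHER a signed token OR an encrypted one.
# Not real JOSE wire format -- just the broken dispatch logic.
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import padding, rsa

priv = rsa.generate_private_key(public_exponent=65537, key_size=2048)
pub = priv.public_key()
OAEP = padding.OAEP(mgf=padding.MGF1(hashes.SHA256()),
                    algorithm=hashes.SHA256(), label=None)
PSS = padding.PSS(mgf=padding.MGF1(hashes.SHA256()),
                  salt_length=padding.PSS.MAX_LENGTH)

def accept_token(claims: bytes, blob: bytes):
    """Broken verifier: falls back from 'signed' to 'merely decrypts'."""
    try:
        pub.verify(blob, claims, PSS, hashes.SHA256())  # path 1: signature
        return claims
    except Exception:
        pass
    try:
        return priv.decrypt(blob, OAEP)  # path 2: "valid because it decrypts"
    except Exception:
        return None

# The attacker only ever touches the PUBLIC key: encrypt arbitrary claims...
forged = pub.encrypt(b'{"sub": "admin"}', OAEP)
# ...and the confused endpoint accepts them as authenticated.
print(accept_token(b"whatever", forged))   # b'{"sub": "admin"}'
```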

David: We’ve talked about this ad nauseam in previous podcasts, but I think this is why, like, we slash I said: you can use JWTs, but you need to hardcode all of your parameters and not accept other ones, for exactly this reason. So you end up only having the one set that you care about, which should probably just be, like, ECDSA-signed ones.

Deirdre: Yeah,

Thomas: I’m gonna try to describe this problem to you. Um,

Deirdre: no.

Thomas: It’s difficult for me, because I have a slide in front of me that has JWS compact serialization compared to JWS flattened serialization.

Deirdre: What?

Thomas: So in JWS, there are two ways to serialize data.

Deirdre: No

Thomas: JWT style, which is base 64 strings separated by periods, which is, you know, you know it, you love it, right? Like that’s, and then there’s for, for, for reasons passing, understanding. There’s also flattened format, which instead of base 64 strings separated by periods, which, in fairness is a stupid format, right? There’s just JSON of that,

David: Hey, it’s easy to parse in a header.

Thomas: Yeah, but you have to parse it. So, but you could also just…

Deirdre: and encodings and whatever. Yeah.

Thomas: Instead of doing that, you can just have a JSON blob, where it’s like, the first blob is this base64 string, and the second blob is this one. Just key-value, key-value, key-value, right? There are endpoints and issuers that will alternately use either one, right? So, like, JWCrypto, the JWCrypto library, will first try to decode a signature as JSON. If it fails, it will fall back to

Deirdre: Oh no.

Thomas: dot-separated base64 strings.

Deirdre: Oh, no.

Thomas: They really have gone out of their way to make a jungle gym out of this whole system, right? And then there are other things that only use the flattened JSON version of it, right? So you can have a signed token, and then, because of the JSON flattened format, you can also take a valid signature and then add a bunch of additional JSON keys to it.

Um, it will parse it as if, oh, this is the flattened signature, the signature verifies. But then when you pass it off to the application code, it’s like, oh, this is all just signed data. This is great.

Deirdre: oh, no.

Thomas: So you get these polyglot tokens, which is

David: Oh, Jesus.

Thomas: it’s wonderful.
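A rough sketch of that polyglot, with a hypothetical permissive parser. HMAC stands in for the signature algorithm, and this isn’t any real library’s code, but the try-JSON-then-fall-back-to-compact behavior and the unsigned keys riding along are the point:

```python
# Sketch of the compact-vs-flattened polyglot: a permissive parser tries
# flattened JSON first, then falls back to compact "b64.b64.b64". HMAC
# stands in for the signature; this is not any real library's code.
import base64, hashlib, hmac, json

KEY = b"server-secret"
b64u = lambda b: base64.urlsafe_b64encode(b).rstrip(b"=").decode()

def parse_token(tok):
    try:                                        # flattened JWS?
        j = json.loads(tok)
        return j["protected"], j["payload"], j["signature"], j
    except ValueError:                          # fall back to compact JWS
        h, p, s = tok.split(".")
        return h, p, s, {}

def verify(tok):
    protected, payload, sig, whole = parse_token(tok)
    mac = hmac.new(KEY, f"{protected}.{payload}".encode(), hashlib.sha256)
    ok = hmac.compare_digest(sig, b64u(mac.digest()))
    # Bug: any extra JSON keys were never signed, but ride along to the app.
    return ok, whole

# Server legitimately issues a compact token:
hdr, body = b64u(b'{"alg":"HS256"}'), b64u(b'{"user":"alice"}')
sig = b64u(hmac.new(KEY, f"{hdr}.{body}".encode(), hashlib.sha256).digest())

# Attacker re-wraps it as flattened JSON and smuggles unsigned keys in:
polyglot = json.dumps({"protected": hdr, "payload": body,
                       "signature": sig, "admin": True})
ok, data = verify(polyglot)
print(ok, data.get("admin"))   # True True: signature checks, junk rides along
```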

David: I mean, again, every time that I’ve used JWTs, I’ve joked about how it’s easy to parse, because I’ve done my own parsing of that header. ‘Cause if you have a sane JWT library, it probably isn’t actually reaching into the HTTP header for you, and then you can try and find some library to tie it together.

Or it can be, like, string dot explode, you know, if error does not equal nil, and then, like, pass the three parts into your JWT library yourself.

Deirdre: Mm-hmm.

Thomas: The third vulnerability here… it has high humor value. It doesn’t have high real-world use value, but very high humor value, right? So, in addition to being able to encrypt a token with RSA, or sign a token with RSA, or authenticate a token with an HMAC, or whatever else you can do, you can also encrypt a JWT with a password.

I, I, I don’t, whatever.

Okay,

Deirdre: does it

Thomas: the, for that, for that reason.

David: With a password Deirdre. You encrypt it with a password.

Deirdre: It’s like,

David: Hi, I’d like to tell you about David’s DLP solution, where we re-encrypt your data with a password.

Thomas: So look, I don’t know, it never even occurred to me to look at password-based JWT

Deirdre: Oh my, I didn’t

Thomas: but it apparently

Deirdre: Uh,

Thomas: one of us did. And we only know it’s a thing because the only reason it was in the spec was for somebody to find this and write this attack for it, right? So,

David: Watch as it turns out, like it’s using like HPKE or like some super modern

Thomas: It is nothing.

Deirdre: HPKE only became real, like, less than a year ago, so… but sure. If it uses a real KDF on that password, I will be shocked.

Thomas: It’s a real KDF, so.

David: I just don’t understand, like, why wouldn’t you just do the… like, we have symmetric, like, HMAC ones. Like, why do we need this? I guess because you’re doing secret management by, like, remembering it? Like, I can’t remember a 32-byte secret, I don’t wanna put that in source, I’ll just remember that my JWT password is ‘super secret password’.

Thomas: Bearing in mind that this is not, like, the most world-breaking attack ever. I guess it’s actually, strictly speaking, the most breaking of these attacks. But I’ll just give you the token headers for when you’re doing password encryption, and you can see if you can guess what the vulnerability is here that he’s documenting, right? This is not me dunking on this talk. I was surprised by how much better this talk was than I expected it to be. This is good work. Um, not that you need me to say this, author of this talk, but I’m just telling you that I like this talk.

So the keys in the token header for this are alg, which is just the algorithm that you’re using to encrypt the token with, right? And then there’s enc, which I think is the KDF that you’re using. I think that’s what it is. I’m not sure. But it’s just another algorithm, right? There’s P2S, which is, I

Deirdre: Password

Thomas: string.

like the hashed password ha, the password hashed string.

Deirdre: Hmm.

Thomas: And then there’s P2C, which is the iteration count on it.

Deirdre: Oh, so literally you can just tell it, like… this is, like… the, what’s

David: tell me it’s zero.

Deirdre: Like, oh

Thomas: Oh, I don’t know about zero. Zero is a good thought, right. But the vulnerability here is, you can set it to, like, 4 billion, and then every time you try to verify the token, it has to, like, solve Bitcoin or whatever. Right. It’s great.

Deirdre: That’s awesome.

Thomas: So it’s a DoS, right? But it’s a very beautiful DoS.

So,

Deirdre: Yeah, there’s no range of valid iteration count P2C in the spec? Like,

Thomas: I mean, I’m sure somewhere in some spec there is a range of possible valid things. It’s like how somewhere else it says: don’t use the same key pair for both, don’t accept both signed and encrypted tokens, pick one. I’m sure it says that somewhere. So: Tom Tervoort, the author of Three New Attacks Against JSON Web Tokens.

It’s online. You can read his white paper on the Black Hat site. Um, I like this talk. I’m happy this talk got accepted. This is good stuff. Um, thank you for giving us a solid 10 minutes of funny stuff to talk about on this podcast.

David: Mm-hmm. There was one other crypto talk that I don’t actually know if it was on the cryptography track or not.

Deirdre: Oh God. Oh God. Okay, so, in his talk, there’s a section from the spec, P2C, blah, blah, blah. It just says a minimum iteration count of 1000 is recommended, but it does not seem to specify, like, a constrained range. It’s just sort of, like, it is an

Thomas: Yeah, there’s your zero. No one’s gonna do zero.

Deirdre: Oh God.

David: This goes back to a broader point: if you’re using JWTs, anything that is a parameter in the JWT, pick one and hardcode it into your code, and do not read it from the input. Just ensure that it matches your hardcoded thing, and then do the hardcoded thing.
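A minimal sketch of what that looks like in practice, in Python. The pinned parameter values and the p2c ceiling here are our own made-up choices, not anything from the spec (which, per the above, only recommends a minimum):

```python
# Sketch of David's advice: pin every JOSE parameter up front and
# reject anything else before doing any crypto. The p2c cap guards
# against the PBES2 iteration-count DoS described above.
import base64, hashlib, json

EXPECTED = {"alg": "PBES2-HS256+A128KW", "enc": "A128CBC-HS256"}
MAX_P2C = 600_000   # our own ceiling, not from the spec

def check_header(token: str) -> dict:
    raw = token.split(".")[0]
    raw += "=" * (-len(raw) % 4)                  # restore base64 padding
    hdr = json.loads(base64.urlsafe_b64decode(raw))
    for k, v in EXPECTED.items():
        if hdr.get(k) != v:
            raise ValueError(f"unexpected {k}: {hdr.get(k)!r}")
    p2c = hdr.get("p2c", 0)
    if not (1000 <= p2c <= MAX_P2C):
        raise ValueError(f"refusing p2c={p2c}")   # don't "solve Bitcoin"
    return hdr

def derive_kek(password: bytes, salt: bytes, p2c: int) -> bytes:
    # Only reached after check_header has bounded p2c.
    return hashlib.pbkdf2_hmac("sha256", password, salt, p2c, dklen=16)
```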

Deirdre: Yeah.

Thomas: So, uh, what else happened on y’all’s summer vacations? I got the greatest monitor upgrade of my life, which is that I now wear reading glasses.

Deirdre: And that’s probably the cheapest too, out of all your possible options.

Thomas: It’s pretty

David: Um, I was gonna say, don’t you normally work on a couch? Like do you even use a monitor?

Thomas: Well, no, with my laptop, right? But yes, it’s still, like, this is far greater than any monitor I’ve ever owned. It’s way, way better than the retina upgrade. It’s like, I put the glasses on and I’m like, this looks stupid, I look stupid. And then I took the glasses off, like, holy shit, I’m a fucking idiot.

I can’t believe it. I don’t know what I was doing in the months leading up to it. Clearly not looking at anything. I feel like my brain was just putting the words together from the context cues and stuff. Like, your brain can sort of sort out things: you know, if you scramble up the letters, I can still read things.

I think that’s what my brain was doing, ‘cause now I’m completely dependent on the reading glasses. It’s great. I’m very happy about this. I have a much better monitor. What else happened in your summer?

Deirdre: What else happened? Um, a cool thing that came out a couple of days, weeks ago, I don’t know, time’s a flat circle: Google, and I think Yubico, um, and some researchers at ETH Zurich implemented and designed a post quantum secure variant of FIDO2. Like, you know, a post quantum resilient YubiKey, basically.

And this is just cool. There’s a couple of interesting things about this. One is the design, which I’m gonna go on a little rant about in a second. Uh,

Thomas: uses a quantum processor.

Deirdre: uh,

Thomas: I’m looking at a picture labeled quantum processor.

Deirdre: I have a feeling that’s just a brand name. Whatever, names don’t mean anything anymore. One nice thing: they used Dilithium, the Dilithium signing algorithm, which is related to Kyber. It’s one of the three things that came out of the NIST post quantum competition that finished, like, a year ago, or less than a year ago.

It might not have been a year ago. It might have only been six months ago.

David: Also, it depends on what you mean by finished. ‘Cause they’re like, we picked the things, now we’re still gonna dink around for, uh, another year to actually standardize them. Not because they’re going slow, but because after you pick the algorithms, there’s actually still a lot of work to do.

Deirdre: Yes, exactly. Uh, I think they came out with some draft specs for these three things literally a week ago. So there’s stuff happening, but there’s also more stuff happening with signatures, and we’ll talk about that later. Basically, they nest these signatures. So, hey, you have your message you’re trying to sign, whatever that is: a challenge from, you know, your web service that’s doing a FIDO2 challenge to you.

Um, and you need to sign it with the signing secret key, uh, that corresponds to the verifying, uh, public key that you registered with the service when you did your FIDO2, uh, registration thingy dance. So the way they updated it is that you have your message, you sign it with the classical signature scheme, which is ECDSA, and then you sign the message and the signature, the classical ECDSA signature, with your Dilithium signature.

Um, and they call this hybrid, and I guess it’s technically hybrid, because it’s classical and post quantum, but it’s nested. So if the post quantum scheme breaks, which seems possible (you know, we’ve had breakage of post quantum schemes in the recent past), you still have an ECDSA signature over a message that you have to verify as well, and you can verify the post quantum one as a sort of no-op, if you don’t turn that off. If your ECDSA breaks because the quantum computer comes online, um, you’ll verify the post quantum one, and the ECDSA part becomes a no-op, or you just skip it; it’s just a blob that you’re signing over. So there’s that.

This is cool. Uh, they implemented it in Rust. They were able to, uh, get it small enough for such a constrained hardware target: they only require 20 kilobytes of memory. That’s a nice achievement, because, you know, some of these lattice-based post quantum schemes are a little bit big, and we don’t have as much experience implementing and deploying them for, uh, constrained devices, uh, let alone in Rust.

So that’s very exciting. I’m mildly annoyed, because we’ve been talking about hybrid protocols using classical and post quantum primitives in the context of things like TLS, or, say, Signal: anywhere you might use Diffie-Hellman. And the way you use them in a hybrid setting there is, you do your classical elliptic curve Diffie-Hellman,

You do your post quantum whoosie-whatsit Kyber,

Thomas: then you just HKDF them together.

Deirdre: Yeah, you concat them together, and then you just take that blob and you HKDF it, or do whatever your KDF is. They’re very side by side, right? In this setting, they are nested, and I’m annoyed. I’m not annoyed because it’s a bad design; it makes sense in the kind of signing cascade of what you’re trying to assert and, you know, commit to with these signatures.

So the post quantum signature is committing to both the message and the classical signature. It’s not going the other way around. But in theory, if the quantum computers come online and the post quantum signatures are the thing that’s long-lived, we’re okay with that. I’m just annoyed that they’re both called hybrid, because one of them is nested and the other one is not; one of them is concatted and one of them is nested. I’m annoyed.
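For the record, here’s a sketch of the two shapes side by side, in Python. The classical halves use real pyca/cryptography calls; pq_shared, pq_sign, and pq_verify are stand-ins, since we’re not assuming any particular Kyber or Dilithium library API:

```python
# The two shapes of "hybrid", side by side. Classical parts use real
# pyca/cryptography calls; pq_shared, pq_sign, pq_verify are stand-ins,
# not any real Kyber/Dilithium library's API.
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import ec
from cryptography.hazmat.primitives.asymmetric.x25519 import X25519PrivateKey
from cryptography.hazmat.primitives.kdf.hkdf import HKDF

# Shape 1, key agreement: run both exchanges, concat, KDF the blob.
def hybrid_shared_key(my_key: X25519PrivateKey, peer_pub, pq_shared: bytes):
    classical = my_key.exchange(peer_pub)
    ikm = classical + pq_shared                 # side by side, concatenated
    return HKDF(algorithm=hashes.SHA256(), length=32,
                salt=None, info=b"hybrid-kex").derive(ikm)

# Shape 2, signatures: nested, the PQ signature countersigns the EC one.
def hybrid_sign(msg: bytes, ec_key, pq_sign):
    sig_ec = ec_key.sign(msg, ec.ECDSA(hashes.SHA256()))
    sig_pq = pq_sign(msg + sig_ec)              # commits to msg AND sig_ec
    return sig_ec, sig_pq

def hybrid_verify(msg: bytes, sig_ec, sig_pq, ec_pub, pq_verify):
    ec_pub.verify(sig_ec, msg, ec.ECDSA(hashes.SHA256()))  # classical check
    pq_verify(msg + sig_ec, sig_pq)             # the nested, PQ check
```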

Just…

David: Can I make your day even worse and tell you about another hybrid?

Deirdre: No, is it a car?

David: No. Um, in crypto, for post quantum stuff, there’s, like, a proposal at the IETF, that was, like, starting to talk about shoving post quantum signatures into X.509. Which is probably gonna happen at some point.

But, um, they were like, you know, to do hybrid there, the proposal was to mesh with the internals of ECDSA and Dilithium to try and create a single signature that was somehow both

Thomas: yes, yes.

Deirdre: No, no. Do not, do not pass go. It violates all the fucking proofs we have. If they fucking try, I’m just gonna show up at their house and hold up, like, a stack of papers and be like: where in here do you see this Frankenstein of a signature scheme? And the answer is nowhere. An edit: I looked into this. The draft is basically nothing so far, and we may be able to steer it in a better direction.

David: Um, I agree that’s a bad idea. However, before Thomas makes his, uh, inevitable comment about the IETF in general, um, I will say that I do not think this is reflective of that specific problem with the IETF, for unrelated reasons that I’m not going to talk about on this podcast.

Deirdre: Um, related to the internet and post quantum Kyber in browsers, there’s something on here.

David: Um, I

Thomas: Kyber is the KEM version of Dilithium, right?

David: Uh, well, no. Kyber is the key encapsulation mechanism that, um, NIST standardized. Both Dilithium

Thomas: like the.

David: are lattice based, but I don’t know specifically how much actual overlap there is in the

Thomas: like the same group of

David: I didn’t bother to learn the math.

Thomas: it’s like the same group of people and like they, they came up with both a signing scheme and a key exchange scheme.

David: correct. Correct.

Thomas: Yeah.

Okay.

David: Um, yes. ‘Cause it’s the kyber crystals and the dilithium crystals. So we have Star Wars and Star

Deirdre: all a bunch of fucking nerds and all

Thomas: Are those both Star Wars and Star Trek references?

Deirdre: Yes.

David: Yes. Kyber is Star Wars and Dilithium is Star Trek.

Deirdre: And we had a whole series of, uh, ring-LWE lattice-based things that were all Frodo and other Lord of the Rings-related names. So we’re all a bunch of fucking nerds.

David: One thing that does get complicated, aside from the fact that these KEMs are just, like, big, right? Kyber768 is, you know, 768 bytes, and then plus, you know, some crap for formatting it; and then, you know, Kyber1024 is 1024. Because they’re so big… like, in TLS 1.3, if you’re like, oh, I can do ECDSA and I can do, like, X25519, even though those are different purposes, you can just kind of shove all these key agreements in the one handshake, and it’s like, eh, it’s 32 or 64 bytes.

Who cares? You can’t really send Kyber768 and Kyber1024 in, like, a single key share, right? Because, well, I mean, you could, but it’d be stupid, right? So you kind of have to pick; the internet needs to pick one, um, in general. And there’s disagreement among people about whether we should be using 768 or 1024.

Um, but we should probably just be using 768. That’s what, uh, Chrome is doing at the moment. But certain stakeholders prefer 1024.

Deirdre: Interesting. I need to go implement Kyber myself. But do you want a signature scheme that’ll give you 170 bytes for your signature?

David: I would love a signature scheme… so, um, before I get myself into trouble, I’ll just say: I would love a signature scheme that is post quantum secure and under 200 bytes, ideally 64 bytes. But, you

know, we’ll say roughly 200 bytes for now. I still need to sit down and come up with an actual rationalization for that number, instead of simply pulling it out of thin air.

Deirdre: Uh, we have a lovely isogeny-based signature scheme that’s not complicated at all, that’s less than 200 bytes on the wire, and you can sign in about 420 milliseconds and verify in seven milliseconds. Just

Thomas: how many, how many cycles does it take to forge a signature?

Deirdre: Don’t think about it. Don’t think about it. No. This one, this one is, uh, not broken to hell yet, but it is, uh, it’s a bit complicated.

But if you’re looking at Falcon and you can deal with that implementation complexity, you should consider SQIsign.

David: So I guess the thing to add here is, NIST did start another competition. Um, well, they actually started it a while ago, but the first round of it, the submissions, were due like a month ago, specifically for making short signatures. Because one of the problems is that, for the last 15 years or so, we’ve solved all problems in TLS by slapping another signature onto it. So when you do a TLS handshake, there’s anywhere between, like, five to seven signatures in the regular course of things, because of SCTs, plus the certificate chain, plus just signing the key agreement message. And if all of those were, like, a kilobyte plus, you’d be sending, you know, a non-trivial fraction of a floppy disc on the handshake of every connection, which is just clearly not feasible.

Like, even Kyber itself is kind of not feasible, in the sense that it pushes the ClientHello into, like, two packets, over the 1500-byte threshold for a single packet. And you can’t even… like, we tried bit-fiddling and cutting stuff out of the hello to make it smaller. It doesn’t work.

Um, and to say nothing of what would happen if we just swapped all the signatures. Uh, so that’ll be a tough problem to solve.

Deirdre: Yeah.

Thomas: We use…

David: NIST is doing a competition for, uh, shorter signatures. But there’s an open question as to, like… I’m sure, no offense to Dilithium, that if we do another competition, we could come up with something that’s better than Dilithium size-wise, because we had another three years in the competition for it.

But, like, does that mean we’re gonna get a 64-byte signature out of it? Um, probably not, from what I can tell.

Deirdre: Probably not. Um,

David: 200-byte, maybe? I don’t know. I still think probably not. It’s been like this: if I see a cryptographer roaming around, I ask them this question, and then I get different answers.

Deirdre: Yeah, I think it’s quite early. Um, the PQC signatures, the, whatever they’re calling it, the standardization of additional digital signature schemes. I have to go reread the call, but I don’t know if they explicitly say we want shorter ones.

David: They do, actually. Like, one of the things they list in the motivation for it is that certificate transparency wants short signatures. So, one kind of fun thing about certificate transparency, though: if you have two or three SCTs per cert, right, you know, that’s, let’s say, three signatures.

Um, but the keys are basically predistributed, so you can kind of suck up a larger key size in the case of CT. Right now, we always use ECDSA keys, but, uh, the reason is you get the small signatures. But if you had a five-K key (again, I’m making all of these numbers up), you could probably suck that up in the predistributed case, even though that would be totally unsustainable, and, like, larger than an X.509 certificate right now.

Um, public keys, that is.

Deirdre: I think if SQIsign stays alive, it would definitely be very attractive for this. The public keys are small, like, comparatively small; they’re also just kind of small. The signatures are small, the compute cost is coming down. It’s just a question of, like, we have never tried to implement these sorts of algorithms in constant time and, like, efficiently before. There’s been some nice work published in the past six months to encourage that, but also, we all remember what happened to SIDH and SIKE, so

David: I was gonna say, have we, uh, have we checked to make sure there aren’t any papers from the nineties in the math department that just fundamentally break our scheme?

Deirdre: No, no, we haven’t. Not to my knowledge.

David: Where? Where in lit review do we do that step?

Deirdre: I don’t know. There’s also some, like, uh, alternatives using higher dimensional abelian varieties, the shit that was used to break, uh, SIKE and SIDH, to construct a variant of SQIsign, blahdi blahdi blah, but that’s not the one that’s been submitted to NIST. But yeah, it’s, uh, extremely attractive, but quite early days; SQIsign’s only been around for three years, so.

Thomas: I look forward to the episode of the show where we have the person on who breaks SQIsign, and explains the math that we’ll never understand.

David: There,

Thomas: Richelot isogenies. That’s what I remember about this: Richelot isogenies.

Deirdre: yeah, they’re in there

Thomas: I still don’t know what they are. I was told I would never understand them.

Deirdre: Just

David: there’s a reason I did not, uh, uh, take part in that episode.

Deirdre: Just think about smushing donuts together, and whether they stay two donuts or they become one mega donut, and then that’s your oracle. That’s all you need to

Thomas: Now I have…

David: What if I prefer my metaphors to be coffee cups? Is that still isomorphic?

Deirdre: Yes, actu— well, you know, you have to actually, like, punch a hole in the bottom of the coffee cup, but sure, we’ll make it work. It’s fine. This is your introduction to higher genus abelian varieties.

David: While we’re sort of talking about X.509, I do want to go back to Black Hat briefly, ‘cause there was one X.509 talk that took place there, called An SSL Slippery Slope. Um, for whatever reason, I have found myself explaining to people recently why Authenticode was a mess. Um, and this is another reason that Authenticode was a mess.

So Authenticode is the thing that Microsoft created to sign drivers, basically, and other software, right? So if you’ve ever clicked through one of those errors that you ignore, that’s like, ‘this thing wasn’t signed’, to install some software, um, that’s Authenticode. You may also know it from, uh, Stuxnet, which had a valid Authenticode signature.

Um, anyway, when you’re checking for that, right, you don’t have a domain name to compare in the common name, right? So, just like, any signature basically works, as long as it chains to a root, for Authenticode. Problem is, how do you find a root for Authenticode? In the olden days, we kind of just used one root store for everything, and then we just applied key usage bits to various, um, certs or signing operations.

And then, so if you’re checking an Authenticode signature (um, Windows does this correctly, for example), you are probably using the Windows root store, and then you have to enforce that the code signing bit was set on all of the certificates that go through it. Otherwise you could just, you know, get a certificate signed by Let’s Encrypt for a random website, then use the key in that certificate to sign your binary. And because the root store is shared, if you don’t check the key usage bit to notice that, well, that Let’s Encrypt certificate is actually for the internet and not code signing, then you can get your, uh, Authenticode signatures to verify.

Um, now to be fair, Microsoft itself didn’t have this problem, but some other libraries did.

And so that’s what the talk was about: you should probably check key usage bits. The alternate fix for this, um, is to have dedicated PKI hierarchies for separate things, right, and then have separate stores for them.

Uh, but for legacy reasons,

Deirdre: uh,

David: like a bunch of stuff will chain back to like some old root certificate created years ago that’s used for everything I.

Deirdre: and like is that 20 years old or 10 years old or,

David: Um, I’m not even sure. But, uh, the speaker here, their name was Bill. I don’t remember their last name, but they’re very impressive, ‘cause they both worked full-time at Microsoft and were still an undergrad. Um, as opposed to us, who are just podcasters. And Thomas, as you just learned, just figured out glasses. So, like, some people are going places.

Thomas: I feel like this was a pretty good catch-up. I feel like we’re caught up.

Deirdre: Wait, do we wanna do threat modeling E2EE on the web?

David: No.

Deirdre: No, I wanna do it. Do you wanna…

David: I can do that next time. Yeah, I think I’m probably gonna end up writing a blog post.

Deirdre: Cool. All

Thomas: All my things I can do next time. It’s totally fine.

Deirdre: Yes, very good. Catch up. Good summer. Busy summer. Good summer.

Thomas: So we’re back into it now. We’re…

David: You might even say it was a cruel summer.

Deirdre: Cruel Summer. Yeah. I think we’re back into it. See you soon.

Hey, we haven’t asked this before, ‘cause we don’t like asking our audience for things. But if you would like to review us on the Apple Podcasts store, or wherever you listen to podcasts and they let you review stuff: how about you give us a cool review?

It’s really nice and it helps people find us.

Thomas: Will we give them a coupon for stamps.com if we do that?

Deirdre: No, we’ll give you thank yous

Thomas: Why do we want reviews?

Deirdre: Because it helps show us up higher in, like, Apple, and all the other, you know, podcast thingies that scrape from Apple, and show us as, like, good.

Thomas: I thought we were like the cool bar in Swingers that doesn’t have the sign above it. You just have to know about us. It’s fine. Review us if you wanna, like, reveal us to the non-cool kids. That’s totally fine.

David: Or join in the YouTube comments, ‘cause we do release episodes to YouTube, and we get great comments like, "the art of how to talk about nothing", or, "these people don’t know anything about what they’re talking about",

Deirdre: Yes.

David: which, I mean, we’re upfront about the fact that we don’t know what we’re talking about.

Thomas: In fairness… I was gonna say, I feel seen.

David: I open every episode with it.

Thomas: Well, it is good to talk to you guys again. I look forward to our next episode, whatever it is that we don’t know what we’re talking about, talking about, again.

So, awesome.

Deirdre: We’ll figure it out when we figure it out. Okay. Bye.

David: Security Cryptography Whatever is a side project from Deirdre Connolly, Thomas Ptacek, and David Adrian. Our editor is Netty Smith. You can find the podcast on Twitter @scwpod, and the hosts on Twitter @durumcrustulum, @tqbf, and @davidcadrian. You can buy merchandise at https://merch.securitycryptographywhatever.com.

Thank you for listening.