Nate Lawson II

This episode got delayed because David got COVID. Anyway, here’s Nate Lawson: The Two Towers.


This rough transcript has not been edited and may have errors.

Deirdre: Hello. Welcome to Security Cryptography Whatever! I’m Deirdre.

David: I’m David.

Thomas: I’m Thomas.

Deirdre: And we are here again with part two of our chat with Nate Lawson. Hi, Nate.

Nate: Hi

Deirdre: Hi. Who wants to summarize what we talked about last time? It was a lot.

Thomas: I think we were like starting with the idea that pretty much all of modern cryptography is somehow traceable to Nate Lawson. And then we talked about Nate’s weird background and how it might have come to pass that that was the case.

David: Did we settle on a Cryptopals connection? Because there is a direct connection from Cryptopals to the security class that is currently taught at the University of Michigan, because I was nothing if not lazy in grad school and reappropriated some of its content for a variety of the projects, including e=3 and padding oracles.

Thomas: I feel like the major Nate to modern cryptography connection is through Thai Duong. And I say that cuz Thai was working with Juliano Rizzo on the BEAST attack, roughly before he joined Matasano, which is the consulting firm I was running in the mid-2000s, right? That was before Cryptopals.

But he had joined us because of, I think, blog posts that we had written with Nate Lawson on the Mozilla e=3 RSA attack, and I think probably the CBC padding oracle thing. Um, and then Cryptopals happens. Um, and then Cryptopals gets you, I guess to some extent you, and then also people like Filippo Valsorda.

Um, I don’t know how credible any of this is, right? Because Thai Duong and Juliano Rizzo would’ve done amazing things with or without a blog post that, uh, Nate inspired. But I’ll make the connection anyways.

Nate: Thanks. I, I think they stand well on their own.

Thomas: We have to say that, but I don’t, I don’t know how true it is.

David: All right, so last time I think we kinda left off with, uh, Sony, which, I don’t know, we’ll call mid-to-late 2000s. So what comes next?

Nate: Uh, yeah. So timeline wise, um, I left Cryptography Research in 2007 and started my own consulting company, Root Labs. And the real focus I was going after was this combination of embedded security, crypto systems, and what I call kernel and reverse engineering work, which was just any low-level software that needs to talk crypto and work with devices.

And it was kind of fun because in constrained systems, like embedded systems, you have very few resources, but you have challenging problems. Like, for example, the attacker is holding your device in their hand. You’ve got kind of a little cheat code there, which is that a lot of the security things start on desktop software or server software.

So, you know, if you’re talking about memory corruption protections or things like that, that kind of starts on the server side or the browser side and gets deployed there first, where you’ve got lots of CPU, lots of RAM, et cetera. And then you’ve still got these old systems using, like, 1990s C code, and not so secure.

And so you can make a big difference just by bringing the same ideas downstream to embedded devices.

Deirdre: Okay.

Nate: So, uh, yeah, so I launched a consulting company. I hired a couple people, and my approach to hiring was always like, just go to college campuses, offer to give a guest lecture on crypto, and then whoever sits in the front and pays attention, you know, offer jobs to.

Deirdre: Holy crap.

Nate: It generally worked okay.

Like, people were actually asking questions and engaged, you know, were interested, and it was kind of like an internship as well. It wasn’t quite a full position, but, um, that was a lot of fun and we got to review a lot of systems. Kind of a continuation of the style of work at Cryptography Research, and design some systems.

Thomas: This is roughly the same time when you were working on the toll systems, right? Like, you did that while you were at Root Labs. The toll system thing is fun. You should tell us about the toll system thing. I wanna hear the toll system story.

Deirdre: I have no idea about the story.

David: Like, are we talking about like turnpikes here?

Nate: Kind of like that. Yeah, it was a fun thing. So, uh, in the Bay Area there’s something called FasTrak. It’s called Fast Pass in other parts of the country, things like that. And they have different systems, and this is an interesting connection between, like, embedded systems, in that you’ve got these transponders in each car.

And also the poles, the light poles that transmit the signal to the car. And then you’ve also got governments and regulations involved here, because there’s all these things about, you know, traffic laws and regulation of how much money you can collect and how they do it and stuff like that. And not really a big focus on privacy at the time.

Um, maybe it’s gotten better since then. It’s been a while. Hopefully it’s gotten better. Maybe cell phone tower location has actually become the bigger problem nowadays. But, uh, what I did was I just decided to take apart a FasTrak transponder. I’d never actually used them, but I was just like, I wonder how these are built.

So, pulled it apart, found an MSP430. So there’s your link to Microcorruption, as far as the MSP430 being the processor for that.

Deirdre: Right.

Nate: And, uh, yeah, so the MSP430 is a really power-efficient processor. Uh, kind of funky instruction set. But, um, yeah, so I opened it up and tried to dump the flash from it, and sure enough, the one I had was actually unlocked.

Um, so I got a flash image off of it. I got several transponders, though, and some of them were locked, and, um, you know, Chris Tarnovsky, uh, was able to, using a laser, zap the fuse on it and unlock it so the flash could be read off it. So

Deirdre: Was that a laser at Root Labs? Did you, do you have a laser? Oh, okay.

Nate: No, I don’t do any real physical security work. I’ve mostly just done board-level work; like, you know, soldering to surface mount is about the extent I go to. No fuming nitric acid in my office yet. But Chris is great at that stuff, and, uh, yeah, he was able to just really quickly knock the MSP430 fuse out.

So anyways, we got the, um, firmware, I started reverse engineering it, and I got to do something really fun, which was, I was going on vacation, so I printed out the entire firmware listing, cuz I think it was like 8K or something. It was pretty small.

Deirdre: This is what you do on your vacation.

Nate: It’s like a crossword puzzle.

You know, you’ve got an assembly listing and a pencil, and a pool, and you sit there by the pool and you mark up the assembly language listing with a pencil. I found a bunch of interesting things about the protocol it used and how it worked. It’s kind of a neat thing, because they wanna be low power and make the battery last almost forever, so it doesn’t actually transmit.

Instead, what it does is it changes the polarization of an antenna in the device to bounce back reflections, to talk on the reverse channel. So it receives transmitted stuff and gets a carrier wave, and it modulates the carrier wave back to the transmitter to respond. Yeah.

Thomas: Hold on. I’m gonna stop you. I’m gonna stop you right there. You got that from a paper printout of the raw assembly listing?

Nate: Yeah, well, I looked at the board too, you know. I had photos of the board, so I could see how the antenna was set up and stuff like that on the board. It was just PCB traces. Anyway, so I soldered a bunch of wires to it, in order to be able to trace all the signals with my oscilloscope and things like that. My Black Hat presentation had, like, a picture of that on the front cover.

And, uh, so I just played around with it a little bit. But on vacation, I found that there was actually an exploitable stack overflow in this. It had a message type that was not really fully implemented. So they had a switch statement that was handling all the different messages, and one of ’em was like an update process.

And I think they were using this maybe even in the factory for programming the board, uh, with a serial number or something like that. I’m not really sure exactly what they were using it for, but anyways, it had this reserved protocol message that, when you send it, would do an update, but it wasn’t really well written.

It was supposed to have a bounds check on the update, but I think they had a sign extension bug or something like that in the length check. And so you could actually send an update with an incorrect length count, and it would overwrite areas of flash that included the code area.
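
A minimal sketch of this class of sign-extension bug, with all names, sizes, and message layout invented for illustration (this is not the actual transponder firmware):

```c
#include <stdint.h>
#include <stdio.h>

#define MAX_UPDATE_LEN 64   /* size of the hypothetical update buffer */

static void handle_update(const uint8_t *msg)
{
    /* Bug: the length byte is read into a signed type before the check. */
    int8_t len = (int8_t)msg[0];

    if (len <= MAX_UPDATE_LEN) {
        /* A byte of 0x80..0xFF is negative here, so it sails past the
           check; but converting it to memcpy's size_t sign-extends it
           into a huge count, so the copy would run far past the buffer
           and, on the real device, into the flash region holding code. */
        printf("check passed; copy length would be %zu bytes\n", (size_t)len);
    } else {
        printf("rejected\n");
    }
}

int main(void)
{
    uint8_t evil[2] = { 0xFF, 0x00 };   /* length byte: -1 signed, 255 unsigned */
    handle_update(evil);                /* prints a gigantic copy length */
    return 0;
}
```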

Deirdre: No. No.

Nate: And now you get code execution.

Deirdre: Oh, no.

Nate: And I was just like, okay, wow, this is crazy. So you can just sit by the side of the road with an antenna and a little automated box and reprogram everybody’s transponders to different serial numbers. You could reprogram the code and have code execution on it, if you wanna do something funky with that. Yeah, free tolls, you know: give everyone the same 1234 ID. And they can’t really disable it, because that’s everybody’s transponder, or they’d have to do a recall. So it was pretty crazy. And so I wrote a blog post about this, I got a Black Hat talk, and I had offered to help; I knew that the, um, transit agencies probably were going to be not very happy to hear about this, given how many millions of these things they had deployed. I wasn’t getting paid for it anyways. It was just a fun side project. So I didn’t wanna get into a heavy vulnerability disclosure kind of situation where I’m spending hours and hours trying to help someone fix their assembly lines for this thing. I just wanted to give them information and hopefully get them to fix it.

But of course there had to be some disclosure; otherwise they weren’t gonna fix it at all. They were just gonna sit on it. So I talked to some local news programs and things like that, and they did some video on it. And of course the MTA was saying, Oh yeah, this is no big deal. You know, it takes a really sophisticated person to exploit this.

I’m like, Yeah, once you know the bug, it’s not that hard to exploit. It’s like one shot, one way: you just transmit it, broadcast it on a freeway, and you’ve got all the cars owned at that point. So yeah, it kind of went back and forth, and I think they just sat on it and it died down. I don’t know if they ever actually fixed it.

I offered to help, like, show up and give them a talk and explain the problem and walk them through the code so that their vendor could fix it. But, uh, they never took me up on it. So that’s kind of where it ended up.

Deirdre: mm-hmm.

Nate: I didn’t get sued though, which was great.

Deirdre: That is great, because there have been other, like, public transit agencies where you disclose problems with their, you know, their swipe cards and all this sort of stuff, and they’re, like, not happy about it. So, good thing you didn’t get sued.

Nate: This led into another side investigation, which was power meters. So,

Deirdre: Like on the side of your house?

Nate: Yes. Yeah, so the digital power meters: they were just in the process of swapping everyone’s out for remote monitoring ones, and I didn’t like that idea either. So I went to try to buy one to reverse engineer, and it turns out that they don’t wanna sell them to the public.

Um, you know, the power meter companies are like, We need to know you’re one of our approved vendors before we’ll send it to you. So I was like, Okay,

Deirdre: eBay at the time?

Nate: Not on eBay,

Deirdre: Yeah.

Nate: but good guess, though, because it turns out, I was like, Wait a second, there are lots of remote power meter type monitoring things.

There’s water meters. So sure enough, on eBay, the control board for the monitoring for water meters is there. And no one was really sensitive about water meters. So I was like, Great, I’ll get that board instead. So I ordered it off eBay for, I think, $30 and took it apart with a Dremel to get the case off, cause they had it all waterproofed and things.

You know, sure enough, MSP430 again. So I was like, Oh, I know these. And sure enough, it’s unlocked again, so dump the flash and start reverse engineering again. I didn’t have a vacation, so I actually had to sit at home and reverse engineer this one. But yeah, I found that it used a different protocol, but it was also similar, in terms of the basic construction, to the transponders.

And there wasn’t a remote update process, fortunately, but there were some other things about the handshake required for remote turnoff that were not really well implemented cryptographically. Um, they had made some attempts at crypto, whereas the transponder just broadcast out its serial number to anyone who queried it.

So there was no crypto there, but they had not really done much work. I don’t remember exactly what the flaw was, but it was something really elementary. Maybe it was the same key in everything, or something very rudimentary like that. So I was like, this is terrible. Because it’s one thing to design something where you can do remote monitoring of power.

It’s another thing to have remote shutoff. And they wanted the remote shutoff because they wanted to be able to prevent rolling blackouts, or give people sort of voluntary ways to opt in: we’ll pay you a lower rate if you let us shut down your refrigerator or your house, you know, to save money.

Deirdre: but still,

Nate: But they hadn’t had it all figured out. And it’s like, I understand they only get to roll the trucks once to swap out the hardware. But at the same time, if you’re gonna design a system with that kind of power, or abilities, you really need to think carefully about your crypto and your capabilities there.

For example, having really strong secondary authentication for any actions you need to do, like two-factor things, to launch an action. If you wanna shut off the mayor’s power, probably there’s some kind of log that goes with that, and it probably shouldn’t just be any random city worker that can do that, you know, or for the hospital or something.

So, yeah. It’s just terrible. So again, I was like, okay, remote monitoring: yes, there are privacy implications. They’re gonna find out about my giant grow farm in the garage. Or maybe I can spoof a grow farm at my neighbor’s house, you know?

Deirdre: Oh my God. This makes me think about swatting people. And like, if you swat people, they come with guns, but you could go one level below and just be like, Oops, I turned your electricity off. Hur hur hur, like an asshole.

Nate: Exactly.

Thomas: A very elaborate way to swat somebody.

Deirdre: Yeah, it is. Instead of a phone call and a good little skit that you do for the cops, you do a remote attack. I don’t know. It’s a lot easier to get away with.

Thomas: There was like a run of time, like right after Obama got elected in ’08, he had a bunch of high-profile cabinet appointees, and one of his big ones was, I think, the Department of Energy. I can’t remember the name of the guy, but it was a whole big thing and there was

Deirdre: I think it was the physicist guy from MIT, that guy. Yeah.

Thomas: So like, there was a big push towards like smart grid stuff.

There had been, like, a huge power outage, like most of the Eastern Seaboard, a couple years before. So getting the grid more resilient was a big thing, and conservation was a big thing. So there was this huge push towards smart grid stuff.

And long story short, pretty much every appsec firm in the world now has a story about how they could turn people’s power off, because I think there were like six or seven different vendors, um, that were doing different systems there. This is, by the way, the reason why every time any kind of discussion about crypto protocols comes up, I ask people whether the counters in their counter mode encryption can wrap. I told that to Trevor Perrin once and he’s like, "Is that a realistic concern?" Like, are you really gonna wrap like a GCM counter or whatever? Cause realistically, it’s a huge counter, right? It’s because I worked on some smart meter thing, which was all embedded crypto, where they were doing like a full-on bidirectional RF protocol, and the message headroom they had was very, very small.

So they were doing counter mode, but with a very short, truncated counter, which you could easily wrap. That’s actually not really a super realistic problem, but it’s a thing I keep bringing up. Just like, you know, trying ' OR ''=' the first time you log into a web app, thinking these things keep popping up.
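
A toy sketch of the keystream-reuse hazard Thomas is describing here, assuming a hypothetical protocol that truncates its CTR-mode counter to 8 bits (all details invented; toy_keystream stands in for a real cipher like AES-CTR):

```c
#include <stdint.h>
#include <stdio.h>

static uint8_t toy_keystream(uint8_t key, uint8_t counter)
{
    /* NOT a real cipher, just a deterministic function of (key, counter). */
    return (uint8_t)((key * 31) ^ (counter * 197));
}

int main(void)
{
    uint8_t key = 0x5A;
    uint8_t p1 = 'A', p2 = 'B';

    uint8_t c1 = p1 ^ toy_keystream(key, 0);             /* block 0 */
    uint8_t c2 = p2 ^ toy_keystream(key, (uint8_t)256);  /* block 256: the
                                                            counter has wrapped
                                                            back to 0 */

    /* Same (key, counter) means same keystream: a classic two-time pad.
       An eavesdropper learns p1 ^ p2 without ever touching the key. */
    printf("c1^c2 = %02x, p1^p2 = %02x\n", c1 ^ c2, p1 ^ p2);
    return 0;
}
```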

It’s like, someday I will find another system where you can actually exploit it by wrapping its counter or whatever. Yeah, I don’t know. All those systems were super fun to work with. But like, did you ever, on any of these projects, get to the point where you were directly transmitting back? Like, did you have a transmitter for either the toll stuff or the smart meter stuff?

Nate: No. And, uh, there’s a specific reason why the answer is no on that. So there’s a fine line between disclosing something and making yourself a target for various kinds of retaliatory action. And that was, uh, definitely a consideration.

Thomas: It seems totally fair, right? But like, if you’re on an actual contracted test with a vendor, it seems like that’s well within bounds. But I’ve been talking to people; every time I hear somebody that’s done an RF project, one of the first questions I ask is, do you actually have the whole RF stack implemented?

Or are you just using, you know, a device that you reversed as a modem for it? Right? I haven’t really talked to anyone yet that’s done it, but I’m curious to see if anyone has actually built the whole, you know, communication stack for any of these systems, or if they’ve just injected code into, you know, a meter or whatever.

Nate: Yeah, for these cases, I mean, the, um, FasTrak transponder was actually very easy to tap into. And so I built my own connector, and I could drive around using my laptop and my PC oscilloscope and record all the IDs of the light poles as we drove past them. Cause what they do is they have two different protocols.

One was like, What’s your serial number? The other one’s, What’s your serial number, and beep. And that one’s for when you go through the toll plaza, and the "what’s your serial number" was for monitoring the speed of highway traffic on various segments of the freeway. And so one of the thought experiments I did, and talked about on my blog a bit, was how could you actually design this in a privacy-preserving way?

With the same basic hardware restrictions. So you’re driving around on a highway; you don’t want to broadcast a persistent ID everywhere you go and create a log on their systems of everywhere you’ve been. So is there a way to calculate the average speed of traffic on various segments of the highway without doing that?

And there’s lots of different protocols you could do if you thought of privacy actually mattering when you designed it.
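
One possible design, sketched purely for illustration (a hypothetical scheme, not necessarily what Nate proposed on his blog): answer speed-monitoring probes with a short-lived random token instead of the permanent serial number. Two poles on the same segment see the same token and can pair up sightings to compute a travel time, but the token rotates before trips can be linked:

```c
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

#define TOKEN_LIFETIME_SEC 300          /* rotate every 5 minutes (tunable) */

static uint32_t token;
static time_t   token_born;

static uint32_t speed_probe_response(void)
{
    time_t now = time(NULL);
    if (token == 0 || now - token_born > TOKEN_LIFETIME_SEC) {
        /* A real device would use a proper RNG, not rand(). */
        token = ((uint32_t)rand() << 16) ^ (uint32_t)rand();
        token_born = now;
    }
    return token;   /* toll-plaza billing would use a separate protocol */
}

int main(void)
{
    srand((unsigned)time(NULL));
    printf("probe 1: %08x\n", speed_probe_response());
    printf("probe 2: %08x\n", speed_probe_response());  /* same token, so two
                                                           poles can pair them */
    return 0;
}
```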

Deirdre: Which the government probably didn’t think was a concern when they deployed these things. Wow. Okay.

Nate: I mean, if I was in charge of designing this smart grid stuff back in the day, I might do something like add a control port, like a physical control port for a peripheral bus, onto the meter or something like that, and be like, okay, once we decide how we’re gonna do control stuff, maybe for people who opt into it, we’ll mail them something, you know, a little package that they can just go plug onto the side of their meter, a consumer-accessible kind of thing.

It’s like, you wanna opt into this program of saving money, then plug this into the side of your house, kind of thing. You know, it’s very easy to make user friendly, and once you plug it in, you’re now a part of this thing. If you don’t plug it in, then you’re not opted in, and someone can easily tell if a given house is participating in this remote shutoff kind of situation.

Deirdre: You can only opt into this remote shutoff thing. It will not be deployed to you by default, uh, by the government.

Nate: Right. And, yeah, not turned on already before they figured everything out.

Deirdre: Yeah.

Thomas: I’m pretty sure. I’m pretty sure some of the motivation for remote shutoff is just minimizing truck rolls for actually shutting people off. So like, yeah, the opt-in thing, it addresses one of their stories, but probably not the real story they have. Right. Which is just, it costs us money to shut people off.

Wouldn’t it be nice if we could like have a cron job that shuts people off?

Nate: Absolutely. Yeah, no, cost savings was, I think, the bigger reason for that. Not load shifting.

Deirdre: This is Root Labs in the late 2000s.

Nate: 2008, 2009.

Deirdre: Okay. Um, there was something you were working on around 2011, some sort of mobile payment system. Can you tell us about that?

David: Some sort of mobile payment system.

Deirdre: Yeah. I don’t know what sort, there was some sort of mobile payment system. Um,

Nate: Well, the specifics don’t really matter. What matters is how it works. So, um, yeah, another vacation. It sounds like I took lots of vacations, but I really didn’t. Uh, I was, you know, driving down the freeway and I got a call on my phone, and someone I knew casually was like, Hey, we’re designing this new device.

It’s gonna be, you know, a mobile payment system, it needs security, and we need it yesterday. And I was like, Okay, yeah. Um, sure, I’d be happy to help with that. And that ended up being the same weekend that the tsunami hit Japan. So it was very memorable to me; it was like, uh, being on vacation and hearing the tsunami warning sirens going off in California because of that. Yeah. So that was a propitious start. But when I got back, I talked to the company more, and what they were doing was they had a design firm who was helping them build this new device, and it’s gonna be a payment terminal and accept credit cards and all this. So they wanted it to be secure.

And some of the requirements for it were kind of interesting. You know, they wanted it to use as little flash as possible, as little storage as possible, to make it cheap. They wanted long battery life as well, so you didn’t have to keep replacing the battery on it. And also, security wise, you know, you think, oh yeah, just encrypt everything.

But, um, actually with a payment system, sometimes you don’t wanna encrypt everything. Like, for example, you wanna leave the last four digits of the credit card number in the clear so that an intermediate device can present a dialog to the user and, you know, help them understand what’s going on, things like that.

So they said, we’re gonna do this and it’s gotta fit in, I think the original requirement was, 4K. And I was like, Okay, 4K for what? And they’re like, Well, we’re gonna do like 1K for the card swipe handling, like the logic for that, 1K for AES; I can’t remember what the third 1K was for.

And then 1K for, like, everything else. And I was like, So the encryption protocol? Yes. The, uh, message swizzling? Yes. And so,

Deirdre: Asymmetric crypto?

Nate: Yeah, so asymmetric crypto didn’t really fit in this. It was, again, a really small device, a microcontroller basically, and it didn’t have any acceleration for public key crypto. So what we did was we were gonna build something off of symmetric crypto, and there was some negotiation over which 1K bucket was gonna be holding which functionality.

And so the design firm, I don’t know if you’ve worked with design firms before, but they’re very good at kind of industrial design. They’re pretty good at rapid prototyping. But in terms of making something production-ready, they tend to not be so focused on cost reduction and design for manufacturing, testability, things like that.

Deirdre: And long-term maintenance of the code.

Nate: Absolutely. Yeah. So there was a bit of a tug of war between us, where I was like, you know, well, you should really own this part of it. And they’re like, No, we don’t have room for this because we’ve got all this other stuff. So basically everything kept getting shoved into my 1K bucket.

Until I was left with, once the card had been swiped, all the protocol handling for it, all the cipher mode stuff, and a lot of the message swizzling and the error handling and things like that. So all that was in this 1K bucket, and I was like, well, I think I can make it fit. And AES itself was 1K of it as well.

So we got down to designing it and working on it. And I really wanted to get this right, because it was gonna be, you know, a recall of millions of devices if something was really bad about this, a big reputation hit, and a lot of potential damage, even if the actual risk to individual users of the device was low, if there was a bug in the software.

So at the time it was very difficult to do AEAD modes. There was OCB, there was like XEX, there were a few different modes: sort of the big giant AES modes, the two-pass modes, and there were also one-pass modes, which was OCB, but that was patented. And so nobody really...

Deirdre: I was gonna say, I don’t think you had, quote unquote, OCB at the time, 2011, because of patents. We’ve only got it in the past few years because it came out of patent.

Nate: Right. Yeah. So the company didn’t want OCB, and actually the first thing I offered them was like a tweaked XTEA mode. And the reason for that was to buy back some of that 1K from the AES implementation itself. But they’re like, No, for standards reasons, it’s gotta be AES. And I was like, Okay, fine. So

Deirdre: And no GCM.

Nate: No GCM, no. Too heavyweight for a microcontroller.

Deirdre: Alright. Okay.

Nate: All the polynomial multiplies would be too heavy for it.

Deirdre: Okay.

Nate: For the, uh, authenticator. So I scoured the literature for what other modes could be turned into a one-pass mode that’s also tiny to implement. And I came across this thing; I think this is probably the only, or the biggest, production use of this particular mode, I’ve never seen it used anywhere else. It’s called CCFB,

so everyone knows it.

Deirdre: Never heard of it.

Nate: CFB, and CFB itself is not really often used; even in the past it wasn’t, CBC was much more common. But cipher feedback mode: this one was counter cipher feedback mode. So it was a combination of counter mode for encryption and cipher feedback mode for creating the authenticator tag.

And, um, it was kind of a clever mode, I thought. And I needed to tweak it to be really tiny, to fit the authenticated data portion, because like I said, the last four digits of the credit card number, you need to authenticate that but not encrypt it. And so CCFB was pitched as a two-pass mode, but with like a wink and a smile, because it could actually be implemented as a one-pass mode; they just didn’t wanna look like they were infringing on patents.

So they specified it as two-pass. But if you just implemented the two passes as a single pass, you could do that. There weren’t dependencies between the two passes.

Thomas: So, a one-pass mode here, and maybe I’m just wrong about it, or maybe I’m demystifying it, you never know with me, but a one-pass mode in an authenticated cipher: we’re talking about the two operations of first encrypting it, you know, transforming plaintext to ciphertext, and then also creating some kind of, you know, tag or MAC or hash that goes at the end, that verifies that whatever you’ve decrypted matches what was encrypted in the first place.

Right. And then one-pass and two-pass refers to how many times the cipher core, or whatever cipher cores, you know, there might be different ones for authentication and encryption, have to go over the input, right? Like, a one-pass thing is simultaneously encrypting and authenticating in the same one cryptographic sweep across the plaintext.

And a two-pass mode is potentially, you know, going over it twice: once to encrypt and then once to create the authentication tag. That’s right, right? I’m pretty sure I’m right about that.
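
An illustrative contrast of the two structures Thomas is describing, with a toy stand-in cipher (this is not the CCFB specification; the point is only how many sweeps are made over the data, which is what matters for latency and RAM-constrained streaming):

```c
#include <stdint.h>
#include <stdio.h>

#define N 8

static uint8_t toy_E(uint8_t key, uint8_t in)   /* stand-in "block cipher" */
{
    return (uint8_t)((in ^ key) * 167 + 13);
}

/* Two-pass: sweep once to encrypt, then sweep the result again to MAC. */
static uint8_t two_pass(uint8_t k1, uint8_t k2, const uint8_t *pt, uint8_t *ct)
{
    uint8_t tag = 0;
    for (int i = 0; i < N; i++) ct[i] = pt[i] ^ toy_E(k1, (uint8_t)i);
    for (int i = 0; i < N; i++) tag = toy_E(k2, tag ^ ct[i]);
    return tag;
}

/* One-pass: a single sweep; each byte is encrypted and folded into the
   tag as soon as it is produced, so the data is only traversed once. */
static uint8_t one_pass(uint8_t k1, uint8_t k2, const uint8_t *pt, uint8_t *ct)
{
    uint8_t tag = 0;
    for (int i = 0; i < N; i++) {
        ct[i] = pt[i] ^ toy_E(k1, (uint8_t)i);
        tag   = toy_E(k2, tag ^ ct[i]);
    }
    return tag;
}

int main(void)
{
    uint8_t pt[N] = { 4, 2, 4, 2, 4, 2, 4, 2 };
    uint8_t a[N], b[N];
    printf("tags: %02x %02x (identical result, one traversal vs. two)\n",
           two_pass(1, 2, pt, a), one_pass(1, 2, pt, b));
    return 0;
}
```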

Nate: Yes, that’s correct. At the time, other than OCB, which was patented, there weren’t very many one-pass modes. In fact, I think this was the only other one I knew of. And this one was explicitly in the public domain, which was cool. So I modified it. It did have a separate authenticated data pass, but again, due to memory constraints, I had less than a kilobyte of RAM to work with as well.

So what I ended up doing was reusing the card swipe buffer as it went along for variable storage. So parts of the card data that were no longer used became counters and stuff, which is not a good programming practice, but was necessary in this particular environment.

Deirdre: Did you ever run into any issues with that, or did you check everything? I would just be so scared to do that.

Nate: Yeah, uh, I was too. I very carefully specced it out. I designed the code to be really, really small; not just small compiled, but small source code as well. And the messages were all fixed size. Everything was fixed size. Even if there were a few bytes wasted here and there, by keeping everything fixed size, it avoided having length fields that could then lead to buffer overflows or whatever.

And yeah, so it’d be like, you know, what’s the maximum name that can appear on a credit card? Okay, this number of characters. So we’re not gonna try to compress it or throw away spaces or anything like that. This field is always this size. So then, to convert this one-pass mode to do authenticated data without the extra step it had for the authenticator, cuz we didn’t have time or space for that:

What it did was just encrypt the authenticated portion of the data, the cleartext data, but throw away the keystream for that part of the message. And, uh, it was a very small hack, but you know, you could prove that it degenerates back to the original case of encrypting it. You’re basically exposing some plaintext and throwing away keystream.
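
A rough sketch of that trick, with invented names and toy mode internals (this is not the production code): run the whole message through the mode as if encrypting it, so everything is authenticated, but for the trailing cleartext region, emit the plaintext and simply discard the keystream:

```c
#include <stdint.h>
#include <stddef.h>
#include <stdio.h>

static uint8_t ks_state, tag;                   /* toy mode state */
static uint8_t next_ks(void)     { return ks_state = (uint8_t)(ks_state * 5 + 1); }
static void    absorb(uint8_t c) { tag = (uint8_t)(((tag << 1) | (tag >> 7)) ^ c); }

/* Encrypt msg[0..clear_off) in place; leave msg[clear_off..len) in the
   clear but still authenticated. Returns the tag. */
static uint8_t seal(uint8_t *msg, size_t len, size_t clear_off)
{
    ks_state = 0x42; tag = 0;            /* would be derived from key/nonce */
    for (size_t i = 0; i < len; i++) {
        uint8_t c = msg[i] ^ next_ks();  /* mode runs exactly as normal */
        absorb(c);                       /* tag covers the "ciphertext" */
        if (i < clear_off)
            msg[i] = c;                  /* encrypted region */
        /* else: keep plaintext, throw the keystream byte away; the
           receiver, knowing the key, recomputes c from the plaintext
           to verify the tag, so it degenerates to the original mode */
    }
    return tag;
}

int main(void)
{
    uint8_t msg[] = "PAN+TRACKDATA4242";        /* last 4 digits left readable */
    uint8_t t = seal(msg, sizeof msg - 1, sizeof msg - 5);
    printf("sealed, tag=%02x; trailing \"%s\" still readable\n",
           t, (const char *)msg + sizeof msg - 5);
    return 0;
}
```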

So I did that in order to avoid temporary variables, and did things like moving things around in the buffer by swapping repeatedly. So it’s like, reverse this string, reverse this string, and then swap them by reversing the whole thing. I did stuff like that, like repeated reversals, and that’s kind of an interesting way to swap multi-byte fields without actually explicitly swapping them.

Did some, some little hacks like that and then basically built

Thomas: That’s ho... I just, the picture just crystallized in my head of what you’re doing there. That’s horrible.

Nate: Yeah, so that was to get the last four digits into the cleartext authenticated data field. So I knew a fixed offset: encrypt all these bytes up to here, and beyond that, don’t encrypt these bytes, and then put the authenticator tag at the end. And so all the plaintext stuff had to be gathered together at the end of the message, so that we could just have a single switch instead of switching on and off between ciphertext or not, which would’ve made the code more complicated.

So yeah, so you do like reverse a part of it, reverse the other part of it, and then reverse the whole thing.
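
That repeated-reversal trick, sketched with invented field sizes: reversing each piece and then reversing the whole span swaps two adjacent fields in place, with no temporary buffer:

```c
#include <stdio.h>
#include <string.h>

static void reverse(char *s, size_t n)
{
    for (size_t i = 0, j = n - 1; i < j; i++, j--) {
        char t = s[i]; s[i] = s[j]; s[j] = t;
    }
}

/* Swap adjacent fields A (length a) and B (length b) within buf, in place:
   reverse(A), reverse(B), then reverse the whole span, giving B followed by A. */
static void swap_fields(char *buf, size_t a, size_t b)
{
    reverse(buf, a);          /* reverse A */
    reverse(buf + a, b);      /* reverse B */
    reverse(buf, a + b);      /* reverse the whole thing: now B comes first */
}

int main(void)
{
    char msg[] = "NAME........LAST4";   /* invented layout: 12-byte + 5-byte field */
    swap_fields(msg, 12, 5);
    printf("%s\n", msg);                /* prints "LAST4NAME........" */
    return 0;
}
```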

Thomas: What, like, what horrible 1980s Dr. Dobb’s nonsense is this? Where did you get that?

Nate: I just came up with it, honestly. I mean, there’s a lot of stuff like swapping via repeated XORs, for example, that’s used in some kinds of things, and that kind of stuff comes up also when you’re doing hardware design. Like, if you’re designing FPGAs or ASICs, there’s a lot of tricks you do like that, because wires are free.

In ASICs, you’re just connecting one spot on the chip to another spot, so you can swap wires all you want to, and it costs zero resources when you’re moving data from one part of the chip to the other. So this was that kind of thing. But anyways, it succeeded. I was able to get everything into one kilobyte: all the message handling, the checks on the authenticator, the encryption mode, all of it in there.

I was really proud of that code. I built a fuzzer for it, and when I delivered it to the customer, I’m like, Here’s the code that implements it. It compiles down into less than a kilobyte for the microcontroller. It’s C code, so you can also compile it on the host side, and here’s a plugin API for using it to decrypt on the host side once you receive these encrypted credit card numbers.

And I fuzzed the heck out of this thing. Like, I fuzzed every bit of the message, you know, and all these different things. So, you know, I’m reasonably confident, with fixed-size messages and even this kind of funky handling and stuff like that, that this actually is correct in terms of the implementation. But it took a lot of time.

I mean, at CRI we would often say, if you wanna have high assurance in a system, you have to spend 10 times the resources on verification as you do on designing the system. And so I really did put a lot of time into the verification stage. It wasn’t just like, hack out some code and there you go, ship it and I’m done with the project.

So yeah, it actually worked. It shipped. And I got the wonderful news at the end of the project that, well, actually, the design firm couldn’t fit their stuff into their one-kilobyte buckets, so we’re going with the 8K part instead of the 4K part. And I was like, I just tore my hair out over this 1K!

Thomas: It’s also a problem that no one is ever gonna have again. Everything now is just ARM, right? It’s very disappointing. I have a question, though: why the obsession with doing a single-pass mode in the first place? You’re just burning compute there, not actually code space, right?

Nate: Well, there was a very limited amount of RAM; that was part of it. And also there was a latency issue. So when you swipe a credit card through a terminal, you wanna be able to show the user immediately that you have a valid swipe, that you know the four digits of their number are correct, things like that.

Like, you wanna be able to present to ’em that it succeeded; otherwise they’re gonna try to swipe again. And so it would’ve taken a few seconds to do like a two-pass mode. Whereas with the one-pass mode... I mean, already, AES on an 8-bit microcontroller is pretty slow, just AES.

Deirdre: Huh?

Nate: I don’t know what implementation we ended up using, but it was not optimized for 8-bit microcontrollers.

So it was just one of those things where it’s like, okay, we’re running at, I think, maybe 16 megahertz, I don’t remember exactly, 12 or 16. So everything was just, fit it in as little time as possible, latency wise, and as few resources as possible, to make the chip cheap.

David: And this was for swipes, not for chips at the time, I assume, right? Cause if it had been for chips, apparently it’s okay to make people wait like eight minutes for a chip to go through, based on my experience, uh, with point of sale systems.

Nate: I mean, sometimes people care more about the quality of things when it’s the first version of it, you know. It’s a reputation thing. And then you get focused on cost, and things kind of have a race to the bottom in terms of quality after that.

Thomas: Well, like, the chip reader also takes custody of your card while it’s doing the thing and then tells you when you can have it back. So you have a lot of leeway there, right? But when people are swiping, they can autonomously swipe over and over again, even if it never works.

Deirdre: Huh?

Thomas: I’m developing opinions about latency and, uh, payment card systems. Now, I didn’t, I didn’t, I didn’t have ‘em before, but now I do. My opinions are that David is wrong.

Deirdre: Um, I think you said that they needed to use AES for standards reasons, but I was just double checking: ChaCha-Poly, which is supposed to be much nicer for software-only implementations of an AEAD, was available at this time, but they just didn’t wanna use it?

Thomas: Not on an 8-bit micro, right? Like, you’d still be... cuz ChaCha wants a multiplier, doesn’t it? Am I wrong about that?

Deirdre: I forget, I dunno,

Nate: I don’t remember either, but there are much better choices nowadays. So by describing it this way, I’m not saying this is the recommended way to do it now, but

Deirdre: Oh, sure, sure, sure. But yeah, I was just sort of like, oh, if you’re doing a lower-capability device, the thing I would reach for first would be ChaCha-Poly, just because it’s explicitly supposed to be for, uh, lower-power, lower-capability devices, as opposed to AES-GCM, which is sort of like the gold standard for a device if you just need an AEAD.

Now, although we will have other people be like, that is like a fragile mode, that you should use OCB or, you know, whatever the less-difficult-to-implement, less-likely-to-break-on-you AEAD mode is now.

Thomas: Those people are right too, right? But like, the big distinction between Poly1305 and GCM is that, to get a constant-time implementation, you don’t need a carryless multiplication instruction for Poly1305, and you would for GCM.

Deirdre: I think that’s right. That sounds correct,

Nate: I mean, because of PCI, people wanted to use AES so that they would not get flagged for non-standard crypto and all these other things. Even the mode I was using was something that caused some pause, like, why isn’t it GCM, or why isn’t it, you know, EAX or whatever.

Deirdre: It’s like, because you only gave me 1K, like Jesus.

Nate: Exactly.

Deirdre: If you want the boring crypto, boring crypto, according to the PCI standard, I need more room.

Nate: Right. Exactly.

Thomas: So if you were going to add some, if you were gonna for some reason add symmetric cryptography to Apple II Karateka, what would you have used instead of AES? Would it have been a TEA derivative?

Nate: Yeah. I mean, my first inclination was XXTEA or whatever at the time. And possibly making it a little more robust by doing like a multi-key combination of it.

Thomas: Now you’d use NSA Speck.

Nate: I don’t know, that’s a good question. I’m not sure exactly. I mean, maybe. If I had unlimited resources at the time, uh, public key crypto obviously would’ve been the choice, not the symmetric key system.

Cuz when you have symmetric crypto only, on a consumer device that’s in the millions, you end up with a key management problem. And so that was a whole separate part of the design, which was like, how do you do key trees, both for the factory, being able to easily create the right key for the right device...

Uh, so they each had a unique key, and then do the production step of personalizing each one with its key. And then also revocation: if you need to be able to say, you know, this terminal’s been compromised, we’ve caught it sending spoofed credit card numbers or whatever out of it, don’t accept messages from this thing ever again.

And so that kind of stuff happens on the production side and the server side, and it’s definitely important to think about that.
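
A generic sketch of the key-diversification idea Nate alludes to (not his actual design): derive each terminal’s unique key from a master key and the device serial number with a PRF, so the factory and the server can both recompute any device’s key on demand instead of storing millions of them:

```c
#include <stdint.h>
#include <stdio.h>
#include <string.h>

/* Toy PRF stand-in; a real design would use AES-CMAC or HMAC here. */
static void toy_prf(const uint8_t key[16], const uint8_t *msg, size_t len,
                    uint8_t out[16])
{
    memcpy(out, key, 16);
    for (size_t i = 0; i < len; i++)
        for (int j = 0; j < 16; j++)
            out[j] = (uint8_t)((out[j] ^ msg[i]) * 13 + j);
}

/* Factory and server both run this: no per-device key database needed. */
static void derive_device_key(const uint8_t master[16], uint32_t serial,
                              uint8_t device_key[16])
{
    uint8_t ctx[8] = { 'P', 'A', 'Y', 0 };   /* domain-separation label */
    memcpy(ctx + 4, &serial, 4);             /* a real scheme would fix endianness */
    toy_prf(master, ctx, sizeof ctx, device_key);
}

int main(void)
{
    uint8_t master[16] = { 0x00, 0x11, 0x22 };   /* rest zero, demo only */
    uint8_t k[16];
    derive_device_key(master, 123456u, k);
    printf("device key byte 0: %02x\n", k[0]);
    return 0;
}
```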

Deirdre: Was there anything on these small devices that had to remember denylisted connections or keys or anything like that? Okay. Good.

Thomas: No,

Nate: No, not necessarily. But it’s kind of funny when you think about what the attacks are, if you’ve got these, you know, cheap payment terminals all over the place and people can swipe a card through them: what’s actually gonna happen when it’s out in the real world? So people could take them apart, extract the AES key from their device, create a full clone of it, and lie with it.

Inject all kinds of credit card numbers off of dumps they found on the internet into this thing and try to get money out of it. But that’s not likely to happen, because it’s very easy to revoke that number. And also, if you see like 10,000 credit cards come through one terminal in a day or something like that,

you know, probably something weird’s going on. There’s a lot of stuff you can do on the physical side as well, because when people swipe things, it’s human driven. And so there are natural variations in the timing. Nobody swipes like a robot.

Deirdre: Yeah.

Nate: If you do it right, you get a one kilohertz tone through it.

Uh, as far as the rate, you wanna have a solid one kilohertz swipe rate, which is in the audio band. But most people don’t do that. So you’ll get, like, the wow and flutter from old tape systems, when people slow down and speed up as they swipe or whatever. And so from that kind of thing, you can actually know a bit more about whether something’s legit or not.
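
A toy version of that "nobody swipes like a robot" check; the variance test and the threshold are invented for illustration:

```c
#include <math.h>
#include <stdio.h>

/* Given intervals (in microseconds) between magstripe flux transitions,
   a human swipe shows wow and flutter; a replayed or machine-driven one
   may be suspiciously uniform. */
static int looks_robotic(const double *iv, int n)
{
    double mean = 0, var = 0;
    for (int i = 0; i < n; i++) mean += iv[i];
    mean /= n;
    for (int i = 0; i < n; i++) var += (iv[i] - mean) * (iv[i] - mean);
    var /= n;
    /* Coefficient of variation below ~0.5% is machine-perfect (made-up cutoff). */
    return sqrt(var) / mean < 0.005;
}

int main(void)
{
    double human[6] = { 980, 1013, 1052, 1007, 962, 1031 };   /* wobbly */
    double robot[6] = { 1000, 1000, 1001, 1000, 1000, 1000 }; /* too clean */
    printf("human: %d, robot: %d\n",
           looks_robotic(human, 6), looks_robotic(robot, 6)); /* 0, 1 */
    return 0;
}
```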

But yeah, those are the kinds of attacks that were interesting to think about. Are people gonna use this to launder credit cards that they’ve compromised through other systems, to try to extract money? Are they gonna try to use it to double-spend stuff like, uh, benefit cards or some kind of coupon codes or whatever? It’s gonna be things like that.

Are people gonna try to repurpose this? Buy a bunch of these, like the CueCat scanner, and then reuse it to launch their own business of some kind, by cloning it and saying, We’ve got these things for cheap or free? You know, like, let’s just pick up a thousand of them and then start our own competitor.

Thomas: Nate and I have friends who’ve done things like that, which is why Nate’s concerned about it. So,

Deirdre: I see.

Nate: We have a friend where, like, what he did with the first million dollars he made or something is amazing. With his first million dollars, I think he purchased 800 mirrors for his house, just because he could. And, uh... so he just had mirrors everywhere?

Deirdre: Just to look at themselves.

Nate: Nah, I don’t know. I think he’s just like, Oh, I could buy like thousands of these so I’m gonna do it

Deirdre: right.

Nate: Most of us stop right before that "so I’m gonna do it" part, but this person just goes and does it.

Deirdre: That reminds me, I need to get a mirror for my wall. Um,

David: The founder of Justin.tv bought like three or four massive sculptures of dragons or dinosaurs or something that he just left in his garage, and was like, Well, I guess we could do this.

Thomas: This is before there was Alibaba, and, like, the joke got old because there’s really nothing you can’t just go type a query and buy a million of right now, like goats or... yeah. So you get the payment card thing, you’re doing consulting stuff. You didn’t stay at just consulting for that long, right? Like, I remember talking to you during this time. You have a startup after this.


Nate: Yeah. Uh, something that CRI did really well, which I was impressed with and always wanted to do myself, was to use consulting as a way to sort of pay the bills and solve fun problems for people, but also to get ideas about a startup, or things you could design.

Thomas: This is the kiss of death here, by the way. Like Tom and Nate. Tom and Nate’s startup advice.

Nate: It’s not good general advice for startups to start with consulting, but it has worked for some people I know, and myself. And it’s not that you’re getting ideas from your customers and then building a competing product or something like that. It’s not like that. Instead, you look for unmet needs.

It’s: I would sell this to all of my customers if I had a product that could do X, and nobody makes this yet. And it’s really fun because, you know, I went through Y Combinator later on, and they tell you to, like, you know, talk to customers, talk to customers, repeatedly. And talking to customers is great if you have a reason to talk to customers, or are good at it, or have an in or whatever.

But consulting is a great way that you’re forced to talk to customers and forced to pay attention to their needs and problems, because they’re asking you to solve some problem that they think they have. And it may not be the actual problem that they really do have, but that’s sort of where the insight comes from: being exposed to it.

Whereas if you just showed up off the street and said, I’m a startup person, I’ve got an idea how to solve your problem, they’re not always gonna tell you, Oh no, our problem is actually this other thing. So yeah, in 2011 I had been doing a lot of reverse engineering; the work on embedded systems had kind of led into more reverse engineering on the software side.

And I was doing what I call bulk reverse engineering. Like, I’d scripted a bunch of stuff, and I was helping a company, looking for, say, GPL license infringement and other kinds of license infringement, where it’s like, we’ve got this thing and it’s GPL, but people are putting it in their devices and then selling it, and they’re not paying us royalties, and we have a dual license, you know, for it.

So we wanna go after them for GPL, for taking the open source version of it and just not paying us. And so I did it by hand a bit, you know, IDA Pro and scripts and things like that, but it got really boring really quickly. So I was like, okay, if I could dump the firmware from any device and then feed it into some kind of system that could tell me all the code that was inside this thing... you know, we’re talking relatively large stuff.

Libraries, not single functions. If it could tell me all the libraries that were used to build this thing, with some confidence level, then I could make this self-service. And then the customer is just uploading a bunch of software into this web service or whatever, and it’s telling them, you know, Hey, I found these four libraries in it with 90% confidence or whatever.

Then you could tie this into their royalty system, of, like, mailing letters to the company. It’s like, type in the address of the company where you got this thing, and we’ll send them a letter that says, you know, hey, you GPL-infringed our thing, here’s the code. Maybe even send them a nice little comparison of the flow graphs of the two programs or whatever, to say, Hey, here’s the evidence. You know, pay up or whatever.

So I thought that was kind of cool. I’ve always been a fan of Halvar, really smart person, and, um, you know, the BinDiff work was pretty amazing. So I was like, well, BinDiff is great at finding, like, patches that are missing in things. But most of the tools that you see for reverse engineering are human-centric, and they focus on finding a needle in a haystack.

It’s like, you wanna find this one comparison instruction: is it a signed or an unsigned comparison? Because otherwise there’s a vulnerability here. And that’s one important problem to solve. But there’s also a different problem to solve, which is: I’ve got a million pieces of software; a person can never look at them all. Can you, with some threshold of confidence, tell me high-level things about this software, such as, does it contain this library or not?

To some level of confidence, regardless of what obfuscation someone’s applied to it. Like, obfuscation tools; again, not an individual recoding something by hand, but, does this basically look like the same thing? So I started trying to build that system, and originally it was focused on this problem of going after customers that are licensors, technology firms that are licensing stuff, and helping them make more money off of their portfolio from companies that aren’t paying them.
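
A toy sketch of that "contains this library with some confidence" idea: reduce each binary to a set of coarse features (hashed strings or lifted-IR n-grams, represented here as plain integers) and score the overlap against a known library’s feature set. Entirely illustrative; not the actual algorithm:

```c
#include <stdio.h>
#include <stddef.h>

/* Fraction of the library's features found in the target binary. */
static double containment(const unsigned *bin, size_t nb,
                          const unsigned *lib, size_t nl)
{
    size_t hits = 0;
    for (size_t i = 0; i < nl; i++)
        for (size_t j = 0; j < nb; j++)
            if (lib[i] == bin[j]) { hits++; break; }
    return nl ? (double)hits / (double)nl : 0.0;
}

int main(void)
{
    unsigned firmware[6] = { 11, 42, 97, 13, 7, 55 };  /* features from target */
    unsigned library[4]  = { 42, 13, 7, 99 };          /* known library signature */
    double score = containment(firmware, 6, library, 4);
    printf("library present with confidence ~%.0f%%\n", score * 100);  /* ~75% */
    return 0;
}
```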

So I got that going. And it turns out, though, that there aren’t very many firms that actually want to pay for that kind of service. There’s a bunch of companies that do, like, open source license management stuff, like Black Duck Software and things like that, but it’s all source code based, not binaries.

This was all based on binaries, and there’s some money to be made there, but it’s a relatively small market. There was not really a huge market of people that were like, Yeah, we just wanna feed binaries into this thing and have it tell us who to send letters to. So there was a while there where things were not going well revenue-wise, and I was trying to pivot, and I was like, Okay, what do we do next?

We’ve got this capability. We’d actually even come up with some cool ways to do at-scale binary analysis, and similarity analysis as well. And I was like, Okay, what other problems can we solve that might actually be successful? And, um, at the time the app stores were kind of taking off. This is like the early 2010s.

So by the 2012, 2013 timeframe, I was pivoting. And it was like, okay, app stores are taking off. There’s millions of apps now. And if there’s millions of apps, what things can we tell people about all these apps? Because it’s a black box, especially on iOS, where you don’t even get to inspect the binaries in most cases.

And so we built a system that would crawl app stores. We were crawling the iTunes App Store, Google Play, a bunch of Chinese Android stores. And by this time, what we had was an intermediate language. We’d, um, air quotes, "decompile"; it wasn’t full decompilation, it was more of a rough translation into this higher-level IR.

Thomas: Which the cool kids would call lifting, right? Yeah.

Nate: Yes, yes, exactly. And it didn’t need to be precise. The point of it was not to be precise and do lots of crunching on it with lots of CPU cycles to tell exactly what it’s doing. It was just like, do you roughly have these kinds of data flows, or these control flow combinations of things,

at a high level, and then identifying libraries and things. So it’s like, okay, we’ll just tell you, like, every SDK or library that’s compiled into these apps. Cuz iOS was static linking; for Android, a lot of times it’s dynamic linking, which was trivial. But then it was kind of like, okay, how do apps compare across platforms?

Like, how do Android apps by the same vendor compare to the iOS apps, and what insights can we gain? And it started taking off. Actually, we started getting big-name customers like Google and Facebook, and they were really interested in our data because we would have high numbers of apps going through these things.

You know, thousands and thousands of apps we were analyzing, and we were pulling down updates as soon as they were published. Updating, sending people differential stuff, like, Hey, this library just got added to this app. Someone added a new analytics library to it. We were telling people trends, and we could say, Here are the top libraries for games.

You know, people are using these Unity 3D libraries versus, you know, Unreal Engine or whatever. And, uh, that started really taking off. It was fun. And, um, we joined Y Combinator, kind of to see, like, can we get some actual funding? Cuz this was all bootstrapped off the consulting revenue originally, and we were profitable, but not hugely so.

And so we went to Y Combinator and worked on turning this into a bigger business. And at the same time, I came up with an idea that I was really proud of as, like, a business model, even if it’s not technically that interesting. But it was like, we’re crawling all these apps, we’ve got all this data on them: what things can we also reuse in a different market?

And so at the time, again, developers didn’t really have good views into what libraries they were even including in their apps. Like, sometimes a developer would be surprised when you tell ’em, Oh, you’ve linked this, and it’s like, Oh really? You know, it’s like, oh yeah, because we linked this in and it pulled in this other library’s dependency.

And so they’d have... there’s a crash reporting library in iOS where you’ll have like three or four copies of the same crash reporting library, because every analytics library also links against it, and it’s statically linked. And so you’ve pulled in multiple versions of the same library, and it’s terrible code bloat, but

Deirdre: For Rust users right now, if you wanna avoid this, there’s a tool called cargo-deny that lets you eliminate conflicting versions of a deep dependency. So

Nate: It’s an amazing future. I like it. But yeah, this was terrible. Like, you’re getting static blobs dumped on top of static blobs, all depending on the same underlying things but different versions. And so this is obviously a security problem, and there’s a supply chain security problem there as well.

So it was like, well, we can resell this data to developers, if we just go after, like, the most recent CVEs or whatever in mobile apps. We know what the most popular libraries are, so we can say, we’ve got this CVE, you’ve got this library that’s in 20,000 apps, notify 20,000 developers: you all have this security problem, update your dependency or whatever it is.

So it’s like, oh yeah, someone would probably pay five bucks a month, or 10 bucks a month, for this notification service. So what we did was, again, we ranked CVEs by the number of apps that have that library, to find the ones to actually build signatures for. And then we took those top CVEs, or, you know, patches.

Even if something didn’t even have a CVE, it’d just be like a branch on GitHub or whatever. Cuz again, security handling across open source libraries isn’t always the greatest.

Deirdre: no

Nate: Yeah. And so we were just like, okay, let’s create a signature for this, dump it into our latest scans of the app stores, and it’ll be like, Okay, there’s 20,000 apps with this thing.

And we created a UI for people, this was like a weekend project, where you could go in and enter the apps that you use, and it would tell you which things were vulnerable to which problems. And we dumped a couple different big bugs into that. And then we said, you know, hey, if you sign up with us, we’ll give you advance notice of these kinds of problems as we find them in the future.

And so, I think in a week or two, we had 3000 developers actually sign up,

Deirdre: Yeah.

Nate: Yeah, for monitoring their apps.

Deirdre: Good for them.

Nate: For me it was, yeah, I mean, going from, you know, tens of customers to 3000 instantly was kind of cool, even if they weren’t paying for anything yet. And that was kind of how we got to that scale and got our initial seed round.

And, uh, I still think there’s a really big market there for helping developers with insights into their code. I mean, there’s, like, GitHub Copilot, for example, and other things like that, that are helping people build stuff, but also just giving people insights into the things you’ve already done.

It’s like, Hey, you built this app this way. You might have these problems or someone’s watching over your shoulder to make sure if there’s security problems in the future, we’ll let you know.

Deirdre: That's wild, because you built something that's not, you know, a hundred percent every time, it's not supposed to be foolproof, but you're literally looking at the binary and lifting it to some other representation, which is theoretically even better in a certain way than "I have a package.json" or "I have a Cargo.toml."

I have all my dependencies listed. But at the end of the day, the blob that you're trying to execute is the source of truth. And nowadays you've got Dependabot, which is now owned by GitHub, and it'll do a lot of those things that you had to build from scratch, but it depends on an encoding of all of your dependencies all the way down.

If you don't have that, you're out of luck. If you're just writing C and you stick it on GitHub, you don't get any of that. You'd need a SourceDNA, because there's no way to just go look up those dependency versions and associate them with a CVE.

And I don't think they do any analysis themselves. They rely on the Rust security advisory project or whatever, which is humans filing security alerts against versions of crates and things like that. Whereas you were doing it all by hand. You weren't even only doing CVEs. You were just like: this is a bad branch of this library that someone has compiled into their project.

You might wanna fix that. That’s a lot of work.

Nate: It was.

Thomas: There's also the advantage of looking at the binary that you can tell if code's actually being used. Like 99% of Dependabot alerts are useless, right? They're, you know, JavaScript prototype injection problems in JavaScript that you don't even use. But if you're looking at the binary, you can see if it's dead code or not.
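As a toy illustration of that point (all the function names here are invented), a manifest-based scanner alerts on the mere presence of a library, while a binary-level view can at least check whether the flagged code is reachable in the recovered call graph:

```python
from collections import deque

# Call-graph edges recovered from a binary: caller -> callees.
edges = {
    "main":          ["parse_config", "net_loop"],
    "net_loop":      ["handle_packet"],
    "parse_config":  [],
    "handle_packet": ["png_read_chunk"],   # the symbol an advisory flagged
}

def reachable(start: str, target: str) -> bool:
    """Breadth-first search: is `target` reachable from `start`?"""
    seen, queue = {start}, deque([start])
    while queue:
        fn = queue.popleft()
        if fn == target:
            return True
        for callee in edges.get(fn, []):
            if callee not in seen:
                seen.add(callee)
                queue.append(callee)
    return False

# Unlike a manifest-based alert, this only fires if the code is live.
print(reachable("main", "png_read_chunk"))  # True -> worth alerting on
```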

Deirdre: Yeah. Yeah.

Nate: That was one of our competitive advantages, actually, when we were doing the analytics data. A lot of this is by hand, there's a lot of grunt work you do by hand to get things off the ground, but by hand we identified the important public APIs of all these libraries, because a library isn't just one function that does one thing.

It's usually a collection of things. Larger companies like Google and Amazon offer entire SDKs, with maps and game analytics and whatever, all this stuff thrown in there. And it's like, okay, which parts of the Maps SDK is this app using? Is it using the fine-grained location tracking, or is it using geofencing?

So we would categorize and hand-label some of these different APIs. And we knew which APIs were important because, again, we could sum them up across all the apps in our collection and say: okay, what are the top five functions that are called from outside of the library, from the apps themselves, across a huge collection of apps?

And then we'd say, okay, let's hand-label those top five functions as geofencing or whatever the functionality was, by looking in the docs. And then we could tell the marketers or people subscribing to this data feed which parts of Google's APIs are taking off this month, and which parts are being used less and being replaced with competitors' APIs.
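A toy version of that aggregation step, with invented symbol names; the real pipeline worked over lifted binaries rather than tidy dictionaries:

```python
from collections import Counter

# Hypothetical per-app data: which SDK entry points each app's own code calls.
calls_per_app = {
    "app1": {"GMSGeofence_start", "GMSMap_render"},
    "app2": {"GMSGeofence_start", "GMSLocation_fine"},
    "app3": {"GMSGeofence_start"},
}

# Sum across the whole corpus; the most widely called entry points are the
# ones worth hand-labeling first ("geofencing", "fine-grained location", ...).
totals = Counter(sym for calls in calls_per_app.values() for sym in calls)
for sym, n_apps in totals.most_common(5):
    print(f"{sym}: called from {n_apps} apps -> label this one first")
```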

I mean, that was really interesting. And that's something I've always loved as a business model: take a whizbang technology and give it to the least technical people in a way that they can use it. In this case, it's like giving marketers IDA Pro at scale. Marketers are people that buy marketing data.

They're never gonna learn IDA Pro and be able to decompile an app and figure out what it's doing, and certainly not at scale. But if they had that data, they could make really interesting decisions about how they change their competitive strategies or their marketing strategies.

Deirdre: Yeah.

David: Yeah, so around the time that you had exited was when we were starting to try and figure out how to spin Censys out into a startup. And I remember I passed through Illinois, and I talked to Thomas, and I was like: well, we've got these network insights, but they're good at some stuff and not other stuff.

And no one really wants to pay for TLS vulnerabilities, but it seems like knowing what's on your network, and what server software you're running, would be good. And oh, by the way, has anyone ever thought of bulk reversing a bunch of things in the app store and saying what libraries they use? And Thomas was like: well, the good news is you hit the nail on the head on that one.

The bad news is you should have met Nate a few years earlier. And I am kicking myself for not joining his company.

Deirdre: Was that you, David, or was that Thomas?

David: That was Thomas. He said he was kicking himself for not joining SourceDNA. And my claim to fame is that I came up with the idea for SourceDNA independently, five years late.

Deirdre: There you go.

Nate: No, that's great. And it's all different iterations of the same thing. I mean, we had some deficiencies. There was one app where they had backported a fix, just a point fix for a specific problem, without backporting the entire point version of the library. And so we misidentified it, or correctly identified it, I guess, as the old library,

because we didn't look for this one branch that had changed. So yes, if you're trying to prove whether this one app is vulnerable or not, we're not the tool for that; we don't solve that very well. There are ups and downs to each of these approaches. But when people go and attack this same problem different ways, you can take different angles at it. And the one I thought was kind of the unique insight for us was that if you're doing things at scale, with lots of different data, your approach is very different from the traditional reverse engineering route, where you're going over a single binary with a microscope and tweezers.

Instead, by counting the number of function calls to a labeled API, cuz you've got a symbol for it or whatever, from every bit of code in an app, you can kind of find the boundaries between APIs and libraries, because within a library things will have kind of the same type signatures, the same set of code.

Once you've identified that, it's like: okay, if I've got a caller that only appears in 10 apps, that could be a wrapper library that this developer uses around this thing. If it's in a million apps, it's a public API, because nobody owns all million apps on the app store. So you can do these things in aggregate, or statistically, that otherwise would require really sophisticated analysis.
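A sketch of that prevalence heuristic; the thresholds here are invented, but the shape of the classification is the point:

```python
def classify_symbol(apps_seen_in: int, corpus_size: int) -> str:
    """Rough heuristic from the discussion: a caller that shows up in a
    handful of apps is probably one developer's private wrapper; one that
    shows up across a large fraction of the corpus is a public library API,
    because nobody owns a million apps. Thresholds are made up."""
    if apps_seen_in < 50:
        return "private wrapper / app code"
    if apps_seen_in > 0.01 * corpus_size:
        return "public library API"
    return "shared but niche (maybe a smaller SDK)"

print(classify_symbol(10, 1_000_000))         # private wrapper / app code
print(classify_symbol(1_000_000, 1_000_000))  # public library API
```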

Thomas: First of all, I like how when Dave retells anything I've done, I'm a 1950s movie producer: "Hey kid, the good news…", chomping on my cigar. But this is a thing with me. I don't know, I spend too much time on message boards, right? But there's a common perception among technical people of what serious reverse engineering looks like.

And that perception is Nate pretending to take a vacation while reading assembly with a pen and paper, right? But in reality, everyone's using tools, everyone's looking at higher-level abstractions. You don't start at the first line and then read straight through the code.

I remember finally getting this through my head in the mid-aughts with, I think it was Pedram Amini, or whoever did Pai Mei. Apologies to whoever did Pai Mei. But that was setting debugger trace points at the beginning of every basic block in a program, which is trivial.

It sounds complicated, but it's not complicated at all. You take the assembly, you mark every basic block, and then you run to get a baseline trace of which breakpoints get hit. And then you rerun it and do something different with the application,

like send a message or push a button or whatever, and then you see which different things get lit up. And right away you've gone from the entire disassembly of the program to a tiny set of functions that you have to go read, and you can really quickly get a sense of the structure.

And that's a simple idea, right? But if you're looking across every application on the app store, there are a million other things you could probably come up with to get structural stuff from. And that's before you get into the fact that you're usually not looking at raw assembly; you're looking at the lifted assembly, which is much higher level than the raw assembly.

But I feel like people have this general idea that what binaries are doing is unknowable, that you really need source code. It's better to have source code than not have source code, I'm not making that crazy claim, right? But there are ways in which binary analysis is superior to source analysis, and it's certainly not as opaque as people think it is.
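A minimal sketch of that differential tracing idea; the address sets below stand in for what a debugger script would dump after each run:

```python
# Hit tracing, diffed: record which basic blocks fire during a baseline run,
# rerun while doing the action you care about, and read only the blocks
# unique to the second run. Addresses here are synthetic.
baseline_hits = {0x1000, 0x1040, 0x10A0, 0x2200}          # idle run
action_hits   = {0x1000, 0x1040, 0x10A0, 0x2200,
                 0x3300, 0x3340}                          # run while sending a message

new_blocks = action_hits - baseline_hits
for addr in sorted(new_blocks):
    print(f"block {addr:#x} only runs when sending a message -> read this first")
```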

Nate: Absolutely. Yeah. On the hit tracing, one of the cool things about it that a lot of people thinking about it in the abstract didn't realize is that once you've got the initial trace, you remove those breakpoints, and now the binary runs a lot faster. A lot of people thought, oh, it's too slow,

you won't be able to set breakpoints on every basic block. But once you've removed the ones already hit, all you care about is whether any remaining breakpoints get hit. That also goes to a reverse engineering process I like to apply, which is probabilistic. You just set random breakpoints in the binary and count how many times they get hit.

It's kinda like how profiling works when you're using gprof or whatever: you just randomly sample the call stack, and soon you get an idea of how the program's working, where it's spending most of its time when you're doing certain things. When I'm interacting with the UI, it's in this area of the code; when it's sending network traffic, it's in this area of the code. So a lot of this stuff can be done in a kind of sloppy manner at first, and then once you know at a high level where to look, you can zoom in on the parts you care about.
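And a sketch of that probabilistic variant, simulated in miniature; a synthetic execution trace stands in for the running program, and the "breakpoints" are just set membership:

```python
import random
from collections import Counter

# Pick random addresses to break on and count hits, like a sampling
# profiler (gprof-style). All addresses and counts here are synthetic.
random.seed(1)
hot, cold = [0x4000, 0x4004], [0x8000, 0x8100, 0x8200]
trace = random.choices(hot, k=9_000) + random.choices(cold, k=1_000)

breakpoints = set(random.sample(hot + cold, k=3))   # our random breakpoints
hits = Counter(addr for addr in trace if addr in breakpoints)

for addr, n in hits.most_common():
    print(f"{addr:#x}: {n} hits -> the program spends its time here")
```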

David: So what did we learn from all of this?

Nate: Well, as far as cryptography goes, we were talking before about cryptography being brittle versus robust, and how you design it. And I always like to think of the different design principles that you apply to the whole system to avoid having exploits, or to avoid having additional complexity that you have to deal with.

For example, consider a simple web cookie, an HTTP cookie. You could have one where it's encrypted and authenticated: you send data to the client, the client sends it back to you, you verify it before you trust the data, things like that. Or you could have a PRNG that spits out opaque IDs, and you just send opaque IDs

to the client. You store everything on the server side, in a database or wherever, and you look it up by opaque ID. The latter doesn't sound very good; it takes additional server resources, things like that. But depending on the system and how many users you have, or its other characteristics, it may be worthwhile for the security tradeoff, because given an opaque ID, there are maybe a small number of attacks against that:

your PRNG's weak, or your entropy's bad, or whatever. But if you have an encrypted and authenticated cookie, there are many, many, many more ways that can fail. You can wrap counters, Thomas's favorite attack. You can have side channel attacks against the key used to decrypt the cookie

by doing timing attacks or whatever; there are padding oracle attacks; all these different things are suddenly within scope, and now you have to know a lot more about crypto and rule out a lot more things in your design and implementation. So what I've always advised my clients and people I've worked with is: if you can avoid crypto altogether, or use maybe just a PRNG or something, that's way better than designing a system that has crypto and then having to verify it to the nth degree to be sure you haven't created new flaws.
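A minimal sketch of the opaque-ID approach, assuming Python's secrets module as the CSPRNG and an in-memory dict standing in for the server-side store:

```python
import secrets
from typing import Optional

SESSIONS: dict = {}   # stand-in for a server-side session database

def create_session(user_id: int) -> str:
    token = secrets.token_urlsafe(32)      # 256 bits from the OS CSPRNG
    SESSIONS[token] = {"user_id": user_id}
    return token                           # this opaque string is the cookie value

def lookup_session(token: str) -> Optional[dict]:
    return SESSIONS.get(token)             # no decryption, no MAC, no parsing

cookie = create_session(42)
assert lookup_session(cookie) == {"user_id": 42}
```

The encrypted-and-authenticated alternative would need an AEAD, nonce management, and key storage and rotation, which is exactly the extra failure surface being enumerated above.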

David: Or, even worse, building a system that is really just doing something like the database lookup, except then putting it inside of a JWT anyway, where you're using cryptography but not actually getting any value out of it. That's something I caught in a code review once, where I was like: we have a JWT here, but you're actually just using it as a lookup ID. Delete all the crypto.

Nate: That's why I think all cryptographers, or at least cryptographic engineers, should do consulting for a while. Because when you learn this stuff just from the books, or from looking at implementations of things that are good, you think, okay, this is kind of how to do it right. And you get a sense for it, but you don't realize how bad it can be when people do it wrong.

Cryptopals is great for showing people attacks, but the real world is worse: Cryptopals shows one type of attack against one design. Each exercise only has, I think, one flaw in it. It's not like you hit a system and it's got this flaw and this flaw, and these two flaws combine with each other to make it way more exploitable than it would be if it only had one of them.

And every time I would talk to people like Thomas, saying what he had found in some review, I would just be flabbergasted. I'd be like: why would someone even do that? If you asked me to come up with the worst way to use this particular protocol, I couldn't, with my best imagination, figure out how to do it that poorly.

It's just astounding. So yeah, any kind of practical experience with actually reviewing fielded systems, and the things people are cooking up, is much better for understanding the underlying primitives.

Deirdre: Hmm. I would push back a bit. This sort of new world, where we're trying to design things with security, of course, but privacy by default, really starts pushing you into a different place, where control over your data, and consent to what's done with your data when you're just a human user, probably necessitates cryptography in one way or another.

We have much nicer libraries to use, and years of much less fragile constructions. If you're doing messaging, you can slap Signal on it, or now MLS. There are a lot of things that have been tried and tested in various systems that you can just slap on, and maybe we'll get to a world where fully homomorphic encryption makes end-to-end encrypted queries scalable, private, and performant.

But how do you feel about that sort of world? You've seen how a lot of things can go wrong, but we're trying to move toward a world where it's not just the big Google or the big software-as-a-service provider that has access to everything you ever do on their service, because that's just the way you build things.

We're trying to move away from that. Well, I would like to see the world move away from that. Does that make you scared, or do you think we're in a better place that lets us do that better than we used to be able to?

Nate: I mean, I agree with you, I think that's a good thing. For example, even just providing users data export, and making sure that the export includes, first of all, all the data that the provider has on you, but also kind of the way they interpret that data, gives you a better sense of what they're doing with it. And you don't necessarily know everything they're doing with it;

it could be leaking somewhere else or whatever. But if someone sends me back a record in a data export that says "user ID" and it's my Social Security number, that's a really bad sign, right? They're probably sending this number all over the place and logging it all over the place, and that's not good.

So any kind of visibility and transparency you can give users into how the service, or the designer of the service, thinks about the data is definitely good. It can help reveal these kinds of mistakes, at least in the format. But I think cryptography is important, and I'm not saying don't use it, just be prepared to invest the resources to do it right when you do need to use it, and don't use it just because it seems cool or blockchain-y or whatever. JavaScript crypto in the browser is kind of one of those things where it comes back to people going, "but wouldn't it be cool if?" It's like: yeah, but you don't have the root of trust there, you know?

Deirdre: It would be really cool, but…

Nate: Yeah, yeah. But let's live in the real world, where the browser actually operates a particular way,

and scripts are composed on the page in a particular way, and the same-origin policy, and all this. Let's think more about the users and what they're entrusting to you in that system, and not how cool it would be if. But yeah, I think it's great, these new things coming online: being able to have circles of trust, threshold cryptography being used more for services, being able to say things like, "if two of my friends want to recover my data after 20 days, I'll let 'em," or something like that.

Those kinds of things just didn't exist before. And the same in terms of security in general. For example, on iOS, knowing that when you uninstall an app, it's actually gone: this thing can no longer run, it can't leave any hooks in the system, whatever. That kind of capability, even just for end users, is amazing to have.

And I wish more systems had that kind of thing. When I say delete my account on a particular web service, did it just mark a field in a record in a database somewhere that's gonna be backed up for 10 more years? Or did it actually wipe things? And if it wiped, did it wipe it seven times with a DoD method, or is it on a flash drive somewhere where the file system just marks something unused and maybe it'll eventually get recycled?

Deirdre: Or maybe they encrypted it to hell, and they threw away the key, and they actually overwrote the key, and it is theoretically impossible to recover that data.

Nate: It's very hard to overwrite a key on generic flash storage, because of wear leveling and stuff like that, with writes balanced across the different pages. You actually have to design carefully how the underlying hardware is storing things, to be sure that when you wipe something, it truly has been wiped.

So again, that goes back to the dependencies. When someone just says, "oh yeah, we'll encrypt it and throw away the key": at what level of assurance are we saying we've thrown away the key? At what point do we know it actually is gone, and to what level is it gone? I think a lot of people don't think about that.
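As a toy model of the crypto-shredding Deirdre describes (this uses the third-party `cryptography` package; the data structures are invented), with Nate's caveat applying to the key bytes themselves:

```python
import secrets
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

# Each user's data is encrypted under its own key; "deleting" the user means
# destroying the key, after which the ciphertext is unrecoverable. Whether the
# key bytes are *actually* gone at the flash level is the hard part Nate raises.
keys: dict = {}    # user_id -> AES key (would live in a KMS/HSM in practice)
blobs: dict = {}   # user_id -> (nonce, ciphertext)

def store(user_id: int, data: bytes) -> None:
    key = AESGCM.generate_key(bit_length=256)
    nonce = secrets.token_bytes(12)
    keys[user_id] = key
    blobs[user_id] = (nonce, AESGCM(key).encrypt(nonce, data, None))

def crypto_shred(user_id: int) -> None:
    del keys[user_id]   # ciphertext may linger in backups; the key must not

store(7, b"address, SSN, ...")
crypto_shred(7)
# blobs[7] still exists, but without keys[7] it's just noise.
```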

Thomas: This is what I think of when people say we're gonna do a completely open Linux phone, instead of an iPhone or a flagship Android phone, for security, cuz it'll be Linux and completely transparent. And I'm like: did you do the work to make sure that you could actually erase keys? Cuz probably not.

You're just creating Black Hat talks for two years in the future, if you're successful.

Nate: And this comes in with Spectre, for example. With Spectre attacks, people are like, "oh yeah, I zeroized the key in RAM." And then it was: okay, oops, my memset was getting optimized out by the compiler because it was dead code at the end of the function. Okay, so I've kept the memset.

Oh, great, now I'm zeroing the RAM. Did I zero it from the cache? Well, I can't talk to the cache directly. Okay, did I get rid of all the data-dependent branches on this value when I zeroized it? Did I even clear registers? Like, I loaded a key, I did my memcmp that was comparing the secret to something, a timing-safe memcmp, let's say.

So I wiped it from RAM, but did the memcmp leave the value it was comparing against in general-purpose registers, where the next function returns it to the user, or makes it accessible if the user can force a register save before their code runs? There are just so many things like that, all the way down to the gate level of the hardware, that as a designer you have to control if you're gonna offer high assurance.
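For what it's worth, the one layer of this that high-level code can address is the algorithmic-timing one; a sketch using Python's hmac.compare_digest as the "timing-safe memcmp" mentioned here. Everything below it, registers, cache lines, dead-store elimination of a wipe, is out of a high-level language's reach:

```python
import hmac

def check_token(supplied: bytes, expected: bytes) -> bool:
    # compare_digest avoids a data-dependent early exit; a naive
    # `supplied == expected` can return at the first mismatched byte,
    # leaking how much of the secret matched.
    return hmac.compare_digest(supplied, expected)

print(check_token(b"attacker-guess", b"real-secret-value"))  # False
```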

Thomas: This is why, by the way, I'm such a weirdo about crypto vulnerabilities. Again, I'm not a cryptographer, I'm a vulnerability researcher that just has a weird interest in cryptography. But I'm obsessed with bug classes, right? It's much more interesting to have the blueprint for a bazillion bugs than it is to have just one bug.

I remember being a teenager and working out an exploit, then finding it somewhere else, and feeling completely all-powerful, because I was a terrible nerd. But cryptography is like that to several orders of magnitude, right? It's not enough to make sure you've counted the bytes properly.

You have to understand the microarchitecture, and you have to understand all the different ways that attackers can influence the state of the system to bring up a pathological condition. Whereas in normal systems security, we're still working on memory safety, right?

That's still kind of cutting edge, getting things built in a memory-safe language, which just comes down to counting things properly in the actual instruction set architecture. In cryptography, the complexity of the constraints that you have to enforce is so much more interesting.

I wanna believe that we haven't even scratched the surface yet. Although if you look at the last two years or so of vulnerability research, we're not exactly kicking down the doors with crypto vulnerabilities. So we need to pick it up a bit. But I'm hoping to see a lot more of them.

David: I'm with you that buffer overflows are counting things. But I think use-after-frees are a much bigger problem in practice in high-value software, and a much more complicated problem than just counting things, possibly, given the NP-hard-ish nature of the SAT solver that gets shoved into every compiler for a type-safe or memory-safe language.

Probably, in terms of computational complexity at least, kind of similar to some of the stuff we're looking at in crypto.

Nate: I mean, what's coming next is: how can you prove that your new processor is safe against Spectre and Meltdown kinds of vulnerabilities? We saw SGX fall, for example, and Intel abandoned it. CHERI's great, I like CHERI, C-H-E-R-I. But it's one of those things where you start looking at the whole system again: systems-on-chips, as well as really complex processors that are almost systems-on-chips nowadays, even for desktops.

And it's like: which cryptographic primitives? Dan Bernstein had a big focus on designing his cryptographic primitives to not involve data-dependent branches, because he was concerned about timing attacks and side channels in general. But when you're designing new cryptographic protocols, and you're designing new processors that are gonna run those protocols, how can you do both in a way where there's a nice composability? Where the protocol fits well with high-performance caching and branch target prediction, but maybe there are hints from the compiler that tell the processor what it should or shouldn't store in its LRU or whatever. It'll be interesting to see where that goes.

Deirdre: It'll be interesting, although I know some computer architects, and they don't talk about making their architectures, microarchitectures, or designs any nicer for building good cryptography into them. So I'm not very hopeful. And they are very blasé about the latest and greatest microarchitectural attack.

They're basically like: oh, you looked under the hood for the first time, did you? Look at that, all your data is leaking, all the time. Did you think you got those very fast computers for free?

Nate: It’s scary. Yeah.

Deirdre: Yeah, it's scary. But this is very interesting, because I totally believe you that if you really want high-assurance security of your software, let alone your cryptography, you basically have to nail down the piece of sand that's actually executing your operations. I believe that thoroughly.

And yet we write software that we just compile to wherever. Maybe you have a JIT, and we expect that to operate over keys, in JavaScript for example, or WASM or whatever it is. And then we might have ARM, and ARM just runs everywhere, on a whole bunch of devices that will execute ARM assembly.

So, are we fucked? Basically, is software fucked, if we're just trying to handle this as best we can in software? Or do we really need to control our hardware, and everything that it runs, all the way down, for any sort of assurances?

Nate: I'm probably not the best person to answer the first part, because I gave up on that a long time ago, in terms of trying to make the high-level stuff behave: you know, writing Java code that translates into particular native code, avoiding certain instructions, things like that. You just can't prevent that kind of thing.

Even the compiler is kind of the enemy when it comes to C code. That's why DJB does hand-coded assembly implementations of a lot of his algorithms, to try to avoid stuff like that. And it's funny, in the embedded world you're much closer to the sand, and yet even there, there's enough complexity that you have to be very, very careful and

do lots of testing and validation and stuff like that. So I don't know if there's really an answer to that at the high level, at the complexity that desktops and servers are. It's more like: you escape to a smaller environment where you do control the sand, handle your key management and whatever there, and then for the rest of it you implement compartments that are keyed off of that secret state, per compartment. Kind of like an SGX-type design, where you've got compartments, but the key management is handled outside the processor, and the processor never has direct access to the memory encryption keys or things like that.

Deirdre: Interesting. Can you tell me more about that?

Nate: Well, I mean, just as a species, we don't work well with complexity. And when you have things like provable security, or code verifiers to check that the implementation matches the design and that the design's correct as well, or automated generation of code from a design that's been verified:

those don't work at scale. They work at small programs. Our ability to reason about the state space of an entire system with confidence, maybe it's kilobytes and megahertz, not terabytes and gigahertz. And again, that's kind of hand-wavy. But the design and verification effort needed to get a processor working correctly, in an environment with active attackers trying to influence things with physical control of the environment, like temperature and voltage and optical glitching and all these other things:

it's a high-threat environment. And only at the smallest scale have we been able to create something that will last for a little while.

Deirdre: Wow. I will kind of go the other direction. Instead of "we're all fucked unless you invest in all this intense verification all the way down to the smart sand," I'd say most people writing software and trying to do it securely, including a lot of people writing and deploying cryptography, which is, you know, software, probably don't have to worry about this sort of threat environment.

But there are plenty of people who do, and they usually have more money to invest in this high-assurance verification, and they might be deploying their own hardware, which they can invest trust in. Right?

Nate: Well, there's a problem of equity here. You've got big firms like Google that have the money to invest in both the security engineering and also, in some cases, their own silicon: TPUs, et cetera, or Titan, for example. And then you have the low-end people who aren't really experiencing huge threats;

their problem is more like ransomware, a mass-scale kind of attack. And then in the middle you have companies, some kind of government contractor for air conditioning parts or whatever it is, that actually face a high threat because they have some kind of criticality, but don't have the expertise or resources to defend against that high threat.

And those, I think, are collectively the groups that put society most at risk, because they have highly funded, highly motivated attackers against them, but they don't have the inherent culture and support for defending themselves.

Deirdre: Right. Huh. Okay.

David: I think the running theme in some of this advice, you know, you opened with saying avoid fragility, make things robust, and we've touched on limiting complexity a lot. But sometimes it's not clear what the less fragile decision is, or what the less complex design is. Do you have anything you can say about what it means for something to actually be robust, or for something to be simple versus complex?

Nate: A lot of it comes from enumerating attacks and deciding which ones you don't have to care about. And usually, with the robust design, you can throw out whole classes of attacks, or vantage points for attacks. For example, say you've got one master secret and you hash it with a serial number to get the derived keys that get injected into particular personalized devices in the factory.

Then you don't have to worry too much about someone decapping and extracting keys from a single device, because, okay, you've compromised that one device that you bought and you own. Hopefully we don't have so much trust in the client-side device in our overall system that that's a critical failure for the entire system.

We shouldn't be fielding things like that. We should revoke that device and move on. And if you have appropriate monitoring and things like that to be able to do so, then you can respond and move on easily. But if someone can, by reverse engineering and dumping a key from a single device, gain leverage against your entire fleet of devices, or even smaller subsets of that fleet, then you need to be more careful.
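A minimal sketch of that key-diversification scheme; HMAC-SHA256 as the derivation function is an assumption here, any PRF or proper KDF would do:

```python
import hmac, hashlib

# Derive each device's key from a master secret plus the device serial, so
# extracting one device's key (by decapping, say) compromises only that device.
MASTER_SECRET = bytes.fromhex("00" * 32)   # placeholder; would live in the factory HSM

def device_key(serial: str) -> bytes:
    return hmac.new(MASTER_SECRET, serial.encode(), hashlib.sha256).digest()

k1 = device_key("SN-000123")
k2 = device_key("SN-000124")
assert k1 != k2   # per-device keys: one leaked key doesn't unlock the fleet
```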

You need to carefully consider that that kind of attack is in scope, and also that your system might be more fragile than you thought it was when you first designed it. So there are kind of these properties, based on the high-level choices you make when you design it, that show up as you think through the attack tree and the various bits of leverage and approaches an attacker would take.

If you keep getting confirmation of "yep, that fits within my design parameters," et cetera, that's probably a more robust design. The one where it's "oh, wait a second, as soon as they can do that, we now have to replace all this stuff," it's like: eh, maybe we should move some of that server-side, or not even collect that secret data, or something like that, if we can avoid it.

Deirdre: So this is interesting, because it's both threat modeling, thinking like an attacker, but also thinking like a complex-systems, sustainability, reliability sort of person, or sort of architect, I guess.

Nate: Yeah. You have to think through the whole life cycle. It's not "what attacks can I prevent on this thing," but "is the entire system survivable over the long run?" Whether survivable means profitable, or some kind of acceptable loss, cost of materials thrown away in the trash or whatever it is. Is this survivable on whatever time scale is appropriate for my venture, whether it's a nonprofit or for-profit kind of thing?

And that survivability includes dealing with things like spam and fraud and waste and abuse, and how many people are gonna have to answer support phone calls when this thing goes wrong. Does the user even know what to do when something goes wrong? Do I call someone, do I email them?

And how do we replace it in the field? All that life cycle stuff is critical. And again, it gets to whether the system's robust or brittle. If "as soon as I've got a support incident affecting more than a thousand devices, our company goes out of business," that's not survivable.

Deirdre: Yeah. All right. Branching off that, do you have any fits-on-a-Post-it design principles, pithy words of wisdom for cryptographic systems, either at the scheme level or at the level of the software system you actually deploy, to impart based on that sort of holistic view of life cycle management?

Nate: Every system has to solve the fraud problem if it's sufficiently large. And that could be everything from spam, like I said, to impersonation, to just annoyances. At some point you'll have to have some staff dealing with support and responding to attacks or issues.

Another one would be: count the number of bits of state. Paul at Cryptography Research would say, count every flip-flop in your system. If you can't even enumerate all the flip-flops, or say something reasonable about the use of those flip-flops, or about what would happen if an attacker could control one or more, or a subset, of those flip-flops, then the system is probably too complex to have high assurance.

And watch out for debug access. In the case of support software, this would be the console that your customer support representatives use to reset passwords or whatever. That's the high level; at the low level it's JTAG ports, or other kinds of board-level or even on-chip wires you routed just to assist in troubleshooting a particular problem, or something fused off which is actually dangerous if it gets re-enabled.

Watch out for debug.

David: Right. Well, I think that's part two. We've gone on for a while, so thank you for coming back out.

Thomas: You're, uh, you're consulting again, right?

Nate: Yeah, Root Labs is active. We're actively doing projects.

Thomas: The small audience for this podcast, I think, intersects pretty nicely with Nate's customer base. I highly recommend people check Nate out at rootlabs.com. This has been a Root Labs promotional episode, and I'm happy to have done it.

David: And don't forget to go to merch dot security cryptography whatever, and buy, uh, merch that is not entirely black.

Deirdre: Yeah, it comes in pink now. Nate Lawson, thank you very much. This was good.

Nate: It’s been a blast.

Thomas: Nate.