Alex Gaynor

We chat with friend of the pod and special guest Alex Gaynor, former deputy chief technologist at the FTC and all around good Security Person™. Join for nerdery about WebAuthn, stay for accidentally melting down GitHub APIs around November 2020!

Watch on YouTube: https://www.youtube.com/watch?v=gBoGvyvsSi4

Links:


This rough transcript has not been edited and may have errors.

David: Hello and welcome to Security Cryptography Whatever. My name is David.

Deirdre: I’m Deirdre.

Thomas: I’m Thomas.

Deirdre: Yeah.

David: And with us today we have special guest Alex Gaynor. How are you doing, Alex?

Alex: I’m doing great. I’m excited to be here. Thanks for having me.

Deirdre: Woo.

David: Alex, I would say, has been a longtime listener, but I’m pretty sure he’s too smart to actually listen to us. We have wanted to get him on the podcast for a while to talk about a number of things, and we conveniently find him at a break in his career, so we thought now would be a good time to talk about why in the world the FTC would want to hire some sort of technologist. So why don’t you say a little bit about yourself and how you ended up at the FTC?

Alex: Yeah, so I am by background, we’ll call it, a mixed-specialization software engineer. And from 2022 to mid-2024, I worked at the US Federal Trade Commission as the deputy chief technologist. How I got there: the shortest answer maybe is that I ran into somebody who’d been a colleague in my first stint in government back in 2016 or so, and she had just become the chief technologist at the FTC. And perhaps knowing me better than I know myself, she said, you’ve got to come here. I think you will enjoy this. This is an opportunity to do very interesting things. You’ll be good at this. And I had kind of just wrapped up a project I was working on, so I needed to find something new to do anyways, and I jumped in.

Thomas: So your first time in government?

Alex: No. So my first stint in government was 2015, 2016, kind of at the founding of the U.S. Digital Service, which was the group created after the healthcare.gov crisis and recovery efforts, with basically the mission of bringing in more folks who had hands-on, particularly private sector, technology experience to help the government deliver and operate its services better. I took a couple years away from the federal government in between and then came back in 2021.

Deirdre: The Digital Service did login.gov and things like that, right? They did other things, but that’s one of the things where I’m like, ooh, this is nice.

Alex: Yeah. Login.gov was a joint project between USDS and sister agency 18F.

David: What is the difference between USDS and 18F? Because I see them both on GitHub.

Alex: Yeah. So, I mean, you used the present tense, “is,” which I think is much harder to speak to.

David: Okay.

Thomas: In the past tense. It has a new name now.

Alex: Yeah, in the past tense. The difference I would have said is 18F overwhelmingly works on kind of greenfield projects that they are brought in and asked to work on, whereas USDS gets sent to places to make sure in-progress projects become successful. So this is sometimes described as: 18F is the vampire and USDS is the Kool-Aid Man.

Deirdre: Okay, Vampire. Okay.

Alex: Vampires have to be invited into your home.

Deirdre: I see. I was like, they suck the blood out of some executive branch place. Oh, that’s very.

Alex: They have to be invited in. Whereas like the Kool Aid man slams down the wall.

Deirdre: Oh, okay.

Thomas: Am I crazy or did USDS happen because of the Obamacare rollout?

Alex: Yes, yes. There is this kind of healthcare.gov catastrophe. A bunch of folks with private sector SRE experience come out to help put that together. And then, at least in the pop culture telling, the President says, why did none of these people work for me? And this is the opportunity for folks who were already in the government, who had been trying to get this thing off the ground, like Jen Pahlka, to say, yep, we’ve got the plans ready to go. Just give us permission to start hiring.

Deirdre: And we can talk more about this, but is that like the root cause of why healthcare.gov, the first version, was basically a shit show? Because it had bad site reliability engineering and load balancing and handling of traffic and all that sort of stuff?

Alex: I mean, root cause, maybe. It’s hard to, you know, if you did the five whys exercise, you’d be like 15 whys deep and probably still not at a singular root cause. But to the extent there is a single cause, it was that it was a system designed by dozens of contractors which had never processed so much as a test transaction before it went live. You know, it had never been load tested. It had been developed the way you might develop the landing page for some subcomponent of the government that posts a website with news announcements and bios for its executives, not the way you would architect a consumer service that does many transactions per second.

Deirdre: This hurts my soul.

Thomas: Yeah, just given the timing, I would have assumed that that would have been like one of the first times the U.S. government was ever in a position to stand up, you know, a continental-scale service, basically.

Alex: Yeah, so in many respects it’s kind of the first one, and particularly the first one that has that degree of public attention on it. Right. Despite Obamacare becoming law in, I want to say, 2010 or 2011, this website doesn’t go live until 2013. So it’s had this really extended rollout period in which, you know, the ACA is becoming more controversial. There’s just a lot more interest in it. Incidentally, the website launches kind of during Ted Cruz’s filibuster that shuts down the government. So first people don’t notice that the website doesn’t work, and then the filibuster ends and the website is still not working.

Thomas: The chronology of this is kind of breaking my brain. And like, I forget that healthcare.gov didn’t roll out until after the reelection.

Alex: Yeah.

Thomas: Anyways, FTC versus US Digital Service. What was the working environment?

Alex: Yeah, so, I mean, at USDS, the thing you were being asked to do was go work with different government agencies to help them deliver a software project successfully. You know, taking the combination of what you learned about what the government is trying to execute on, what the policy goal is, and what you know about delivering successful technology projects, mostly from a private sector background. In contrast, the role of a technical person at the FTC is not to help the FTC build software. It is to help the FTC’s leadership and attorneys and economists understand how technology works, and sometimes the dynamics of technology markets, in order to execute on the agency’s mission, which is consumer protection and competition law enforcement.

Deirdre: Can we talk Oracle v Google? Were you there for Oracle v. Google?

Alex: I was not at the FTC for that. I was very much following Oracle v. Google. And in fact, I organized an amicus brief of former government software engineers.

Deirdre: And for context, this was all about Android and Java APIs. And I think we technologists became very much fans of the trial judge, who taught himself Java and programming and had informed opinions about being able to copyright, and sue over, the API for an add function or something like that. But yeah, not that one. But other cases: is there a favorite FTC-related technology case or thing that you worked on while you were there?

Alex: So, I mean, it’s always hard to say favorite. I could pick maybe three favorites. One that I particularly enjoyed, that is maybe on topic for this podcast, is I worked on a case that the FTC ultimately settled against MasterCard, which was about the enforcement of something called the Durbin Amendment, which basically says if you have a debit card in the US, your issuer has to make it possible to process transactions on multiple networks.

Deirdre: Right.

Alex: So like you have a MasterCard brand debit card, but there’s also a second network on that card which is often referred to as the back of card network and it has to be able to process transactions as well. And what the FTC alleged is that basically MasterCard had made it more difficult to have transactions from a digital wallet like Android Pay or Apple Pay be processed on the back of card network. Because of how tokenization worked, your Android or Apple Pay wallet doesn’t actually store the credit card number, it stores a temporary token. And there is effectively no way for a back of card network to translate the token into the actual card number to send to your bank.

Deirdre: Oh, that’s fascinating. So something that, from my perspective, almost seems like a way to separate responsibilities and make sure that something is not compromised. Like, if this was broken on your phone, someone could steal your credit card number, and so the way that this works is sort of ephemeral use. Right. But that ended up breaking the law.

Alex: Yeah, I mean, it didn’t have to be. MasterCard already had systems where, for other forms of transactions, including tokenized transactions, back-of-card networks could call out to them and basically get the transaction verified and get the actual card number to send to the issuer. The issue was, for certain Apple Pay and Android Pay transactions, they did not. Which, in addition to this law saying there have to be multiple networks, it says if you’re a network, you’re not allowed to make it harder for somebody to have multiple networks. This is one of those cases that’s on my short list of things that were fun, because figuring out whether this violated the law involved a lot of learning about how debit card transactions are processed. Going to each step in the process and asking people, please explain to me. Merchant, Stripe, bank: explain to me how you understand how the data is flowing through the systems. And, you know, finding write-ups of the cryptography involved in these systems. Like, what is a token? So it was just kind of a lot of fun.

Alex: For that reason.
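For readers who want to picture the detokenization step Alex is describing, here is a minimal, purely hypothetical sketch. The names and data are illustrative only; this is not MasterCard's actual token service or any real payment network's API.

```python
# Purely illustrative sketch of network tokenization, not any real network's system.
# The wallet stores only a token; a "token service" can map it back to the real
# card number (PAN) so a network can route the transaction to the issuing bank.

TOKEN_VAULT = {
    # wallet token       -> real card number (PAN)
    "4111-T0K3N-0001": "4111111111111111",
}

def route_debit_transaction(token: str, amount_cents: int, network: str) -> str:
    """Detokenize and hand the transaction to the chosen debit network."""
    pan = TOKEN_VAULT.get(token)
    if pan is None:
        # If a network cannot detokenize, it simply cannot process the transaction,
        # which is the competitive problem described in the case.
        raise ValueError(f"{network} cannot detokenize this wallet token")
    return f"routing ${amount_cents / 100:.2f} on {network} for card ending {pan[-4:]}"

print(route_debit_transaction("4111-T0K3N-0001", 2500, "back-of-card network"))
```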

Deirdre: I didn’t even know there was a back of card thing.

Alex: Like so I mean there’s no reason to know unless you are basically a merchant. And merchants like this because it gives competition between the debit card networks and that gets them lower rates.

Deirdre: Yep. And everyone who has gone to a place where they’re like, no, we do not take American Express because the fees are too high can understand why you want multiple options to execute a transaction, because you don’t want to get locked in or locked out.

Alex: Yeah.

Deirdre: Do we have an inkling of how multiple implementations of this sort of solution, which is Apple and Android, or Android and Apple and Google arrived at the same solution, which is not, well, supporting the back of card network? Is it just like, oh, we didn’t realize and we copied each other or was there something more like let’s see if we can get away with this until we don’t have to?

Alex: Yeah. So I mean, I think Apple and Google were like ambivalent about this. Right. They don’t have a stake in this one way. But like my recollection understanding is Apple rolled out Apple Pay first and kind of got a handful of credit and debit card networks on. And they kind of, you know, they designed the interface kind of between themselves and not kind of what MasterCard or Visa’s relationship with other networks would be. I mean, it’s also the case that these systems all look slightly different in different countries. Like it’s.

Alex: There’s just kind of different network layers in different countries. Like I believe Canada has like approximately one network and like everything just goes directly to your bank, more or less. Okay, don’t quote me on that, but that’s my recollection.

Deirdre: Cool. All right, so that was one. Do you have another? The other two of your top three?

Alex: Yeah. So the other thing that I really enjoyed working on: the FTC does consumer protection cases which include data security, which is largely, though not exclusively, data breaches. And I got to work on a series of cases there, primarily focused on what the terms of settlements were. When companies settle these cases, they basically agree to injunctive relief, that is to say, relief that changes their behavior. And they agree that they will from now on run a reasonable security program that effectively protects consumers’ information: they will study the risks and implement the particular safeguards they find necessary. But also the FTC will enumerate, generally, half a dozen or so particular safeguards. And I worked across a series of those to introduce more modern best practices. Like, you will not just have MFA, you will have phishing-resistant MFA.

Alex: Right. You’re going to use something like, you know, we don’t write “WebAuthn,” and you could use X.509 certificates if you wanted, but it has to be phishing-resistant. There’s a case that requires the company to manage their infrastructure with infrastructure-as-code, basically, and actually get code review on infrastructure changes. It’s a whole series of these kinds of modernization things. I worked with attorneys to not just get these into individual cases, but really get them into the toolkit for: when this is on point for the case, this is the best-in-class thing we’re looking for.

Thomas: These are like the commercial equivalent of consent decrees, then? They’re like negotiated settlements enforced by...

Alex: Yeah, yes. You know, the FTC occasionally litigates these cases, but the vast, vast majority of them are settled between the FTC and the company.

Deirdre: Do you feel strongly that those are being well enforced and well followed in practice?

Alex: I think so. There’s kind of a non-trivial lag time between when a thing ends up in a consent decree and when you would actually find out if they’re not following their obligations. But there was some good anecdotal evidence that people were paying attention. Just in the course of negotiating one of these, before the ink was even dry, one company was explaining to us that they’d purchased, you know, tens of thousands of YubiKeys.

Deirdre: Yeah.

Alex: So, you know, I think at the very least that company got the message.

Thomas: Purchasing the YubiKeys is one thing.

Deirdre: Well, hey, like come on, this is exciting, this is nice.

Alex: Dare to dream they actually deployed them.

Deirdre: Yeah. Okay. And what’s number three?

Alex: Number three was probably the FTC’s lawsuit against Meta alleging unlawful monopolization, and that was particularly interesting. So that case has a ton of deeply technical economic questions about what market Meta’s products are in. But if you’re defending one of these cases, you can assert what are called pro-competitive benefits or justifications, basically saying, notwithstanding the bad stuff you accused me of, actually this was good for competition. And in this case in particular, Meta argues that their acquisitions of Instagram and WhatsApp were necessary for those companies to be able to scale, because of the infrastructure that Meta provided to them. And so that was just a ton of fun to dive into. Okay, well, what were these companies, now kind of a while ago, what was their pre-acquisition infrastructure like? What was their migration to Meta’s infrastructure like? What kinds of metrics exist? And I’m a little limited in how much I can talk about this, it’s still in trial now, so there will be more things in the news in the weeks to come about this. But yeah, that was just a blast. And particularly, Instagram’s tech stack at the time of the acquisition, and I mean through to today as far as I know, is Python and Django, which are really...

Deirdre: Yeah, holy shit.

Alex: Which are, I’m mostly emeritus, but I was a core developer of both of those at the relevant times. So it’s cool to be looking at probably the largest Django website on the Internet.

Thomas: Yeah, I was on a team that did an assessment of that code base a long, long time ago, so.

Deirdre: Of the Instagram code base. Yeah.

Thomas: This is a thing I don’t understand about the discourse about the trial here: Instagram, when Meta bought them, is not Instagram now. They’re different products. There was a nascent, you know, social network thing going on with Instagram when they got bought. But they are a social network now, right?

David: Like, sure.

Thomas: And to the extent that happened, Meta made them that.

Alex: I mean, the exact facts of these are just a huge question in the case. What I would tell you is, so I’ve been sitting in court every day for like the past three weeks, kind of just listening to the arguments.

Deirdre: Because you’re out of the FTC now, so you’re no longer there. Just a regular dude hanging out in court.

Alex: As one does. As one does.

David: But I, who knows who you might see in that courtroom?

Alex: I think the through line is basically: Meta was very much struggling with getting Facebook to be popular on mobile in roughly this 2011, 2012 period. Like, you may recall, they had initially rolled out an HTML5 mobile app that was slow and not a positive user experience. And so this provided a window for Instagram to build an alternative social platform on mobile that was really mobile-first, that used photos as the centerpiece of communicating with your friends and family. And Meta acquired them in an effort to prevent them from doing that. I mean, it’s interesting that you say obviously Instagram is a social network today, because a key thing Meta argues in this case is that people maybe think of it that way.

Alex: But actually what Instagram is mostly is it’s like a video platform. Like you go there to watch videos the way you might on TikTok or YouTube shorts.

Thomas: Well, that’s not true.

Alex: All right, well, you can, you can let them know.

Deirdre: Yeah. One of the things that I find interesting about the “Instagram wouldn’t be what it is today without Meta” argument is that the advertising solutions that Meta, or then Facebook, brought into Instagram the platform seem to actually have really mattered. Because you could make an argument that Instagram would have been pretty big even if they hadn’t been acquired, if they figured out their infra and they were able to scale and all that sort of stuff. But it does seem that that is one of the things that really made a difference, at least from the business side and the user experience: the advertising solutions and stuff like that. So I find that kind of plausible. The other stuff, I’m just sort of like, yeah, maybe.

Alex: So the ability to monetize via advertising is definitely another thing Meta has claimed as a benefit of the transaction. It’s interesting. Kevin Systrom, who is one of the two founders of Instagram and was the CEO, testified the other week that they had basically shown designs to the board for advertising on Instagram pre-acquisition that look remarkably like Instagram’s ads today. So my impression is, at least in his mind, they would have been able to execute on this stuff the same. From a consumer perspective, one thing I think the FTC has been pushing is that by being owned by Meta, and by together forming a monopoly in what the FTC calls the personal social networking services market, Meta is able to raise what’s called the ad load, basically the percentage of items in your feed that are ads, higher than independent services that have to compete for users would be able to.

Deirdre: What’s interesting is that they also acquired WhatsApp and WhatsApp is part of this case as well.

Alex: Correct.

Deirdre: It seems from the testimony so far, the founders of WhatsApp were like no ads, no, no ad injections ever. And so far that seems to have been the case. But not without the parent company of Meta really trying to figure out a way to make that work. And it still hasn’t worked yet. And so it’s hard to argue what the benefit has been for WhatsApp by being acquired by Meta.

Alex: Yeah, that’s a great point, because WhatsApp is a slightly different case. The FTC does not say that WhatsApp is a social network in the same way. What they say is, a thing that we had seen happen in this time period is several mobile messengers in Asia had pivoted from being just mobile messengers to also being social networks. So Kakao and Line and WeChat, all in different countries. And basically Meta was afraid that WhatsApp was positioned to do the exact same thing because of their giant and fast-growing user base. And so they acquired them to prevent them from executing this pivot. And yeah, with the WhatsApp founders, I think the testimony is pretty clear: they would not have wanted to do any of those features.

Alex: They did not want ads. And so part of the case is just, well, as an independent company, it would have had to make money eventually. Would investors have basically pushed WhatsApp to monetize in some way? Right. Develop new features, sell stickers, probably roll out more social features, because that gets you more time in app.

Deirdre: Yeah, and didn’t they used to have like a $1.99 a year or something like that? They had a really, really low service fee to use WhatsApp for unlimited messaging or data or whatever. And then they got rid of that.

Alex: Yeah, it varied a bit by country. Like my impression is they were kind of constantly experimenting with it, but at least, you know, in countries like the U.S. they were charging, I believe a dollar a year. And then Meta got kind of rid of that fee kind of across the board, which they claim is a benefit to consumers.

Deirdre: It is. And I wonder if this is also a function of the zero interest rate era that all of this happened in, and if this were another time and place, that just would not be a thing anymore. But on the argument about the everything-app future that WhatsApp could have had, a la WeChat and the others: I’ve heard an argument, independent of the arguments made in this case, that American tech app culture just completely splintered. You have an app for calling a car, your Uber; you have an app for ordering your food; you have an app for uploading your photos; you have an app for messaging your friends; you have an app for all the things. And that had already happened. And so the American tech user on mobile platforms just sort of didn’t care about an everything app, like you might in another country, another market. And therefore, maybe if you tried to make WhatsApp the everything app, or tried to make X the everything app, it just wasn’t going to fly, because everyone already has their app for that thing and they don’t care that much.

Deirdre: Maybe you get some usage. But now that I’m hearing the argument, it’s like, well, it might have been, and Meta bought it and stopped it in its cradle. And now I’m wondering what could have been.

Alex: Yeah, I mean, what could have been is kind of definitely at the heart of this case. It’s interesting, in a separate lawsuit the Department of Justice has against Apple, another antitrust lawsuit, the Department of Justice alleges that, like, Apple has basically, like worked hard to prevent everything apps because those would kind of compete with Apple’s control of the platform.

Deirdre: Fascinating.

Alex: Yeah, I am like less conversant in the exact market dynamics of WhatsApp than I am kind of the pure tech infrastructure pieces. But yeah.

Thomas: What is the deputy technologist role at the FTC with regard to these cases? What’s that work like?

Alex: Yeah, yeah. So, I mean, as a deputy CTO, my job was partially, you know, management, leading the team, hiring, all of that set of stuff, and partially being an individual contributor on these cases. Let’s focus on the IC part, because the manager part is probably not so different than being a manager anywhere else. As an IC on one of these cases, really it’s your job to understand what case the lawyers are trying to investigate or litigate, how technology interplays with that, and how you can use your understanding of the technology to better inform what the lawyers are attempting to do. If we think about the full lifecycle of a case: before an investigation has begun, you’ve got what we call target spotting, identifying things, whether it’s in the news or in security research, that may indicate possible violations of law. So you’re consuming your tech media and seeing, hey, from what I know about the law, which of these might be of interest to lawyers? Then as a case is going on, an early step is what’s called a CID, a Civil Investigative Demand. It’s a subpoena, right? You’re helping attorneys craft what types of documents we’re requesting, what questions we’re looking for answers to, and helping everybody use technically precise language.

Alex: So you’re getting exactly what you want. You’re not missing anything. You’re knowing to say, hey, we want this data from your issue tracker, not just your email inboxes, for example. You’re helping review those documents as you get them. Once you’ve made a decision that you think this does violate the law, you’re helping draft the complaint, making sure the technical points are accurate. In a settled case, you’re helping identify what remedies you want, what you’re asking for in the settlement. You’re often getting on the phone with the opposing counsel. Maybe in a security case, you might be on the phone with the company CISO, where they’re saying, hey, you asked for XYZ, but we can’t really do that for this reason.

Alex: And you’re saying, okay, if we tweak the language like this, would that work for you? I mean, you’re just working alongside the lawyers really through the lifecycle of the case to make sure it’s informed by, you know, what the technology is, how the technology works.

Thomas: For the Meta case, what does the FTC’s technical staffing of that look like? Not lawyers, but technologists who are actually guiding and reviewing discovery and things like that.

Alex: Yeah, so there were, at peak, two or three of us on the case, and every technologist on our team was on multiple cases. And then in the Meta case in particular, the FTC also has two outside experts they retained to, you know, testify in court, draft a report explaining their findings, and all that; you’ve written about going through this process recently, Thomas. So in that case there were outside folks as well, and in the Meta case we were also supporting the experts through the process.

Deirdre: So do you get to be one of the first people to get discovery or to see some of the technical discovery? Do you get to see source code and other cool, possible, juicy things?

Alex: I mean, there was a ton of interesting things. Occasionally you’d see source code, like in an email or in a chat message; it’s mostly not a lot of source code. But yeah, there’s just tons of fascinating emails debating the pros and cons of migrating between different internal databases and, like, if you’re a nerd about this stuff, lots of cool stuff.

Deirdre: Nice.

David: A lot of those emails and stuff end up being public eventually, because there’s that Twitter account, Tech Emails or whatever, that just posts random emails that have come out in court between, like, Steve Jobs and Larry Page at Google or whatever.

Alex: Yeah, the ones that are used in evidence in the case will mostly make their way into the public. But I mean, the Meta case has millions of documents that have been produced by both the parties and third parties. In antitrust cases, you have other companies that may or may not be kind of in the market also producing documents.

Deirdre: Right.

Alex: So, you know, you go through kind of a funnel of: all right, which of these are at all interesting? Which of these maybe support one side’s case? And then which will actually get used with a witness? A witness maybe will see like a dozen documents. So it’s a tight funnel. And then the public ones are obviously redacted if there are competitively sensitive things, or names and stuff like that. But yeah, so there’s a funnel, but in some sense the most interesting or most important ones, yeah, you actually get to see.

David: And then was there anything that you worked on where it was determined, oh, we want a technologist to look at this, and then actually, you know what, this behavior is fine, there’s nothing here?

Alex: Yeah, I mean, it’s, it’s difficult to, it’s much more difficult to talk about those because, you know, you might want.

David: To change your mind later.

Alex: Well, no, not even that. I mean, you think about this the same way a law enforcement agency does. You don’t talk about uncharged conduct, because that’s prejudicial. Right. It’s fine to talk about charged conduct, because the other party gets the chance to, you know, defend themselves in court; they get the chance to clear their name. But if you talk about uncharged conduct, then they never get to go to court and prove, like, I didn’t do that bad thing. So I can’t really talk about all the times we looked at a thing and said, you know, this doesn’t seem bad to us, or, more likely, working in collaboration with lawyers, decided this doesn’t seem bad.

Alex: But yeah, you know, there are absolutely cases where a better understanding of the technology or the norms or the market meant, you know, we would not be able to bring this case.

David: All right, is there anything else we should ask you about the ftc?

Alex: No, I mean, I would just say it was a fantastic job. If you are the kind of person who’s interested in the law and also interested in diving into different areas of technology, it is a fantastic job.

Deirdre: Nice.

David: So let’s talk about open source, because I think there’s a lot of things to talk about there, starting with how you have “alex” on GitHub, which I think is impressive, because I thought I was somewhat early on GitHub and I created “Adrian” in 2010, I think. But I believe you have me beat there as well, with just your first name.

Alex: Yeah, I guess all of my stories start with having friends who pointed out good things. I had a friend who, I mean, I’d been programming like a year at this point, who very much pushed me to apply for the GitHub beta. So, like, there was a beta.

Deirdre: Yeah, I didn’t know there was a beta, I think.

Alex: So there was definitely a point where you had to apply to get an account. Like, you know, there was a wait list. So that’s how I got “alex.”

David: And I think most people know you, or if they don’t know you, have interacted with you indirectly when they’ve pip installed cryptography, or perhaps pip installed certifi. Or, actually, you know, I’m kind of remiss to say pip, because every time I look at the Python ecosystem, the packaging story has changed. Maybe they’re py-something-ing now.

Thomas: It’s all uv now, right?

David: I’ve never even heard of that.

Deirdre: I’ve heard of that.

Alex: uv is kind of the...

Thomas: He looks so alarmed.

David: I feel like every day I spend as a product manager, I get a little bit dumber.

Alex: Yeah, uv is good stuff. It’s from the same folks who made Ruff, which is kind of the new, much faster linting engine. uv is kind of a reimagined Python packaging experience. It’s implemented in Rust. It’s very fast.

Deirdre: I like it.

Alex: Yeah.

Deirdre: We don’t know what Ruff is. I don’t write Python no more. That’s not true. I wrote some Python today.

David: Let’s loop back to Rust later. But first, so, like, what is the Cryptography package and why are you maintaining it? And are you a cryptographer?

Alex: I am not a cryptographer. I am somewhere on the spectrum of cryptography engineer, which is to say I think mostly about how these primitives are used, what kinds of APIs make it more likely for users to use them successfully, kind of the ecosystem and API design side of things. I’m very fortunate that my co-maintainer, Paul Kehrer, has a much stronger cryptography background, and he does understand what a lattice is, I’m pretty sure, whereas I do not. But so, what is it...

Thomas: Not know what a lattice is? No, no, dear Joe, he should know what a lattice is.

Deirdre: No, it’s fine. Like, secretly, a lattice is the thing you grow plants on. Yes. Here’s a secret: all the lattice-based cryptography that we’re using does not actually implement any lattices. So you’re fine, you’re good.

Alex: Outstanding.

David: I think we’ve had people who’ve literally published at Crypto on this show who have been like, I’m not a cryptographer.

Alex: So I am less of a cryptographer than they are, I’m prepared to say. But anyway, we aim to be kind of a general purpose cryptography library for Python. So, the place you would go if you want to encrypt something with AES, or do an ECDH key exchange, or also a lot of the file formats that exist in the cryptography cinematic universe, like X.509 certificates or PKCS#12 bundles, that sort of thing. And then under the same Python Cryptographic Authority brand, we also maintain PyNaCl, which is Python bindings to, well, nowadays it’s libsodium, but the library formerly known as DJB’s NaCl, pronounced “salt.” And we also maintain pyOpenSSL, for our eternal sins.

Deirdre: I didn’t know this.

Alex: Yeah, as well as the bcrypt Python library. But cryptography is kind of the flagship library, if for no other reason than it’s got the obvious name.
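For anyone who hasn't used it, this is roughly what the pyca/cryptography APIs Alex is describing look like in practice: a quick AES-GCM encryption and an ECDH (X25519) key exchange, with error handling omitted for brevity.

```python
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM
from cryptography.hazmat.primitives.asymmetric.x25519 import X25519PrivateKey

# Authenticated encryption with AES-256-GCM.
key = AESGCM.generate_key(bit_length=256)
nonce = os.urandom(12)  # 96-bit nonce; never reuse a nonce with the same key
ciphertext = AESGCM(key).encrypt(nonce, b"attack at dawn", b"associated data")
assert AESGCM(key).decrypt(nonce, ciphertext, b"associated data") == b"attack at dawn"

# An ECDH key exchange over X25519: both sides derive the same shared secret.
alice = X25519PrivateKey.generate()
bob = X25519PrivateKey.generate()
assert alice.exchange(bob.public_key()) == bob.exchange(alice.public_key())
```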

Deirdre: You’re doing more than I would take on as a software maintainer, especially all the X.509 stuff. And I think you have some more stories about being motivated to re-implement how these things are implemented, partly because of some of that sort of stuff. This is a great service, because this is stuff that people doing software, they’re just trying to get the cryptography part working as part of the rest of their software project. And all of us highfalutin crypto people who just want to do some nice math and make it go brrrr on the computer, we don’t want to deal with the X.509 part. But the real people just trying to deploy some shit that uses cryptography, they need to deal with things like X.509. So thank you for helping them do that.

David: Pre-cryptography, like, I remember it was circa 2011. I don’t know if you guys had started cryptography yet or if it was still in its early days, but trying to just make an HTTPS connection in Python was, like, impossible. Wheels didn’t exist yet, or barely existed, so you had to apt-get install OpenSSL and hope that pyOpenSSL worked, and you probably got GCC errors, and it was a mess.

Thomas: I have a question.

David: Much better now.

Thomas: Yes, I’ve been seeing this for like 20 years now. What’s a wheel?

Alex: Okay, so a wheel is...

David: I’ve asked Alex this before. The name itself is an inside joke.

Alex: It is a binary distribution of a Python package that is pre-compiled for some particular platform.

Deirdre: Okay.

Alex: So, you know, you pip install cryptography on macOS, Windows, or Linux, and you will get a .so that we have already compiled for you, and you will not have to build it on your computer.

Thomas: It just means you don’t need the deps to build the library.

Alex: Yeah, particularly you don’t need the system dependencies, you don’t need gcc.

Thomas: Gotcha.

Alex: Yeah. Historically, compiling Python extension modules is a very fragile thing, and so it’s a huge performance and user experience boon for the vast majority of users. Separately, I think there’s a big security question in, you know, shipping .so files, in that it’s effectively unauditable that they match up with their original source.
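As a concrete illustration of what a wheel is: the tags baked into a wheel's filename are what pip matches against your interpreter and platform. Here is a small sketch using the PyPA "packaging" library; the example filename is made up for illustration.

```python
# A wheel filename encodes interpreter, ABI, and platform tags; pip installs a wheel
# only if one of those tags is supported on your machine. The filename is illustrative.
from packaging.tags import sys_tags
from packaging.utils import parse_wheel_filename

name, version, _build, wheel_tags = parse_wheel_filename(
    "cryptography-44.0.0-cp39-abi3-manylinux_2_28_x86_64.whl"
)
supported = set(sys_tags())  # tags pip would accept for this interpreter/OS/CPU
print(name, version, "installable here:", any(tag in supported for tag in wheel_tags))
```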

Deirdre: But was there, and I might be confusing a debate about this on crates.io and Rust-land with Python-land, was there a big debate about this recently, or am I just... I think, was it for cryptography? For Python cryptography?

Alex: I don’t think there’s been a big debate in Python; the Python community has been pretty accepting of these. There was a fracas in the Rust ecosystem a year or two ago, because Rust does not have pre-built binary distribution as part of the crates.io ecosystem, but there was a package that basically, in their source distribution, was like, I have just put a .so in here and I will use it if it looks like the platform is correct for it. Yeah, as a performance optimization. Yes, there was much gnashing of teeth.

David: Well it, it violates their security model that if all CPU cycles are spent on the compiler, then none of it can be spent on an unsafe program.

Deirdre: There are lots of other formal methods languages that spend way more CPU than Rust does. I will not take this slander.

David: But wait a minute. What was the inside joke about the name?

Alex: So most Python naming is downstream of Monty Python. So PyPI, the Python Package Index, was originally known as the Cheese Shop. And you get wheels of cheese from the cheese shop.

Deirdre: Okay. I thought everything was someone just named something Python and then everyone went with the, you know, snake motif.

Alex: But no, apparently, no, it’s Monty Python.

Thomas: What’s funny is I was exactly this many minutes old when I got the Monty Python to Python thing. It just now occurred to me. Oh, that’s why they’re doing everything with Monty Python.

Alex: Yeah.

David: Oh, me too.

Thomas: It all makes so much more sense. It’s the worst.

Alex: Happy to be of service. Happy to be of service.

Deirdre: Okay, so we’ve touched on this, but when you started pyca/cryptography and took over some of this stuff, almost all of this was Python wrapping C, right? Or calling out to C. And now a lot of that has evolved away to a lot of Rust, right?

Alex: Yeah. So when we started cryptography back in 2013 or so, it was basically all wrapping what we called backends. At the time, we had this idea that we’d support multiple backends. So for a while we supported OpenSSL as well as CommonCrypto, the Apple system cryptography library. And yeah, those were all kind of bindings between Python and C. So it was technically all Python code, but it was the kind of Python code where you’ve got pointers and can have use-after-free vulnerabilities. Then, I think it was back in roughly 2021, we took our first steps of porting some of that binding code from a Python/C hybrid to Rust. And nowadays...

Alex: We dropped CommonCrypto, so we only have OpenSSL as a backend. But now the chain looks like: you’ve got Python calling into a Rust extension module, which uses the rust-openssl library, which still binds to OpenSSL at the end of the day, for all of the things we think of as cryptography proper. So AES, RSA, coming soon ML-KEM, you know, that family of stuff. But a lot of the things that we think of as more like file format parsing, like parsing X.509 certificates, a lot of that stuff we used to use OpenSSL for, and now we just have our own Rust implementations of it, including parsing keys from PEM files and things like that. We’ve taken that away from OpenSSL.
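From the Python side the API is unchanged; per the discussion above, certificate and key parsing now goes through cryptography's own Rust code rather than OpenSSL. A short example (the PEM file paths are placeholders):

```python
# Parsing an X.509 certificate and a PEM private key with pyca/cryptography.
# Per the discussion above, this parsing is handled by the library's own Rust
# implementation these days, not OpenSSL. "cert.pem" / "key.pem" are placeholder paths.
from cryptography import x509
from cryptography.hazmat.primitives.serialization import load_pem_private_key

with open("cert.pem", "rb") as f:
    cert = x509.load_pem_x509_certificate(f.read())
print(cert.subject.rfc4514_string(), cert.serial_number)

with open("key.pem", "rb") as f:
    key = load_pem_private_key(f.read(), password=None)
print(type(key).__name__)
```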

Deirdre: Nice. Okay. Because for some reason I thought it was literally like, we’re shipping all of it and not just linking out to OpenSSL for those parts, and you’d rewritten a whole pile of C with a pile of Rust. And it’s like, okay, well, we rewrote some of the most gnarly surface area that we control with some Rust. So, okay, I get it.

Alex: Yeah. So I mean, we have a lot, you know, tens of thousands of lines, or maybe 20,000 lines, of Rust code these days. So it’s non-trivial. But we’re still reliant on OpenSSL, and we will also support LibreSSL and BoringSSL and AWS-LC. But for now, all of our official distributions, and I’m sure 99% of our users, are kind of...

Deirdre: What about rustls?

Alex: Yeah, so, what about something else? So I think we have a lot of interest in what a post-OpenSSL world looks like for us. You know, rustls itself is mostly a TLS library, so we would need to use the underlying cryptography library, whether that’s RustCrypto or AWS-LC or something like that. We have a lot of interest in what something like that looks like. We’re not super happy consumers of OpenSSL. There’s a lot of logistics between here and there, and particularly figuring out how to manage the conflicting needs of many of our downstreams. Using OpenSSL as the lowest common denominator means, you know, AWS can just drop in AWS-LC and Google will drop in BoringSSL and Red Hat can use it with the system OpenSSL they’ve done FIPS certification on. And so any move we make away from this is going to be disruptive to somebody.

Alex: So we have kind of the question before us of how do we come up with a plan that mitigates that.

Thomas: Yeah. Is there any notion of upstreaming the 20,000 lines of Rust that you guys wrote?

Alex: Upstreaming to who? To OpenSSL?

Thomas: Right. The path that you guys are cutting through OpenSSL is disproportionately the stuff that people care about. Right. Like, the reason you’re doing this is because people make TLS connections from Python. It’s what everyone uses OpenSSL for.

Alex: Yeah, well, I want to hang a little asterisk on that, in the sense that I do not think we have a good understanding of what OpenSSL is used for. Sorry, I’m sure by volume the number one thing people do with it is they use it with Apache or curl to make or receive HTTPS connections. I’m sure that just has to be right. But by volume of participants in the OpenSSL ecosystem, there’s actually a huge amount of weight on other use cases; just in terms of who participates in the OpenSSL process, it is actually weighted, I think, probably far from people who just want to make TLS connections. But if the OpenSSL community were interested in upstreaming some of this, it’s absolutely a conversation we’d have. Part of it is we’ve made pretty different design decisions than them, and it’s not super clear how you map a backwards compatibility path.

Alex: Like, a fact of their X.509, and really ASN.1, APIs in general is that there are a lot of small allocations: you have all of these subfields on an X.509 certificate, many different tiny strings, and each one gets its own heap allocation. Whereas the approach we’ve taken is we just keep around a reference to the original byte string, and everything is just pointers back into that byte string, with a lot of confidence that that’s safe, because Rust. But it’s not super clear how you would navigate a backwards compatible path between those.

Deirdre: Got it.

Alex: But we’ve structured most of these things as, you know, standalone Rust crates in our source tree. Including, like, we have an X.509 path builder that’s implemented in Rust on top of these structures. If there was interest, we could make these things reusable and designed for somebody else; they’re kind of minimally structured for that.
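The path builder Alex mentions is exposed on the Python side via the cryptography.x509.verification module in recent releases; roughly, usage looks like the sketch below. The PEM file paths and hostname are placeholders.

```python
# A rough sketch of the Rust-backed X.509 path building exposed to Python via
# cryptography.x509.verification (available in recent releases of the library).
from cryptography.x509 import DNSName, load_pem_x509_certificate
from cryptography.x509.verification import PolicyBuilder, Store

def load(path):
    with open(path, "rb") as f:
        return load_pem_x509_certificate(f.read())

store = Store([load("root.pem")])  # trusted root certificates
verifier = PolicyBuilder().store(store).build_server_verifier(DNSName("example.com"))
# Returns the validated chain, or raises VerificationError on failure.
chain = verifier.verify(load("leaf.pem"), [load("intermediate.pem")])
print([cert.subject.rfc4514_string() for cert in chain])
```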

Deirdre: Now I will say, the fact that you have your own separate X.509 parsing stack and all these sorts of things will be good when the new ML-KEM and ML-DSA certificate stuff that is coming out of the IETF arrives: you get to make your own choices, independent of OpenSSL, about how you parse those. And that’s a good thing.

Alex: Yeah, I mean, if you look at the ML-KEM design discussion issue we have open, there’s pretty strong consensus that obviously the only formats you should parse are seed-only.

Deirdre: Yeah.

Alex: No, like expanded form.

Deirdre: Thank God.

Alex: Yeah.

David: A chunk of this code is not all just buried in the internals of the cryptography repo. It’s like alex/my-super-fun-asn1-parser or something like that.

Alex: Right.

David: Like, it is a Rust library. It’s just the stuff to jam it, wrap C around it, or wrap Python around it as well.

Alex: Yeah, there’s kind of a baseline there: our own Rust ASN.1 library that, you know, is on crates.io, that is designed for reuse, and, you know, Signal uses it, which is like one of the coolest things I get to say.

Deirdre: Oh, nice.

Alex: So, yeah, the part where we then use that to define X.509 structures is not on crates.io; it’s not structured for reuse at the moment.

Deirdre: Okay.

David: Are you getting paid for any of this?

Alex: Early on, Paul and I got a bunch of time from our employers to basically work on this, but it’s actually been quite a while since either of us so much as used cryptography at all for our day jobs, much less got paid to work on it. So this has been, for a while now, a labor of love. Well, a labor of love and a labor of ideology, in the sense that we’ve become a very popular thing and we take that seriously, both as an obligation to users, but also as an opportunity to improve things. We want to be the library that pushes forward what’s possible. And by being a very popular thing, we think we’ve made it materially easier for other projects to use Rust in Python extensions, for example, by being willing to take some of the user frustration and, you know, push through that.

Deirdre: And I think you experienced a lot of user frustration. Was it when you ported a bunch of stuff to Rust and you had to basically drop support on certain platforms, or something like that, with a major version or some version upgrade? And they were like, this doesn’t work anymore on my extremely obscure platform that hasn’t updated Python in like 15 years. And you had to just sort of be like, we’re sticking to our guns, folks. It’s going to be better for everybody.

Alex: Yeah. When we shipped the first release that contained any kind of Rust build integration at all, that was definitely a point of frustration for a lot of users. We get why people had the experience of breaking overnight. There were also subtle things, like an assumption that our versioning scheme matched semantic versioning, which it didn’t. So there was kind of a lot under the surface. Since then, we’ve worked through that feedback; in particular, Alpine Linux distributions got wheel support that did not exist at the time. That was probably the biggest cohort of users who went from “this pip installs correctly” to “what is this nonsense about not having rustc on my path?”

Alex: So since support was introduced for Alpine wheels, I think the experience overall has been better. And I think there is a recognition that for the more marginal platforms, your AIXes, your PA-RISCs, to a certain extent, if the maintainer of your platform is not willing to invest in it, you can’t really be expecting open source maintainers who have never so much as seen a system running on your CPU or your operating system to be the ones to do the work that IBM or whoever isn’t.

Deirdre: Yeah. It’s just the two of you, right? What’s your CI infra like, to try to maintain support on such a wide set of platforms? Or does PyPI take care of all of that for you?

Alex: Yeah, yeah. So we are huge believers in CI. We often joke that we’re mostly a CI engineering project that happens to have a cryptography library. Nowadays we’re kind of exclusively on GitHub Actions, entirely on their hosted runners, but we have, I think, roughly 60 builds in the matrix of different Python versions, different OpenSSL versions, different Linux distributions, Windows, macOS. It’s mostly just kind of a big matrix. And the worst part is we have lots of things that we would like to do that we haven’t done. Like, it would be very, very helpful to run builds in Intel SDE, which basically simulates different CPU feature sets, and make sure everything is correct on, like, do you have AVX-512 or not? Right.

Alex: All of those variations. Because part of this is, we just think we’re responsible for the end product. So we don’t take for granted that, you know, OpenSSL has implemented the primitives correctly. We take Wycheproof, this collection of test vectors, and we run it against every one of these build matrix entries, to make sure, you know, we’re trying to do our best to do right by our users.
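For the curious, running Wycheproof vectors against the library looks roughly like the sketch below. It assumes a local checkout of the Wycheproof repository and its published aes_gcm_test.json layout (testGroups of tests with hex-encoded fields and a valid/invalid/acceptable result); it is a simplification of what the project's real CI does.

```python
# A rough sketch of running Wycheproof's AES-GCM vectors against pyca/cryptography.
# Assumes a local Wycheproof checkout; the path below is a placeholder.
import json
from cryptography.exceptions import InvalidTag
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

with open("wycheproof/testvectors/aes_gcm_test.json") as f:
    suite = json.load(f)

for group in suite["testGroups"]:
    # The high-level AESGCM class only exposes 16-byte tags and standard nonce
    # lengths, so skip groups outside that (the real CI covers more via other APIs).
    if group["tagSize"] != 128 or group["ivSize"] < 64:
        continue
    for test in group["tests"]:
        key, nonce = bytes.fromhex(test["key"]), bytes.fromhex(test["iv"])
        aad, msg = bytes.fromhex(test["aad"]), bytes.fromhex(test["msg"])
        ct_and_tag = bytes.fromhex(test["ct"]) + bytes.fromhex(test["tag"])
        try:
            ok = AESGCM(key).decrypt(nonce, ct_and_tag, aad) == msg
        except InvalidTag:
            ok = False
        if test["result"] != "acceptable":  # "acceptable" vectors may go either way
            assert ok == (test["result"] == "valid"), test["tcId"]
print("all checked vectors behaved as expected")
```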

Thomas: One of the very few projects ever to have operationalized Wycheproof.

Deirdre: That’s not true. There’s a lot of projects that have operationalized that. You just might not hear about it, and it might be just one or two of the test vectors. Yeah, that’s actually interesting, because what happens if your dependency, which is mostly OpenSSL, breaks something and your tests against it don’t pass? Do you just turn off a feature, or do you ship with “this got broken,” or do you not ship? What do you do?

Alex: Yeah, so say a new release of OpenSSL comes out and it has something broken. This happens significantly less frequently these days, if for no other reason than we have a bot that has us testing nightly against whatever the latest head of OpenSSL is.

Deirdre: Oh good. Okay. Okay.

David: So like are you basically running CI for OpenSSL?

Alex: I mean, they have, you know, a non-trivial matrix of their own. I think we’re very disciplined in particular about adding regression tests, which I don’t think OpenSSL always is. And there’s a little bit of a snake eating its own tail, because they also run our tests in their own CI, but only on one platform. But yeah, in a situation where we identify breakage, the first thing is we’re not going to ship a release where, you know, our wheels statically link a broken copy of OpenSSL. So we’re not going to ship a release with that.

Alex: We’re going to report the bug to them. We’re going to try to get some action taken. If a bug fix release is not forthcoming or something like that, we might take the step of asking: can we detect the circumstances of this bug at our API layer and take some sort of corrective action, or error out early?

Deirdre: That’s good. So you have early detection and a constructive relationship with your dependency, and somewhat vice versa. So you’re able to avoid this as much as possible. That’s great. And now I’m looking at your CI and your custom runners and all of your infrastructure on GitHub.

Alex: Historically it was a lot more complicated. Before GitHub had hosted ARM64 runners, or before they had M1 CPUs, we ran our own; for a while we were running our own Kubernetes cluster of ARM64 Linux machines to provide runners.

Deirdre: And like a Mac mini in the closet or something.

Alex: Yeah, yeah. And then at various other points we’ve had, you know, kind of other hosted CI providers, but for the last couple of years we’ve settled on GitHub Actions.

Deirdre: They pretty much have everything you need. It’s pretty awesome. Thank you, GitHub.

Alex: Yeah, yeah. And they graciously provide us, you know, an increase on the standard amount of free concurrency. So we’re very... yeah. Otherwise, I think, running kind of 60 concurrent builds on every push.

Deirdre: Yeah.

Alex: Well, less so the money and more just like it would give us such a long feedback cycle on pull requests.

Deirdre: Okay.

Alex: That I think it would really hamper kind of motivation and productivity.

Deirdre: Yeah. So they, instead of you getting rate limited on the regular rate limits, they’re just sort of like letting you not get rate limited.

Alex: Yeah, yeah. I don’t know what the actual threshold is, but we have higher than normal free concurrency.

Deirdre: That’s awesome. So that sounds like a nice perk that you don’t have to pay for yourselves to run your open source project.

Alex: Yes.

Deirdre: Do you have opinions on paying open source maintainers to make sure that their open source projects stay maintained?

David: I, in fact, heard from Harvard Business Review recently that open source added over a trillion dollars of value to the economy.

Alex: So yes, I mean, I do have some opinions on this. I guess I have far fewer opinions on, if somebody would like to pay an open source maintainer, by all means, I’m not here to tell you otherwise. I am much more interested in thinking about this from the open source maintainer’s perspective, because I’ve been involved in open source for 15 years at this point, and for as long as I’ve been involved in open source, there’s been this open question of: why are more maintainers not paid by the companies that rely on them? Right. There’s so much value creation. Why is so much open source happening in people’s free time? You know, to some extent, I think, with time stolen away from their other hobbies or their family.

Alex: So that’s kind of been the question for as long as I’ve been involved in open source. I think you’ve seen different generations of thought on this. I wrote a blog post recently that tries to, well, I actually wrote two blog posts, one that tries to take a look at it from an individual maintainer perspective and the other that tries to take a look at it from more of a systems economics perspective. But I think it’s really important to recognize that very frequently we have trouble articulating what it is that paying maintainers would achieve. And as a result of that, it’s hard to get people to give you money if you don’t want it, or if you can’t articulate what it is for. And I think sometimes this is desirable from an open source maintainer’s perspective, in the sense that one of the things that comes with somebody giving you money is probably a certain degree of expectations about what you will do.

Alex: And for me, at least part of my enjoyment of open source maintenance is precisely that it’s much less constrained. If I am not feeling motivated, I don’t have to look at my project for weeks on end. If there is a feature request that I’m not happy with the API for, I can sit on that till we find an API that we’re happy with. I have a lot of flexibility to prioritize how I would like, and for me that makes it more sustainable. I think I’ll be maintaining open source software a heck of a lot longer as a thing that, you know, competes with my other hobbies than I would if I was getting paid for it and feeling a sense of obligation. So I think there’s a trade-off there that is sometimes unacknowledged.

Deirdre: I like this perspective, because I definitely feel those feelings of: this is my project, I'm not obligated to someone contracting me to do XYZ or whatever. I have more control, it's on my time, and I'm working on it because I like it. But then we fall into a bit of a tragedy of the commons, where it's no one's responsibility to make sure that this project we all benefit from stays maintained and well kept. Do you have suggestions about that?

Alex: I feel like I do not have a lot of suggestions about how to address that. The biggest gap, to me, is that we do not have a great sense of how to predict which projects have risk to their maintenance. Which projects would an injection of money actually materially impact in terms of how they are maintained? There's a tendency to treat all open source projects as if they're all very similar to each other, and I think that's mostly not correct; open source projects vary a lot. There's another huge problem, which is that for maintainers who have full-time jobs, an injection of a small amount of money, while it may be very appreciated and you can buy things with money, makes it difficult to buy a lot of time with less than full-time-salary-replacement levels of funding.

Alex: And I think there is a huge unresolved question of what you should do for an open source project where you would never say the subject matter justifies a full-time person on it. Maybe a stable compression format: that's probably not 40 hours a week of work. What is the appropriate way to fund that, even totally blue sky, with no ordinary commercial constraints? What's the right way to fund something where you wouldn't say this is a full-time person's work, and you can't buy half their time because they have a 40-hour-a-week job? I think that pushes you towards some of the all-or-nothing challenges that we have, and I don't have a solution to that.

Alex: But I think identifying the dynamics is helpful, because too much of the discourse is "companies are greedy" or "companies are mean," and less on what the systemic factors are and what the actual incentives are.

David: Well, we do have a solution to that that we can steal from Paul Graham, which is to fund open source in batches. Instead of paying one person to maintain one project, you pay one person to maintain 10 projects.

Deirdre: Oh.

David: Yeah, and then all of a sudden you're running a consulting business. Which is why I feel like this whole conversation about open source is like the "no take, only throw" dog meme. It's like: you can't get paid for open source, you just need to find the open source projects that are valuable to some set of people, and then work on them in exchange for money with the people they're valuable to. This is an option that is available right now to everybody, and we have friends who do this in various ways. It's just not every project all of the time.

David: And then, like you mentioned, you have someone asking shit from you, and that kind of sucks.

Alex: Yeah, well, I think it's important to distinguish between getting paid to work on the open source project itself versus getting paid to work in and around the project. Say you are the maintainer of a project and you would like to get paid by somebody who is a big user of it.

Deirdre: Yeah.

Alex: And maybe part of your time will be upstream bug fixes as relevant, but a lot of it will be helping people at the company use it, or making the company more efficient at using it, either in-house or as a consultant. There tend to be lots of opportunities to help somebody use Python better, or help somebody use Django better; you can definitely get paid to do that. I think of that as being fairly different from getting paid to work on the open source project itself.

Deirdre: The "help use it" side, as opposed to the "push commits to the project" side. Okay, yeah.

Alex: And there are people who square the circle by charging people to teach them how to use the project or whatever, and then they take their 30 hours a week of income from that and use their other 10 hours a week to work on the project itself. You can certainly do in-between solutions like that.

Deirdre: Yeah, and I've seen what David mentioned, which is basically a consultancy, but you care about the Go cryptography standard library or these other things, maybe OpenSSL or another library that people really care about. We don't just work on any one of those things; we work on an ecosystem of things, so it's not just one person getting a full-time salary to work on a single thing that doesn't need a lot of work every single week. It's people who care about the maintenance and who may care about: hey, we need ML-KEM in Go, who's going to do it, who's going to prioritize it, who's going to propose it, who's going to get it merged and code reviewed, and so on. That seems to be at least a functional paradigm. And it's definitely not "give $10,000 every quarter to the Go implementation of ML-KEM that lives on GitHub, and maybe the maintainer will have five hours every quarter to, I don't know, upgrade dependencies or something." But yeah, there's no one real paradigm that seems to be well established across the open source ecosystem.

Alex: Yeah, and I think Filippo, particularly with Go, is doing a lot of work to figure out the practical realities of what he calls professional maintainership. His is, I think, the experiment and model that, if it's successful, other people will see whether it's applicable to their ecosystems as well. For myself, like I said, I enjoy these things being kind of unencumbered.

Deirdre: Yes, me too, because I like my little repos that are just me and maybe some other people, and all the other stuff involves setting up a consultancy or a company and all that crap. Thomas?

Thomas: I don't know, speaking of encumbrance: obviously one of the reasons we would be interested in talking to you, besides the fact that you're one of our spiritual advisors, is that, you know, four odd years from now, when President Pritzker is sworn in, you could be the chief technologist at the FTC, right? Like, you could go back and do that for real. And my question is, if you were in that role, if you were leading tech policy at the FTC, would you support a rule that says you can't upcharge for security features?

Alex: Would I? That's a great question. I love that question, because I think it pits two strong intuitions against each other. One is the strong intuition that we like security features, we would like more people to use them, and charging for them reduces their usage. Getting more people to have SSO, for example: we will get more people using single sign-on if we do not charge for it. Against that is the second intuition, which is that SSO is kind of mandatory in some sense. If you are running a SaaS business with enterprise customers...

Alex: ...you probably have to have it; you will have to develop it. But there are lots of security features that are basically discretionary: you will add them based on whether you think you can get new sales from them, and they will simply not be developed if you cannot charge for them. So I do not think a blanket "you can't charge for security features" rule could possibly be right. I do wonder if there is not something for certain baseline features, like MFA and SSO, and maybe literally only those two, where you want to push. If you think about what a national cyber...

Alex: God, I can't believe I just said "national cyber." A national information security goal would be something like: we want basically all businesses using single sign-on, whether they're small businesses or whatever, and that will only happen if we drive these costs way down. So I would love less interventionary policy ideas for how you make it so every small business's, like, Quicken account has single sign-on. I would like something less interventionary for how you solve that problem.

David: You kill SAML. That's how you do it.

Alex: Everything is cheap.

David: SAML is expensive.

Thomas: I don't think that actually solves any of the problems. I think smart organizations are already killing off SAML themselves. I find most of the discourse about this topic to be kind of blinkered. One way to look at it is that people are charging for SSO and that retards uptake of SSO. The other way to look at it, though, is that people paying for SSO are subsidizing smaller customers. There's really not much of a difference.

Thomas: Right. Especially when, yeah, I think you yourself kind of acknowledge it. The reason there's a SAML upcharge, or not a SAML, an SSO upcharge. There should be a SAML upcharge.

Alex: I agree.

Thomas: But the reason there's an upcharge for SSO has nothing to do with support or cost basis or anything like that. It's a really pure customer segmentation thing. The cohort of customers that you want as an enterprise company, that cohort is almost uniformly required to have SSO.

Alex: Right.

Thomas: So the right way to look at it is that the account tier that has SSO is what the service actually costs, and every other account tier is not the real service.

Alex: Yeah, I think that's just descriptively correct. Sorry, not just: it's clearly descriptively correct that this is a customer segmentation mechanism, and the free-tier users who have effectively the same product are being subsidized. But if you're looking at it as a policy outcome, does it produce the outcomes we want? I think you get suboptimal outcomes.

Thomas: Is there evidence for the suboptimal outcome? Like, if you look at the last 10 years, given the SSO tax, have we seen decreased uptake of SSO as a result of it? Is there evidence for that?

Alex: I can't point you to a study, but I have a fair number of friends who work with small businesses, and a lot of their customers are roughly in the position of: we are not going to spend $3 extra a month per user. And that's the optimistic case; there are a lot of SSO taxes that are, we'll say, significantly more segmented than that. So it would be utterly shocking to me if, not just on the margin but on the whole, there was not significantly reduced single sign-on. Like I said, I would like something less interventionary than "it's illegal to charge for this," because if you make it illegal to charge for this, you get less of it; that's kind of economics 101. So I am far more interested in what holistic policy changes one could pursue on how we price the externality of bad data security, compared to what we have right now, and then let the chips fall where they may in terms of what protections companies choose to implement.

Alex: I think right now you have the problem that the cost of bad security is very difficult to predict. The through line between "if I don't do this, I will have a breach of this severity" and "what will my punishment for that breach be" is insufficiently predictable for people to invest intelligently.

Deirdre: And I hate that, but it's definitely true. You can make an argument that this could be very, very bad, like a breach that happened over there to a similar company in your market with a similar-size customer base and similar kinds of data, but you can't say that deterministically with any kind of confidence.

Alex: Yeah, sorry, I was gonna say: even if we stipulate "if I don't do this, I will have a breach," what the consequences for my breach will be are, I think, far too unpredictable. So I think policies that better force companies to internalize the costs of bad security are much better, and then we let the chips fall: let that incentive guide how companies choose to invest. If they still don't want to buy SSO, they don't buy SSO.

Deirdre: What's an example of a policy that internalizes those costs, as opposed to externalizing them onto users?

Alex: Yeah. So the policy I'm very interested in is basically a strict liability regime for data breaches. Basically a law that lays out a strict formula: if you have a data breach, we look at how much data was breached, what types of data elements, how many rows of data, and here is the price for this, and it's close to trivial to go into court and get that penalty. There's a really cool economics paper from, I believe, George Mason University, in a Michigan law and technology law review, for David of course, and they run through the economics of it. But basically the thesis is: if the company knows what it will cost when they have a breach, they know better than anyone else what kinds of safeguards are likely to impact their risk. They know way better than some external regulator does.

Alex: They know what will be good for their company, and if they know what the price is, they can act efficiently. That's basically the core thesis.
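
To make the "strict formula" idea concrete, here's a toy sketch of what such a schedule could look like. The categories, per-row prices, and function are invented purely for illustration; they aren't drawn from the paper or any real proposal:

```python
# Toy illustration of a strict-liability breach schedule: a fixed, published
# price per breached record that varies with how sensitive the data is.
# All categories and dollar amounts here are made up for illustration only.
PRICE_PER_ROW = {
    "email": 0.50,
    "password_hash": 2.00,
    "ssn": 25.00,
    "health_record": 50.00,
}

def breach_penalty(breached_rows: dict[str, int]) -> float:
    """Penalty = sum over data types of (rows breached * price per row)."""
    return sum(PRICE_PER_ROW[kind] * rows for kind, rows in breached_rows.items())

# A company that leaks 1M emails and 100k SSNs can price the downside up front
# (here: $3,000,000) and weigh safeguards against a predictable cost.
print(breach_penalty({"email": 1_000_000, "ssn": 100_000}))
```

The point isn't the specific numbers; it's that the penalty is mechanical enough that a company can price its own risk, which is the predictability being argued for here.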

Deirdre: What if imposing an SSO tax is a multiplier on the data breach penalty, or something like that? Or the lack of phishing-resistant authentication?

Alex: Yeah, I think there are probably lots of interesting variants, like "you didn't have this security mechanism." I've also heard people talk about: if the data is really old and you didn't have a good reason for retaining it, that's some sort of multiplier. I think lots of these are potentially very interesting. I think about two things. One is just how administrable the system is: is it predictable to the end user, and is it straightforward for a government agency to actually bring cases based on it? You don't want the situation where bringing these cases takes a lot of staff time, so you only do it sometimes; that's inefficient from a company's pricing-their-risk perspective. The second factor I think about is: probably all of us on this call, and probably all of the listeners, have an incredibly strong intuition that phishing-resistant MFA is off-the-charts good ROI in terms of reducing your breach risk. The concern I have is that if you put that into law, you get kind of a distortion effect.

Alex: Right. Companies have a stronger incentive to patch that risk than they do some other risk. Let's say we put a 5x, some huge multiplier, on not having phishing-resistant MFA: then somebody has an incentive to do that even if there's something else that is two and a half times more likely to actually happen.

Deirdre: Yeah.

Alex: You get these distortions, and maybe sometimes you want that distortion, because it causes the entire market to coalesce. Everyone always has WebAuthn, and the ecosystem effect is worth it.

Deirdre: The consumers are more used to it.

Alex: Right, employees. Yeah. Maybe, I don't know. Clearly sometimes the value of the ecosystem effect may be worth the distortion. But I guess I am nervous about reaching for that thing always.

Alex: I think the best incentive is when companies care about the on-the-ground realities and not the 50,000-foot view across everyone. I think we've all probably been in the situation where some regulatory requirement or some policy says you need protection X, and you're thinking: this protection makes zero sense in my context, why are you wasting my time? And the answer, in the best case, is that person saying: yeah, I get that in general it doesn't apply to you, but we want everyone to have this because it's easier to manage our across-the-board risk that way. But as a person in that circumstance, you're incredibly frustrated that you're basically being asked to spend time on things everyone agrees are not a good use of time for you, even if for somebody else they are.

Alex: I don't think people really talk about this, but that's a bad user experience for a regulation. That is an experience that will breed resentment toward the regulation.

Thomas: That means bad rx.

Alex: Yeah.

David: So when you look at the SSO tax, do you see a wall of shame, or do you see a leaderboard?

Alex: I do not see a leaderboard. I have also been the guy responsible for security at a small to medium sized company, frustrated that I was going to have to justify some budget difference because we were trying to take security seriously but we were also small. So I think by far the best outcome is for companies to give a little bit: have a free tier that covers, you know, your Okta and your G Suite OIDC.

Alex: Find the slice that still gets you most of your customer segmentation, but give a little bit for free. I think that's just such a pressure release valve for any sort of policy intervention.

Thomas: That's basically what we do: Google and GitHub for free, and then it's paid if you want to do something else.

Alex: Yeah, yeah. And, you know, we talked about competition to start; I worry about encouraging only the largest OIDC providers. But that's the best compromise I've got standing here today.

Thomas: I like discouraging people from using small OIDC providers.

Deirdre: Yeah, yeah. Because I'm like, "you don't support Okta?" and I'm like, yeah, okay. Last thing: can you tell us about this little website that was very popular in late 2020 that scraped the New York Times poll results data?

Alex: Yes, yes. So people with long memories will remember that the 2020 election did not end on election night, as is kind of traditional for almost every American election, or...

Deirdre: At least abstracted that way.

Alex: Yeah. So the thing that was created, and I think several of you contributed, was basically an alternate viewer. We scraped the New York Times data every couple of minutes, put the raw election result data in a GitHub repo, and then had an HTML page that rendered it out and had what at the time were some novel statistics. The most important of these was the hurdle, which was a measure of: of the estimated remaining votes to be counted, what percent would the currently second-place candidate need to capture in order to win? And an important fact about the 2020 election is that, because of different state laws, there was a really high correlation between vote counting order and partisanship. You had states that counted their in-person ballots and their vote-by-mail ballots in basically different orders.

Alex: And so this hurdle was a really effective way to see how close the race actually was, because you'd have races that were 50% counted, and where they stood at 50%, if you just looked at the raw numbers, was very unpredictive of who was likely to win in that state. So we put this thing together over the course of election week. Many folks contributed, and I think it kind of went viral on the Internet. It was a useful experience; we got a lot of positive feedback. My personal favorite was a local Georgia news station that emailed me to say this was incredibly useful to their local election coverage.
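
For the curious, the hurdle reduces to a one-line formula in the two-candidate simplification. A minimal sketch follows; the function and numbers are illustrative rather than the site's actual code, and, as a listener's issue pointed out later in the episode, third-party votes complicate the real calculation:

```python
def hurdle(leader_votes: int, trailer_votes: int, est_remaining: int) -> float:
    """Of the estimated remaining votes, what share must the second-place
    candidate win to catch the leader?

    Two-candidate simplification: if the trailer wins a share x of the
    remaining votes and the leader wins the rest, catching up requires
        trailer + x * remaining >= leader + (1 - x) * remaining
    which solves to x >= 0.5 + margin / (2 * remaining).
    """
    margin = leader_votes - trailer_votes
    if est_remaining <= 0:
        return float("inf") if margin > 0 else 0.0
    return 0.5 + margin / (2 * est_remaining)

# Trailing by 100,000 with 400,000 votes estimated outstanding:
# the second-place candidate needs 62.5% of what's left.
print(f"{hurdle(2_450_000, 2_350_000, 400_000):.1%}")
```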

Deirdre: Yeah.

Alex: And that was great.

Deirdre: And of course Georgia was one of the ones that turned blue, and it was a whole big deal. And then eventually they had Senate runoffs and a bunch of other shit.

Alex: Yeah, yeah. Georgia was one of those close states that took quite a few days to count. We had a GitHub issue tracker where people were filing issues. My personal favorite is somebody files an issue and they're like: I don't really know how to program, but here is this LaTeX math paper explaining why your hurdle calculation is wrong in the face of third parties. And it's a fantastically worked-through math problem. So it was a very positive experience.

Alex: It was a much better thing to throw yourself into building than actually refreshing the New York Times website. This was much healthier.

Deirdre: Yeah. And I think the New York Times, and maybe other election tracking sites, borrowed the whole hurdle notion, or they might have called it something else. But whatever they called it, it literally showed up, and it was not anywhere before, is what you're saying.

Alex: Yeah, yeah. When the New York Times had their pages for those Georgia Senate runoffs, they also had kind of the hurdle metric. I don't know if we've seen it in elections since; now that we're not in peak Covid, there's not that kind of correlation with vote by mail. I believe the hurdle was your invention, Thomas.

Thomas: I remember it; I don't have much more to say. I sometimes wonder if I invented it or if we were all just talking about it at the same time. But I remember it, and I like the hurdle.

Deirdre: Yeah, yeah, yeah.

David: I also seem to remember that at some point we got a screenshot of a MySQL database at GitHub that showed that the GitHub Pages site for the Alex News Network was far and away the most popular GitHub Pages site, like, ever at that point.

Alex: I do not remember that. I do remember at one point we built a system where the page would hit the GitHub public API to know if there was new data and automatically refresh. We definitely got somebody who showed up and was like: hello, I am a GitHub SRE, you are melting this API, you are some outrageous percentage of the traffic to it, can you please hit this other endpoint that's cached? And we were like: oh yes, we can do that.

Deirdre: Like, yes, we will gladly hit the read-through cache, thank you, we didn't know about that one. No, I remember that. Because the first versions of the website were really, really dumb. It was literally just: it would scrape it, and it would reload and hit GitHub every time, and we were using Git as the index for the database, or, you know, as the database.

Deirdre: And then it became smarter and better architected as it became a little bit more popular.
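
We don't know exactly which endpoints were involved here, but the general shape of the fix is worth sketching: poll for changes with conditional requests so an unchanged repo returns a cheap 304, and fetch the data file itself from the CDN-backed raw URL rather than the API. A rough sketch, assuming a hypothetical repo and file name:

```python
import time
import requests

# Hypothetical repo and file names; the real project's layout likely differed.
API_URL = "https://api.github.com/repos/example-org/election-results/commits?per_page=1"
RAW_URL = "https://raw.githubusercontent.com/example-org/election-results/main/results.json"

def poll_forever(interval_s: int = 120) -> None:
    etag = None
    while True:
        headers = {"If-None-Match": etag} if etag else {}
        resp = requests.get(API_URL, headers=headers, timeout=10)
        if resp.status_code == 200:  # a new commit landed
            etag = resp.headers.get("ETag")
            data = requests.get(RAW_URL, timeout=10).json()  # served from GitHub's CDN
            print("new results:", len(data), "rows")
        # 304 Not Modified: nothing changed, and the conditional request is cheap
        time.sleep(interval_s)
```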

Alex: Yeah. And I was satisfied with the product experience of "there is a .txt file and I run git log to see the revisions." But wiser heads prevailed, and there was an HTML page with a table element, and that was better.

Deirdre: Well, we just wanted to let people know that if they ever used that website in the stressful days of November 2020, that was basically you that made that work, and we all just jumped in and scurried around.

Alex: I think in lines of code, I wrote a tiny, tiny fraction of that. I get credit for starting the GitHub repo and maybe helping project manage a little, but I think a lot of other folks did almost all of the work.

Deirdre: Well, we call it the Alex News Network for a reason, so. All right.

David: Well, Alex, thank you very much for joining us. And we'll not talk about what you're doing next, but we'll assure all of our viewers that you're employed, and they've missed their opportunity to hire Alex.

Alex: Well, thank you for having me. This has been a lot of fun.

Thomas: Thank you very much, Alex.