Python Cryptography Breaks Up with OpenSSL with Paul Kehrer and Alex Gaynor


The Python cryptography module, pyca/cryptography, has mostly been a sane wrapper around a pile of C, so that users get performant cryptography on the many, many platforms Python targets. Its maintainers, Alex Gaynor and Paul Kehrer, have therefore become intimately familiar with OpenSSL. Recently, after many years of trying to make it work, they announced that pyca/cryptography would be moving away from OpenSSL for new functionality and exploring other backends instead. We invited them on to tell us what has happened to OpenSSL, even after the investments and improvements that followed Heartbleed. No guests on this pod represent anyone besides themselves.

Watch on YouTube: https://www.youtube.com/watch?v=dEKBHI3rodY

This rough transcript has not been edited and may have errors.

Thomas: It’s 2026. It’s like the most scrutinized C code on the planet. All of, like, the low-hanging-fruit memory corruption in OpenSSL is gone now, right? Like, how likely is it there was one today?

David: Literally today!

Deirdre: Hello, welcome to Security Cryptography Whatever. I’m Deirdre.

David: I’m David.

Thomas: I am a national spokesperson for the Crystal Hot Sauce company.

Deirdre: Awesome. That’s Thomas. And we have two special guests today. We have returning champion Alex Gaynor. Hi, Alex.

Alex: Hello. Thanks for having me a second time.

Deirdre: Yes, thanks for coming back. We didn’t scare you off. And our other collaborator on the Python Cryptography library, Paul Kehrer. Hi, Paul.

Paul: Hi everybody. I’m just here on Alex’s shirt tails.

Deirdre: Yes, we invited Paul and Alex today, today because they wrote a little blog post about OpenSSL and they gave a talk about OpenSSL a couple of weeks ago on the same topics. And they have a lot of experience trying to operate around Open ssl. As maintainers of the, the Python Cryptography Library, we say the because it’s literally called cryptography. And so if you want to redo anything with cryptography in Python, theirs is usually the one that you go for. Alex, tell us what prompted you to write this statement, which is also the first statement from the Python cryptographic authority, aka YouTube.

Thomas: Before you do that, can I ask one or both of you to explain to, you know, the world why pyca/cryptography is, you know... like, why you guys have standing to make these complaints? Like, who do you guys think you are?

Paul: Sure, I’ll start with that, and Alex can jump in as he sees necessary. So the Python Cryptographic Authority is a self-proclaimed authority. It’s important to note that inside the context of Python there used to be like a running gag where you would create an authority to define basically the GitHub namespace that you were going to use. So the Python Packaging Authority, which became a very official concept in Python, was originally actually the creation of a single person who wanted to go and do some work. Similarly, the Python Cryptographic Authority was founded in kind of an aspirational manner back in 2013. Like, Alex and I were working for the same employer back then, and we looked around and discovered that there was not really a good solution for cryptography in Python. In Alex’s case, he was also interested in PyPy support, a thing that he regrets to this day. But the outcome of that was ultimately that we, you know, we’ve embarked on a 13-year-and-counting adventure of taking over the world of cryptography in Python, and largely we’ve been successful, mostly because we have a...

I mean, I’m sure we’re going to get into this, but we have a somewhat maniacal focus on the way in which we deliver the software and the way in which we. The expectations we set for ourselves such that we can and do have higher expectations for the things that we depend on.

David: I can’t emphasize enough how, in like, pre-13-years-ago, so like 2011, if you tried to make an HTTPS connection from Python, it was effectively impossible. You at best, like, had to install PyOpenSSL, which didn’t have wheels, so it had to build OpenSSL from source every time. And of course OpenSSL would fail to compile, because it was 2011. So it was just bad.

Deirdre: And for those who don’t ship a lot of Python libraries, wheels is a technical term of art. It’s not just, haha, wheels included, like batteries included. What are wheels?

Alex: Yeah, wheels are Python’s binary package artifacts. So you can have an sdist, which is source that you compile yourself, or a wheel that’s precompiled by somebody else. So we build wheels of cryptography for many, many operating systems and CPU architectures.

Deirdre: This is a language called Python. Why isn’t it called like eggs? Or like.

Paul: They were called eggs.

Deirdre: Oh, God damn it. Okay, and now they’re called wheels because that makes sense. All right.

David: Alex is the first person to be asked the same question twice on this podcast, in two different episodes.

Alex: Yeah, you get your wheels at the cheese shop, which... the Cheese Shop was the original name of the Python Package Index.

Deirdre: What? Okay, all right.

Paul: It’s because Python was named after Monty Python, and “Cheese Shop” is a famous sketch.

Deirdre: Now I’m remembering our whole episode where we did this whole rigmarole with, like... no, we’ve been going strong for several years now. Forgive me for forgetting. But okay, so I was going to...

Thomas: I was going to say, right, so, like, the situation is that your biggest dependency is OpenSSL, right?

Paul: Correct.

Thomas: Yeah.

Paul: Right.

Thomas: And like, in the universe of things that use OpenSSL, that’s gotten a lot more complicated over the last 10 years, as Chrome has shifted to BoringSSL, AWS now uses their own kind of formally verified SSL, and OpenBSD has LibreSSL. Like, in the universe of OpenSSL consumers, where do you guys fit in? Like, what’s your rank?

Alex: I don’t know that there’s an official leaderboard of, like, OpenSSL consumers, but I guess we have a lot of standing, at the very least in the Python ecosystem. We’re consistently one of the most downloaded packages on the Python Package Index: many millions of downloads. Like, if you use a Python thing, it’s quite likely you rely on us. All of the major clouds’ command line interfaces include us. The Certbot Let’s Encrypt client includes us.

Thomas: Oh, hey. Yeah, in the same sense as, like, if you’re writing a Go program and you use net/http, you’d be pulling in Go’s TLS libraries. In Python world, if you’re doing the equivalent, if you’re doing requests or whatever, to do TLS you’re effectively pulling you guys in?

Alex: No, so TLS is the one thing that is, I think, in the usual cryptography toolkit that you probably would not pull in cryptography for. We don’t have TLS APIs. You’re much more likely to either use the Python standard library ssl module or PyOpenSSL, which we also maintain, but is a separate library.

Deirdre: Oh, wow. And just for the record, why aren’t you just writing your Python cryptographic library in Python and shipping Python?

Paul: That would be very convenient. Unfortunately, it would have both security and performance implications that are effectively insoluble inside Python the language.

Alex: I will also say when we started cryptography, you know, 12 or 13 years ago, we made what amounts to like a very ironic deal, which is like I was persuaded that like I would help coordinate this library as long as the one thing we were doing was not implementing cryptography. We wanted to think hard about APIs. We wanted to think hard about what made a safe default and what shouldn’t have a default at all. We wanted to think hard about testing, but we were not trying to do something where we felt like the relevant expertise was like really outside of, you know, what a person might have if they didn’t have like a PhD in cryptography.

Deirdre: And you’re talking like low level primitives because like we can have a wonderful debate about what counts as implementing cryptography. But yes, I know what you mean.

Alex: Yeah, I would say about six months after we, you know, had this conversation where we’re not implementing cryptography. One of our maintainers at the time, I guess this would have been right after Heartbleed came up and said we should implement tls. And I said, what happened to our deal? And he said, TLS is a network protocol, it’s not cryptography. Sure. And I think that’s one of the. I think it’s like a very true statement that like really rides the line.

Paul: Yeah. And we’ve, you know, we’ve definitely increasingly blurred it over time. I mean, I think one of the first times we kind of said, oh, this isn’t really cryptography, was when we did AES key wrap support back in the day, when OpenSSL’s implementations of it weren’t very good. And, like, we still strive not to do it unless we have to. But there are scenarios where we choose to hoist it into ourselves because we believe we can do it better. I think the most recent example of that is actually something that we haven’t released yet, which is the HPKE support. OpenSSL does support HPKE, but we have chosen to implement it ourselves using OpenSSL’s primitives.

Deirdre: And why did you choose to do that?

Alex: So I think there were a couple motivations that come into it. One, it means you’re going to get a really consistent HPKE implementation no matter where you get your cryptography from. So, like, the OpenSSL HPKE implementation is not supported on LibreSSL, BoringSSL, AWS-LC, which I guess we’ll probably mention a lot in this conversation, the so-called forks. So implementing it ourselves means you get a consistent experience, because they’ll all have the underlying cryptography. It means we have a lot more control over kind of the compatibility surface. We’re not accidentally pulling in lax parsing behavior or other kinds of unintended behavior from OpenSSL. And it gives us the kind of high level of confidence that we’re handling all the edge cases, from both the security and correctness and just, like, not-crashing perspective. Paul, I don’t know if there were more motivations for you.

Paul: Well, I would say that the one that’s kind of the elephant in the room, the core component of part of the criticism that occurred in our statement, is that the HPKE APIs are only accessible through the OSSL_PARAM APIs.

Deirdre: Oh, goodness.

Thomas: So, okay, I want to stop you guys there. Right, so we’re kind of beating around the bush, and I think trying to let you guys introduce the talk that you guys did yourselves, but I kind of don’t want to do that. Let’s just let the cat out of the bag, you guys. You’re not super happy with the current state of OpenSSL, but it’s, like, not for the normal reason that people usually bring up. Like, you know, that’s why there’s LibreSSL, because we don’t trust the code or whatever. Although we can get into that later too. But for other reasons. Right, so.

And your reasoning is mostly about, or is entirely about, the design of OpenSSL 3, that version of the library. So I have not paid close enough attention to OpenSSL to know what the fuck OpenSSL 3 is. What’s roughly the timeline here, what happened? Like, I knew OpenSSL back in the Heartbleed days, and now there’s multiple OpenSSLs.

Paul: So, you know, Heartbleed happened in 2013. And at that time it became, like, a very well known fact that OpenSSL was kind of a project on life support that didn’t have the right resourcing and had various problems because of it. In the wake of that, OpenSSL got a surge of investment, both monetarily and also in human time. Like, there were a bunch of folks from Google, like Emilia Käsper and others, who went over there and did a bunch of work. There were folks who later founded some of the forks that were involved heavily. And then there was just, like, a large organization formed that had a bunch of full-time employees to do things. As a component of that, the organization realized that part of the way in which you sustain a cryptographic library is by catering to business interests around things like FIPS. The structure of FIPS is such that you want to isolate it off.

And that created a concept that was supposed to replace engines. Engines are a method of plugging things into OpenSSL; providers became the new thing. Providers are intended to be a superset that can do all that stuff, and in the abstract it’s a fine concept. However, in practice, they went down the path of this OpenSSL 3, which was a deliberate ABI breakage (they skipped 2 because they were worried about version numbering, based on the fact that LibreSSL had forked and called themselves 2 at the time, so they went to 3). And the consequence, like, the actions of 3 were basically: rewrite the entire internals without understanding the surface area of your own project. This led to an 18-month alpha/beta phase before release, and ultimately well over two years of delays from when they expected to be able to release it.

Thomas: So when did that land? Like when did OpenSSL 3 become. I assume that right now OpenSSL 3 is like the mainstream version everyone’s using.

Paul: That’s correct.

Alex: If you get your OpenSSL from like an Ubuntu or Debian or Red Hat, you’re going to get an OpenSSL 3 from a recent Linux distribution and then that landed in 2021.

Thomas: Okay, that makes sense to me. So now I know what you’re about to say, but let’s bring our audience up to speed on the suite of concerns that you guys have about the situation.

Alex: Yeah, so we’ve got a couple, and I’m going to say them in the order that we wrote the post in; as we’ll get into, I think we put them in the wrong order in the post. So the first concern, it’s the easiest to quantify, is performance. OpenSSL 3 had some really, really significant performance regressions. Things like loading an elliptic curve public key from, like, a SubjectPublicKeyInfo format, a really simple format to parse, got something like 8 times slower between OpenSSL 1.1.1 and OpenSSL 3.
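
For readers who want to pin down exactly which operation is being described here, a minimal C sketch of the call in question follows. It parses a DER-encoded SubjectPublicKeyInfo into an EVP_PKEY using OpenSSL’s d2i_PUBKEY; the buffer contents and error handling are left out, and this is an illustration of the operation being measured, not the hosts’ actual benchmark harness.

```c
#include <openssl/evp.h>
#include <openssl/x509.h>   /* d2i_PUBKEY */

/* Parse a DER-encoded SubjectPublicKeyInfo (e.g., an EC public key)
 * into an EVP_PKEY. This is the operation described as roughly 8x
 * slower in early OpenSSL 3 than in 1.1.1. The caller owns the result
 * and must EVP_PKEY_free() it. */
EVP_PKEY *parse_spki(const unsigned char *der, long der_len)
{
    const unsigned char *p = der;   /* d2i_* functions advance this pointer */
    return d2i_PUBKEY(NULL, &p, der_len);
}
```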

Deirdre: Was it doing the on curve math? Was it checking that? It was like. No, it was just parsing.

Alex: Yeah, it’s like, anything you can come up with that’s, like, the cryptography, the math, that might take this time: it’s not it. It was that the abstraction for how providers worked, and the interplay between it and the DER parser, and just, like, the public API, all that back and forth, had so many indirect calls, so much allocation, so much locking. OpenSSL had a whole format auto-detection thing that happened. Like, you know, you find the issues in the OpenSSL bug tracker where folks are breaking down, like, where is the time going, and it’s truly just that the parsing itself had become so convoluted that time was spent in nonsense places. We’ll say, since then OpenSSL has made some improvements, and now it is only 3 times slower than it used to be, as of the last time I measured. So, like, that’s quite significant. Like, I want to just give people a data point for how extreme this was. We have our own X.509, like, path validation code, whoops, what people would call X.509 verification, and doing our own public key parsing there, that is, moving the public key parsing from OpenSSL to our own Rust code, no other changes, was a 60% performance improvement on end-to-end X.509 validation. Like, that’s just how extreme this overhead was relative to what is empirically possible. So this was kind of the first and most easily quantifiable: the performance was insane. And I think, well, maybe we get into this more.

A lot of people think that this is maybe our biggest complaint because it’s the easiest to quantify, but it’s not. We shouldn’t have put it first, because our real complaint is the complexity that led to the performance regressions. The fact that the provider APIs were designed, or evolved, in such a way that the abstraction boundaries were unclear, and you had really extreme performance regressions that came down to things like: if you load a hash algorithm through a function like EVP_sha256(), which is, like, “get the SHA-256 hash object,” that is slower, because what that API is doing is getting an object that represents a future promise that “I will call the provider API later to actually find the SHA-256 implementation from your provider.” And that can change at any time, in theory, because the provider APIs don’t say that once your program’s initialized, you can’t change where the cryptography comes from. So there’s just tons of back and forth, lots of indirect calls, lots of allocations, lots of locking, lots of caches to try to compensate for that. The degree of complexity in the internals got really extreme, but it also got pretty extreme in the public APIs.
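
To make the fetch indirection Alex describes concrete, here is a small C sketch contrasting the legacy EVP_sha256() convenience, where the provider lookup happens implicitly when the digest is used, with explicitly fetching the algorithm once via EVP_MD_fetch() and reusing it. It is a sketch against OpenSSL 3’s documented EVP APIs, with error handling mostly omitted.

```c
#include <openssl/evp.h>
#include <stdio.h>

int main(void)
{
    const unsigned char msg[] = "hello";
    unsigned char out[EVP_MAX_MD_SIZE];
    unsigned int outlen = 0;

    /* Legacy style: EVP_sha256() returns a static EVP_MD handle; under
     * OpenSSL 3 the real provider implementation is resolved implicitly
     * each time an operation uses it. */
    EVP_Digest(msg, sizeof(msg) - 1, out, &outlen, EVP_sha256(), NULL);

    /* Explicit-fetch style: resolve "SHA2-256" from the loaded providers
     * once, reuse the fetched EVP_MD across operations, then free it.
     * This amortizes the provider lookup instead of repeating it. */
    EVP_MD *md = EVP_MD_fetch(NULL, "SHA2-256", NULL);
    if (md != NULL) {
        EVP_Digest(msg, sizeof(msg) - 1, out, &outlen, md, NULL);
        EVP_MD_free(md);
    }

    printf("digest length: %u bytes\n", outlen);
    return 0;
}
```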

Paul: Yeah, I mean, I won’t add a lot here other than to say the ultimate outcome of these attempted fixes for the performance regressions is that we have full-on RCU code, which has had bugs in OpenSSL since they landed it, just to try and resolve some of these issues. And, like, the nature of the DER parsers is actually that they chain things together. Like, it’ll progressively try different things, which led to a bunch of bugs during the OpenSSL 3 betas where they were leaving errors on the error stack, because they didn’t know they were doing these chained attempts to parse things.

Deirdre: What, they were doing, like, different implementations of DER parsing, and they would try one, and then if it didn’t throw, it would try another one?

Paul: So OpenSSL has these auto key-loader APIs where it’s like, give us some DER and we’ll figure it out and give you back what you want. But that necessarily means that it’s going to try a bunch of things, rather than simply reading the OID out of it and then dispatching correctly. So you get a bunch of performance issues from that.

Thomas: I have two questions. Right? So first of all, the kind of YOLO-mode, just-try-all-the-different-formats key parsing thing: was that an OpenSSL 1 thing, or did they come up with that in OpenSSL 3?

Paul: OpenSSL 1 did have the same behavior on some limited APIs, like d2i_AutoPrivateKey or something of that nature. So that behavior was there, but it was a much faster one, in that it dispatched based on the OID at the time, as opposed to this one, which needs the providers, and there’s not a mechanism for providers to necessarily declare exactly what they support, so you have to just incrementally try them until one works.

Thomas: I’m a little familiar with the OpenSSL code or the OpenSSL code of old. Right? I don’t remember concurrency being a huge thing in that.

Deirdre: Yes.

Thomas: So you said there’s, like, a full-on implementation of RCU now in OpenSSL 3. Like, what is that? You guys would know way better than we would. What is the concurrency situation, if this is a crypto library?

Alex: Yeah, it’s a crypto library. But, like, you have various APIs for manipulating, like, global state, like adding a new provider to the global context, for example. And so, like, if you haven’t done anything to preclude that happening at arbitrary points in the runtime...

Thomas: Wait, wait, wait, wait, wait, wait, wait, wait. Why am I, why am I adding new providers to the runtime state of a running program?

Alex: I mean, that’s roughly the question Paul and I would ask. Like, you think that’s just, like, a design mistake, that you can support that. But if you’ve chosen to support that, and are now, like, going back to, like, you know, you’re not quite sure what your API contract with users is, and now you have to deal with concurrency bugs, people are reporting, like, TSan issues or, like, performance issues, like, now, like, yeah...

Thomas: Hold on, hold on. RCU is what you use when you’re locking so much, in so many hot paths, that you can’t actually do mutexes, that you actually need an optimized concurrency primitive. Because, I get, like, you need a lock because you can add a provider. Can you, like, add and remove providers in a hot path?

Alex: Nothing precludes you from doing that.

Paul: Yeah, there was not necessarily, like... I want to be charitable here. So, like, maybe the answer may not be that they didn’t have a design ethos around it, but, like, for whatever reason, as they went down the path of implementing this, the answer of “support everything at all times” became the correct answer. And, like, that led inexorably to these types of choices.

Thomas: Because engines in OpenSSL, that was there for, like, people doing crypto card stuff, right? Yeah, at the time it was for, like, people with coprocessors, accelerators. Right. So the idea here is, like, you’re supporting a use case where somebody has, like, you know, a card or something like that that they’re inserting and removing, like, several times per millisecond, is why you would need RCU.

Paul: I think the, I mean, the RCU component is just what comes out of the fact that they have to do locking in so many places just to check. The actual reality is, as best I can understand it, again steelmanning the concept of the provider: there is a design goal in OpenSSL that it is an abstracted substrate upon which anyone can do anything, including adding features that were never considered by the OpenSSL people, basically linked into any arbitrary program. Right? So, like, oh, in the future, 10 years from now, I can load arbitrary providers into some old piece of kit and get new stuff.

Alex: Yeah, and, like, you do see examples of this. Like, before OpenSSL had post-quantum crypto algorithms in it, there were third-party providers that would provide them. And so you could get ML-KEM inside OpenSSL before OpenSSL supported it. Like, I think we would say this is just not the correct allocation of resources towards your problems.

Deirdre: But you know it makes me so nervous. Like I, you know I don’t have, I don’t maintain a project like this. Like I’ve maintained large rust projects but.

David: Like no, like Lib SDL works kind of like this. It’s designed to both be statically which it’s a, it’s a like cross platform graphics library that’s designed to be statically linked but also like sometimes it gets.

Alex: Swapped out and replaced with a different.

David: One because you know, maybe Valve like made it run on Linux but wants it to look like it’s Windows to trick some game from 10 years ago to run on Linux and you can kind of see how you end up there. I don’t quite understand what the like surrounding ecosystem is where you need to do this for cryptography as opposed to like games which are I was about to say notorious for being developed once and then left on for years but then I just realized I was describing HSMs.

Alex: Yeah, I mean, I think what I would say is that, like, if you really sat down with “we want to make more things pluggable,” I think there is probably a design you could get for providers where you put state in the right places, you have the indirect function call in kind of the right place in the stack, that balances the complexity and the performance and the maintainability of the system. I think there are useful points on the trade-off curve that are not just “everything is static.” But I think the point OpenSSL ended up on, where the SHA-256, or the result of EVP_sha256(), is an abstract object which will call something on the provider later, and that’s allowed to change at arbitrary points, is, like, not a particularly useful point on that trade-off curve.

Deirdre: Yeah, no, God.

Thomas: I was, I was, I was set off on the whole RCU thing. So I was wondering, it kind of made sense to me that if they did a whole bunch of new concurrency stuff like that’s a way I could see you getting like a 6x performance regression. Right. Because. Okay, you’re just like, you know, you’re contending on locks or something like that.

Alex: I mean, and there certainly are a lot of locks, particularly in the earliest profiles.

Thomas: Okay, but it’s not your sense that. The vibe I get from you is that it’s literally just calling through bullshit indirection code. It’s not waiting on locks.

Alex: I don’t want to say there’s no locks anymore. The point of RCU is now you’re spending less time waiting on the locks. But like I said, it’s still 3x slower than what we think is a reasonable baseline for parsing SPKI DER.

Paul: And I will note that I think we’ve fallen to the same trap because of the performance stuff is so easy to talk about. But again, the performance is not our critical concern here. It is the complexity that led to that issue.

Alex: Yeah. Which I think it’s worth building on. Kind of the complexity in the public APIs is like a really important other thing that is honestly maybe the most pressing thing for us.

Deirdre: Yeah. You mentioned OSSL param and like, you know, I’m poking around in there in OpenSSL for post quantum stuff and I’m seeing this all over the place. Where does OSL param come from and why is it there?

Alex: Yeah, so OSSL_PARAM is another one of the kind of new APIs from OpenSSL 3, and it is, effectively, many public API functions, instead of taking a list of arguments to a function, take an array of OSSL_PARAMs, which is basically an array of key-value pairs. So instead of passing... I don’t know, if you’re calling, like, Argon2, you’ve got a key derivation function. You’ve got your key material, you’ve got your salt, and maybe a length, because it’s C, so you’ve got your pointer and your length, fine. And your desired output, and the, um, I can’t remember the third parameter’s name.

Deirdre: Sure.

Alex: Instead of... you pass each one of those to a function as arguments? The way this works in OpenSSL 3 is that you create an OSSL_PARAM array, and it’s got, you know, string “key”, pointer to the key; string “salt”, pointer to the salt, salt len. And, like, that is how it works. You know, you’ve got types for each of the values, because, like, you wouldn’t want to...
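
Here is a compile-able sketch of the shape Alex is describing: every argument becomes a (string key, typed value) entry in an OSSL_PARAM array, terminated by an end marker. Alex’s example is Argon2; to stick to parameter names I’m confident have existed since OpenSSL 3.0, this sketch derives a key with HKDF instead, but the OSSL_PARAM plumbing follows the same pattern.

```c
#include <openssl/core_names.h>   /* OSSL_KDF_PARAM_* key names */
#include <openssl/evp.h>
#include <openssl/kdf.h>
#include <openssl/params.h>
#include <stdio.h>

int main(void)
{
    unsigned char ikm[32] = {0};    /* input key material (demo value) */
    unsigned char salt[16] = {0};   /* salt (demo value) */
    unsigned char info[] = "example context";
    unsigned char out[32];

    /* Every argument is a key/value OSSL_PARAM instead of a normal
     * typed function parameter; the array ends with an end marker. */
    OSSL_PARAM params[5], *p = params;
    *p++ = OSSL_PARAM_construct_utf8_string(OSSL_KDF_PARAM_DIGEST,
                                            (char *)"SHA2-256", 0);
    *p++ = OSSL_PARAM_construct_octet_string(OSSL_KDF_PARAM_KEY,
                                             ikm, sizeof(ikm));
    *p++ = OSSL_PARAM_construct_octet_string(OSSL_KDF_PARAM_SALT,
                                             salt, sizeof(salt));
    *p++ = OSSL_PARAM_construct_octet_string(OSSL_KDF_PARAM_INFO,
                                             info, sizeof(info) - 1);
    *p++ = OSSL_PARAM_construct_end();

    EVP_KDF *kdf = EVP_KDF_fetch(NULL, "HKDF", NULL);
    EVP_KDF_CTX *kctx = EVP_KDF_CTX_new(kdf);
    if (kctx != NULL && EVP_KDF_derive(kctx, out, sizeof(out), params) == 1)
        printf("derived %zu bytes\n", sizeof(out));

    EVP_KDF_CTX_free(kctx);
    EVP_KDF_free(kdf);
    return 0;
}
```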

Thomas: Yeah, I’m looking at that now. I’m looking at the Claude summary on this now. And it’s like: integer, unsigned integer, UTF-8 string, octet string. Does it really have octet string?

Alex: It does. You want to pass, like, arbitrary bytes. You think your salt should be, like, UTF-8?

David: I mean it sounds like they’ve created C JSON.

Deirdre: Oh goodness.

Alex: Yeah.

David: Or the C JSON TypeScript interface.

Alex: Yeah. And, like, you know, again, to try to steelman this: our understanding is this is part of just the “make things very abstract” theory, of, like, well, you could write a program that, I don’t know, has a configuration file and reads some parameters from it. And that would mean that somebody could bring a new algorithm and new types of parameters, and you’d never have to update your program, because it all flows through this abstract OSSL_PARAM. In practice, our experience, for the things we are trying to do, is it means it’s very difficult to tell what arguments the function takes. It’s very difficult to tell whether you’re passing them correctly. You are losing a whole bunch of static type checking that you would normally get from a computer program. It makes things slow, and it makes the OpenSSL code much more complex. Right.

Like, you think, like, oh, I have the name of a variable that’s now given to my function; like, that’s just clearly much simpler than “I’m going to go root around in the array.” And in fact, many C source files in OpenSSL now have a custom Perl preprocessor to make dealing with these simpler.

Thomas: Oh no, wait, wait, wait, hold on, hold on, hold on. In the new OSSL_PARAM world in OpenSSL 3, yeah, none of the cryptography interfaces are type safe anymore? They all just take abstract arrays of parameters?

Alex: All the checking is... none of the new ones are. Like, there are still old APIs.

Thomas: It’s all runtime checking now. They’re just like, okay, correct.

Deirdre: Yeah, so we’re going backwards.

Thomas: Yeah.

Paul: Any new interface, like EVP_KDF and EVP_AEAD, those are all interfaces that now require OSSL_PARAM, and almost any new feature added does require it. We’ve spent a lot of time and energy trying to not use OSSL_PARAM except where necessary. And I think we currently have two places we consume it, but we actually abstract it away by pushing it into rust-openssl.

Thomas: Let me steelman this design. All right, what if the idea here is that no normal person is ever supposed to use this interface? Which is the only way I can think, the only way I can think, to describe a cryptography interface where, like, the IV isn’t type safe at compile time.

Deirdre: Instead you’re saying me and my collaborators are not normal because we did just that the other day.

Thomas: Hold on. I’m saying only two people in the world are ever supposed to consume OSSL_PARAM, and it’s Alex and Paul. And what you guys are supposed to do is take that and then turn it into something reasonable.

Alex: So, I mean, I think what I would say is, like, if you’d come to the idea that your internal APIs needed to have this for whatever reason, right, you needed, you know, more flexibility in the provider API, because that ABI had to be the same forever... like, if you had a theory that this needed to be kind of your internals... I’m not sure I would ever reach that conclusion, but if I did, it seems like what I would want to do is have public APIs that are entirely type safe and construct these things internally. And it’s just not the case. Like, for example, if the thing you would like to do is configure OpenSSL to do elliptic curve signatures that use deterministic nonces, like, as specified in the RFC, the way you do that...

Thomas: Yeah, hold up, hold on a second. You’re telling me that in the new system, if I want to do GCM or something and I’m passing a nonce in, it has to do a string compare to find the nonce key?

Paul: Correct.

Alex: I think in practice there’s old APIs that were type safe for things like doing a symmetric encryption.

Paul: But I would say, Alex, they’ve been significantly reimplementing some of the old APIs using the new APIs, so that underneath the hood they generate OSSL_PARAMs, and then there’s definitely a string comparison.

Alex: So that’s better than making me know about them. Because, literally, if you want to do an elliptic curve private key signature and you want to use a deterministic nonce, RFC 6979, the way you do that is you create an OSSL_PARAM array with two entries, one of which is a bool indicating true and the other is the marker for “this is the end of the params,” and you pass that to EVP_PKEY set_params or whatever the function’s named. You can’t just pass a bool to, you know, enable deterministic nonces.
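
A hedged sketch of what that call sequence can look like in C follows. The parameter name and type are the part to double-check: my recollection of OpenSSL 3.2 is an unsigned int parameter named "nonce-type" where 1 selects RFC 6979 deterministic nonces, while Alex remembers it as a bool. Treat the specific key and value here as assumptions and confirm them against core_names.h and the provider docs before relying on this.

```c
#include <openssl/evp.h>
#include <openssl/params.h>

/* Sketch: sign msg with ECDSA using deterministic (RFC 6979) nonces by
 * pushing an OSSL_PARAM through EVP_PKEY_CTX_set_params.
 * ASSUMPTION: the parameter is the unsigned int "nonce-type" with 1
 * meaning deterministic, per my reading of OpenSSL 3.2; verify before use. */
int sign_deterministic(EVP_PKEY *ec_key,
                       const unsigned char *msg, size_t msg_len,
                       unsigned char *sig, size_t *sig_len)
{
    EVP_MD_CTX *mctx = EVP_MD_CTX_new();
    EVP_PKEY_CTX *pctx = NULL;          /* owned by mctx after init */
    unsigned int nonce_type = 1;        /* 1 = deterministic k */
    OSSL_PARAM params[2];
    int ok = 0;

    params[0] = OSSL_PARAM_construct_uint("nonce-type", &nonce_type);
    params[1] = OSSL_PARAM_construct_end();   /* end-of-params marker */

    if (mctx != NULL
        && EVP_DigestSignInit(mctx, &pctx, EVP_sha256(), NULL, ec_key) == 1
        && EVP_PKEY_CTX_set_params(pctx, params) == 1
        && EVP_DigestSign(mctx, sig, sig_len, msg, msg_len) == 1)
        ok = 1;

    EVP_MD_CTX_free(mctx);
    return ok;
}
```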

Thomas: Like you create an array, it’s like they’re implementing Ruby.

Alex: Yes. Like, so, like, I used to be a programming languages person. Like, as Paul mentioned, I got my start working on cryptography stuff because I was working on PyPy, the Python implementation. And, like, yes, this is what it looks like if you’re writing an interpreter: like, yeah, you know, a user can define arbitrary functions, so you have to have an array of parameters. Like, yeah, this is what an interpreter looks like. But, like...

David: You don’t get out of this discussion without noting you also worked on PyRuby.

Alex: It was not called PyRuby, it was Topaz. But yes, yes, I created a Ruby interpreter.

Paul: I will say, every time I end up doing OSSL_PARAM things, what it feels like to me is actually, like, Apple’s design aesthetic from the NeXTSTEP era: like, NSMutableDictionaries everywhere, with all the same sort of challenges, where maybe the principal, like, golden path has been tested, but God help you if you pass anything out of the ordinary; who knows what’s going to happen. Which I think should segue nicely to testing.

Thomas: I got to say, passing in as parameters to a function, a dictionary of random string keys and values does feel very Pythonic to me.

Deirdre: But it’s, but this is the part that’s in OpenSSL. It’s supposed to be a C library.

Alex: Yeah, they just renamed OSSL_PARAM to Paramount.

David: Just rename OSSL_PARAM to kwargs and suddenly you guys would be all over it.

Alex: I mean, it’s not really our design aesthetic for Python. Like, you know, I think we have pretty strongly typed APIs within the Python world. But, like, yeah, that would be a recognizable dynamic programming language aesthetic, that is definitely true. I just don’t see why you would bring that aesthetic to C. Like, that’s not C’s problem.

Deirdre: So all this dynamic stuff and lack of static validation and verification seems like it would make it hard to test or at least harder to test.

Paul: So it’s definitely a difficult thing to test. It is also the case that the OpenSSL project was founded in the 90s, when aesthetics were different. However, over the course of the decades the OpenSSL project has been around, and including the time now where they are a very well funded organization with more full-time engineers than work on any of the forks, to our knowledge...

Alex: They have a foundation and a corporation.

Paul: ...and a corporation and 17 different interest groups at this point, they still struggle with testing. So, like, Alex and I consistently joke that the Python Cryptographic Authority is a CI engineering project that incidentally produces a cryptography library. And part of the reason we make that joke is that it reflects our real belief that that type of investment in continuous integration and testing pays dividends in terms of software engineering velocity and the quality of the product we deliver. We spend so much time on it that it can almost make the other work we do seem trivial. Unfortunately, the OpenSSL project... I mean, we’ve worked with a lot of these folks, we like these people, but it’s important to note where struggles continue. And one of those things is that, despite all this time, the OpenSSL project does not prize testing in the way that we prize testing.

We have seen... there are many ways in which you can judge this, but one of the ways that is fairly prominent is: go and look at any bug fix or new feature set that lands in OpenSSL and look to see whether or not there are tests. New features, typically yes; bug fixes, frequently no. And if you ask about them, they won’t say there shouldn’t be tests, but they may not happen. They’re not prioritized in the way that you would expect at a project like this. Similarly, you have a large CI matrix. Alex and I spend a lot of time and energy making sure our CI matrix is clean and fast, because otherwise it is very painful. In fact, as a related note, OpenSSL shipped a bug fix release today, which meant we shipped a release, because we statically link OpenSSL in our binary artifacts, the wheels. And because we’ve spent two weeks with Windows ARM64 builders failing, we removed Windows ARM64 support. That’s how serious we are about this sort of thing.

Alex: Failing not because we landed a regression, but due to a platform issue in GitHub’s Windows ARM 64 runners.

Deirdre: Yeah, it was because of the runners.

Alex: Correct?

Paul: The runners, it’s their platform. It’s not the first time we’ve had issues. This is not a conversation about how Microsoft owns GitHub and yet somehow they don’t prioritize Windows ARM64 at all. But, like, we’ve had enough issues that, like, we gave them a lot of time, but we care very deeply about our CI working and being performant. And so we were willing to remove it, even though it is painful.

Deirdre: That’s indefinitely until those things can work again.

Alex: Until they work and we have confidence they’ll stay working.

Paul: Yeah, okay, exactly. I need a track record behind it because fixing it is fine, but like, I need to see it working for an extended period of time and that like, they actually respond in a timely fashion to future issues.

Deirdre: Yeah, if failure is just flapping all the time, then it’s a noisy signal and it’s not a useful signal and then it’s just.

Alex: Yeah, flapping all the time is, like, a useful segue, in that that is a big problem with OpenSSL’s CI. So really the apex of this was: OpenSSL 3.0.4 had a buffer overflow in the RSA implementation for AVX-512-capable CPUs. And in fact this failed in CI sometimes, when the CI runner happened to be allocated on a machine with AVX-512. But it didn’t always.

Deirdre: Because it was a different architecture? Like, it was random in GitHub Actions?

Alex: Well, so GitHub Actions doesn’t guarantee which CPU class you’ll get. And so sometimes you built on a machine with AVX-512, sometimes you didn’t. And so when you were on AVX-512, your tests probably failed; like, it’s a buffer overflow, so there’s always an element of luck. But it was not noticed, because tests were kind of always flaky. And, you know, I think this reflects two issues. One, tests being flaky all the time, which is a really persistent issue.

Like, the day we wrote our slides for the original talk version of this, five of the ten most recent commits had failing CI checks. And when we checked again the day before we delivered our talk, every single commit had failing builds for cross-compilation. So the first issue is just, like, lack of really prioritizing stability. The second issue is lack of investment in the kind of infrastructure... like, OpenSSL is a project with lots of per-platform assembly. And Intel in fact offers a tool called Intel SDE which basically lets you dynamically toggle CPU features on and off. So you can, you know, simulate a CPU without AVX-512, a CPU without SSE3, and so you can in fact test all of the combinations of assembly you have. And there are forks that use this in their CI to verify all of their assembly. And, you know, this is not present in OpenSSL’s CI. And, like, we think that’s a real miss. Like, that’s just a missed opportunity.

Deirdre: Like, look, I was stunned that you, like, can’t make sure that you get, like, an AVX-512-capable runner in GitHub Actions. And, like, the answer is, okay, maybe you can’t, but that means you put your AVX-specific stuff behind a flag, you put online your own custom runner, you rack your own hardware, or you pay someone to rack the hardware, and then you have the flag so that when you do allocate to your custom runner that always runs on AVX, then you unflag your flag and then you test your AVX-specific implementation. I know lots of projects that do this.

Alex: Yeah.

Thomas: And now a word from our sponsor, Fly.io, provider of AVX-512-capable machines, and from the official hot sauce of Security Cryptography Whatever, Crystal Hot Sauce.

Alex: What, a salt brand?

Thomas: It’s also... you’re thinking of Diamond Crystal, maybe.

Deirdre: Yes, yes.

Paul: I’m going to be very disappointed if that doesn’t make it into the podcast. So, like, I mean, there’s, as you’ve noted, Deirdre, a bunch of ways you can slice this such that you get the type of testing coverage you want. It’s also the case that OpenSSL is at this point a big tent with a wide variety of supported things, and they actually do have leverage in this ecosystem. So, like, if and when someone comes and says, I would like you to land, like, architecture-specific assembly for my pet architecture, it would not be out of bounds for them to say: supporting that in our system looks like the following. And frankly, it would probably look similar to what Alex and I have said in the past around PowerPC 64 little-endian support, or Windows ARM64, which is: you will provide ephemeral runners that maintain no state and integrate into our CI such that we do not have the responsibility of managing them, and they will work in the following fashion.

Deirdre: And if they do not, we drop you.

Alex: Yeah, I mean, to give a real concrete example: there’s an open bug right now against OpenSSL for the assembly for SPARC, for doing I don’t even know what. And it’s got some bug in an optimization it has. And, you know, the OpenSSL folks have basically said, look, SPARC assembly is not maintained by the OpenSSL core team; like, this is community maintained. What I would encourage them to do is say: this is a bug, therefore we’re disabling this optimization. If the maintainer of the SPARC platform wants to contribute CI or contribute fixes, we will accept those. But we don’t want to ship this buggy thing, and we don’t want to give users the impression that, you know... like, we don’t want to carry all of this performance-sensitive and buggy code. We would rather ship the slower thing that’s guaranteed to work and put the onus really on...

Like, if the SPARC owner wants that to be a fast, OpenSSL-supported thing, they should do the work. And so, you know, if the OpenSSL project pushed on things like that, we’d be very supportive.

Deirdre: We haven’t even touched on all the work that the Python Cryptographic Authority has done on moving towards doing a lot of the riskiest stuff in a memory safe language like Rust. But, like, you, just the two of you and your project, did a ton of stuff on your own, and we’re...

Thomas: Not, you know, it’s 20, it’s 2025 at this point.

Deirdre: It’s 2026.

Thomas: All of it’s 20... I’m sorry, it’s 26, 2026. It’s like the most scrutinized C code on the planet. All of the low-hanging-fruit memory corruption in OpenSSL is gone now, right? How likely is it there was one today?

Paul: Literally today.

Deirdre: The 27th.

Alex: Yeah, we were originally scheduled to record this podcast yesterday, and if we had recorded this podcast yesterday, we would not have been able to discuss it. There were several pieces of memory corruption in OpenSSL that were disclosed today. And, like, if we had recorded yesterday...

Thomas: I would have had you with that argument.

David: I’m kind of, like, with you, Thomas. Like, adults understand how to write small bits of C code without, like, totally screwing it up. Part of that is having the judgment to not write large bits of C code. Parsers for, like, length-type-value, and, you know, deterministically bytes-in, bytes-out functions, like, we should be able to write in Rust, or excuse me, in assembly, in C, without making huge mistakes. So the people that are working on OpenSSL don’t seem to be able to do this. Many people exist who are capable of doing this. Many of them have been guests on this podcast.

Alex: So I will agree partially with that. Like, folks who are familiar with my work know that I talk a lot about memory safety as a language-level concern, and about how in the long term we need to be looking at C and C++ replacements, or, to the extent the C and C++ standards committees have any openness, to making their languages memory safe. But it is absolutely true that you can write small, where small is probably less than a few thousand lines of C code, for well-defined tasks that don’t change very often, and maintain memory safety. Like, that’s an observable fact. Like, BoringSSL is a code base like this that has a very low rate of vulnerabilities, because its maintainers are very diligent maintainers: they take testing seriously, they just think hard about the changes they are making. But it is also just true that complexity and velocity are real-world phenomena. And part of how BoringSSL is able to be that diligent is by having a very narrow scope that is roughly the set of things Google exclusively cares about. And OpenSSL does intend to cater to a larger audience.

It has more features, supports more functionality, and that’s reasonable and very useful. When we compare the set of features OpenSSL has that BoringSSL does not, there are things missing that we would like. So complexity and velocity are real-world phenomena, scope is a real-world phenomenon, and so you need approaches to security that are responsive to those real-world constraints. And so we think you just have to have a design approach, and, like, a memory safe programming language is by far the strongest one to not have certain classes of vulnerabilities. Like, formal verification is another thing from that bucket, for how you write certain types of programs very, very safely.

Thomas: I want to talk a little bit more about how you guys are Ship-of-Theseus-ing the OpenSSL library with your own Rust code. But before we do that: the thing today was, if you give OpenSSL a P7 file that is encrypted with an AEAD cipher, is encrypted with GCM or whatever, the EVP code, the high-level OpenSSL library, when it goes and tries to parse the P7 file and pull the nonce out of the... it’s DER, right? Or is it BER? Is it BER or DER in P7?

Paul: It can be either.

Thomas: Wonderful. Okay, when it goes to pull the nonce out of the goo, right, it spills that goo all over the stack, right? So, like, that... I guess the first question I have is, we were talking about this earlier, but I’m still wondering if this does or does not hit you guys.

Alex: So this doesn’t hit us. While we do have some PKCS7 APIs that do still use OpenSSL for parsing, for reasons we can talk about, our PKCS7 decryption APIs use our own Rust DER library, and, you know, totally memory safe parsing, not reachable.

Thomas: This is a little, this is a little off topic but like for somebody who’s asking what the attack surface is for P7 files, what does that look like?

Alex: So PKCS7 is, like, a pretty widely used container format. Like, it pops up in all sorts of places. Like, S/MIME is a PKCS7. So if you’re doing encrypted email, like, not PGP, but, you know, what Microsoft Exchange will encrypt your emails with, that’s a PKCS7 format. I think Microsoft’s code signing format does it. Like, it pops up in all sorts of places like this. Like, if the thing you were shipping is roughly a signature, an encrypted blob, and some metadata, and particularly if it’s maybe a slightly older standard... like, I think modern cryptographic aesthetics are that container formats like this are not particularly useful, but in older things they were super common.

Deirdre: Mm.

Paul: Yeah. It’s actually... I forget what the underlying weird name for the standard is, but Apple Pay also actually uses PKCS7 signatures in the backend. But not with OpenSSL, one assumes.

Alex: Certainly there is actually a bunch of BoringSSL.

Paul: Yeah, they own their own crypto, but, like, the underlying APIs for a bunch of Apple’s stuff are actually BoringSSL underneath.

Thomas: I looked today and LibreSSL didn’t have the bug. LibreSSL, in the code where they pull the nonce out, is doing an explicit length check. Does BoringSSL have the bug?

Alex: I don’t believe so, no. Pretty sure. So BoringSSL actually published today a whole bunch of notes on past OpenSSL vulnerabilities, on whether they were affected and what customers of BoringSSL need to know. And I’m almost positive that this is one of the ones marked “this bug was introduced after we forked.”

Thomas: Gotcha. Okay, so like you guys missed this bug. You missed out on all the fun because you rewrote this part of OpenSSL.

Deirdre: I wouldn’t say they’re missing it, Thomas.

Alex: Yeah, we missed this bug. There were a handful of other bugs in today’s release that did impact us. I’d have to go back and look at the full list to tell you which ones.

Thomas: How much of OpenSSL are you guys going to rewrite in Rust?

Paul: Anything that’s not cryptography. Cryptography itself, the core crypto... like, we heard that one before; we...

Alex: We opened with how that’s false. Yeah. So, like, the thing I would say is, we will do anything that is parsing, that is, you know, serialization, deserialization, that is orchestrating cryptography, like kind of the HPKE that we mentioned.

Deirdre: Canonical encodings.

Alex: Yeah, we’re pretty close to saturated on this stuff. Like, almost all parsing at this point, whether it’s, you know, public keys, private keys, whether it’s X.509 certificates and CRLs, all of that stuff is in Rust.

Paul: Yeah, I think the only... path building? Yeah, the only exception at this point is actually what Alex alluded to earlier, which is that, because PKCS7 does support BER in addition to DER, we do have, in one code path, a fallback where, if we can’t parse it using our DER parser, we hand it to OpenSSL, because we have not implemented BER in Rust.

Alex: Yeah.

David: And for those.

Alex: No, we probably won’t. Yeah. For those for whom this, like, alphabet soup of BER and DER is just causing their eyes to glaze over: the really short version is, BER and DER are two different ways of serializing kind of the same ASN.1 data, and DER is a subset of BER that basically takes away all of the flexibility that BER has. Like, BER will let you do kind of, like, very bizarre things that make it much more complicated to parse. And we basically decided, like, BER is a bad idea for all modern cryptographic standards, and DER is just much more compact from a, like, surface perspective.

So, like, we restrict ourselves to DER, and that’s the only thing you need to care about for things like X.509 certificates. Kind of standards that are less well tended than X.509 is, maybe, I will say, like PKCS7, still have a lot of BER in their ecosystems, and we refuse to support BER. And so we will use OpenSSL in the places where we have a compatibility need to parse BER.
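
For a concrete picture of the flexibility Alex means, here are two legal BER encodings of the same trivial ASN.1 value, a SEQUENCE containing the INTEGER 5, written out as C byte arrays. DER permits only the first, definite-length form; BER also allows the indefinite-length form, which forces a parser to scan for a terminator instead of knowing the size up front.

```c
/* The same ASN.1 value, SEQUENCE { INTEGER 5 }, in two legal BER encodings.
 * DER allows only the definite-length form. */

/* Definite length (valid DER and BER). */
static const unsigned char seq_definite[] = {
    0x30, 0x03,        /* SEQUENCE, length = 3 bytes */
    0x02, 0x01, 0x05   /* INTEGER, length = 1, value = 5 */
};

/* Indefinite length (valid BER only): length byte 0x80 means "read
 * until the 0x00 0x00 end-of-contents marker". */
static const unsigned char seq_indefinite[] = {
    0x30, 0x80,        /* SEQUENCE, indefinite length */
    0x02, 0x01, 0x05,  /* INTEGER, length = 1, value = 5 */
    0x00, 0x00         /* end-of-contents marker */
};
```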

Thomas: How’s that gone for you guys? How’s that? I keep stepping on Deirdre and I’m sorry, but I want to hear the story about how it’s gone. Taking Python cryptography and making it a Rust project.

Alex: Yeah, I think it’s gone.

Paul: It’s like a whole podcast of its own.

Alex: But yeah, and we get it.

Thomas: It was complicated and dramatic and not a great experience is what I’m hearing.

Alex: No, no. I mean, so we gave a talk about this at PyCon a couple of years ago that really focuses on the initial release and what that migration looked like. But, to be clear, there were challenges and some drama early on, where, like, we...

David: When is early on? Like, how many years ago is this?

Alex: 2022? I want to say that’s not that long ago. 2021.

Paul: No, it’s 2021, Alex.

Alex: I’m getting the exact date, because it’s, like, February 2021. Yeah, yeah, February 2021, we do our initial release. So it’s like five years ago now, and the initial release has some drama. Like, we were pretty aggressive in pushing Rust into the ecosystem. Users who were not getting wheels, who had to compile it themselves, like, were...

And if they weren’t pinning their versions, like, they woke up one morning and, like, why is my Ansible CI pipeline failing with, like, no rustc on my path? Like, what is this garbage? But, like, we have, with lots of help from other folks in the community, we have pushed past that. Like, now we ship wheels for a great many platforms; that is no longer a problem. And so, like, you know, I think your question is mostly about how has migrating our own code been. And I think it’s pretty much just been an across-the-board win. We have much better performance on all of our parsing APIs, as we kind of alluded to earlier. We have a much clearer compatibility surface. Like, because we own the parsing, we understand exactly what are the places we’re being lax, because, like, you know, the specification has, kind of in an HTML style, diverged from common practice.

And we know where we’re being strict. There are much clearer abstraction layers between things. I’ll give you a concrete example of why I think it’s just, like, better. So we have a bunch of X.509 certificate APIs. We used to implement those on top of OpenSSL’s X509 APIs. And so when you did something like sign an X.509 certificate, you’re creating a new X.509 certificate, and you do a signature, and you’d pass in a private key to do that signature. But actually that private key had to be an OpenSSL private key.

We nominally had these abstract APIs that you could implement for a private key. But if you didn’t pass an OpenSSL private key, we didn’t have a private key to pass to the OpenSSL signature API. So there’s this real abstraction failure, and it’s not an uncommon one; you see this in a bunch of things that try to do abstraction layers like this. I believe the Java crypto APIs have a whole bunch of fast paths and slow paths depending on what kind of private key you’ve got. But now that we own X.509 parsing top to bottom, what happens when you try to sign a new certificate is that we use the public sign API. And any private key you’ve got, whether it’s one of the ones we provide, or you have a third-party implementation of our private key APIs that, I don’t know, talks to AWS KMS for example, or GCP, or any cloud provider’s key management, now that just works.

So, like, we have a much cleaner compatibility surface, we have a much more coherent story for things like third-party keys. Like, I don’t have anything bad to say about the migration besides the initial stumbling with the kind of pain of, like, people having to adopt Rust. Paul, I don’t know if you have a different reflection on this.

Paul: Yeah, I mean, I think I generally agree. Like, past the initial teething, like, one of the components of that entire project is Alex and I decided that this was worth the breakage budget. We knew our position in the ecosystem was important. We knew that it was going to cause us pain as well as some user pain. But we wanted to both manage our long-term pain as maintainers and also drive down the pain for the adopters as quickly as possible. So we were able to work across the ecosystem, basically blaze the trail such that future Rust Python projects effectively have the ability to deploy with no fear, where we, five years ago, obviously had a lot of work to do. On the actual Rust development side, I would say that the only piece of pain that we’ve really experienced... I don’t even mean MSRV, although MSRV has its own... that’s minimum supported Rust version, for folks who are not deep in the Rust ecosystem; there was some work there we had to do.

But like the only real thing was that in our CI Rust compilation is slower than what we had before. And so we ended up spending a lot of time and energy looking at what it meant to cache intermediate artifacts and make sure that caches don’t do bad things. Because we had a few incidents where our caches were pretty bad, but once we got past that, we were in really good shape. And like, I mean, even right now, there’s, there’s a current feature Alex and I are working on where we were unhappy with the Python APIs that would allow us to express what we needed. And so we’re likely to rewrite this piece in Rust simply so we can have the visibility control that we want.

Deirdre: Yeah.

Alex: So, like, I don’t know. If I had to register a complaint, my number one complaint might be that the Rust coverage support is not quite as stable as Python’s coverage.py. This is the level of complaint we have, you know, after about five years of maintaining this Rust code.

Deirdre: So you ate the pain of supporting Rust in your extremely widely supported... in terms of platforms, a diversity of platforms where it is used and it needs to be supported. Did you have to drop any platform that pyca/cryptography was supported on in order to ship the Rust stuff?

Paul: “We dropped no platforms we officially supported” is a good way to say that. One of the tricky bits of a migration like this... and, I’m sorry, Alex, I’ll let you go.

Alex: Right, we’re going to say the same thing.

Paul: But yeah, one of the tricky bits about having a project that’s just C and Python is that, like, Python is not a compiled language, and for C there’s a compiler that exists for every platform under the sun, including things that are weird, like 31-bit architectures.

Deirdre: Yeah.

Paul: And so, implicitly, when you ship software like this, you end up with consumers at some level who maybe only want to compile it to prove they can, but there’s a set of folks who are like, I was able to compile this, and therefore it should be supported.

Deirdre: Yeah.

Paul: And so one of the things that did come out of our Rust migration was a much more clear and obvious statement of: we support architectures that have enough support that LLVM has been ported to them. And one of the perhaps most prominent examples, and Alex and I might be overly patting ourselves on the back for it, but it really feels like we might be responsible, is that IBM recently, I guess now it’s about a year ago, ported Rust to AIX, and one of their headline messages in the blog post where they announced it was that Python cryptography will work.

Deirdre: Oh, that’s pretty nice.

Alex: And like this, this is a very popular.

Deirdre: It’s not Nice, I know, but like the fact that they like, they are that good.

Alex: A positive-sum interaction is the way I would say it. IBM gets better support for whatever customers AIX has, and we can point AIX users who show up at our issue tracker at IBM; like, they maintain that stuff. And I think this is a good message to projects: when you get requests for weirdo, particularly commercial, operating systems or architectures, push back and try to make people go to the company they have a support contract with, and not pawn it off on you, you know, if you’re not interested in maintaining that.

David: Let’s, let’s talk about that aspect a little more before we get back to broader points about OpenSSL. Both of you, Alex and Paul, have been doing, I say in quotes, “open source” for, you know, 13 years here, maybe more than that. But this specific project, it’s very widely used, and, well, I don’t know Paul as well, but Alex seems like a fairly emotionally stable person. And you see a lot of discourse around open source of like, oh, there’s all these people freeloading off me, I don’t have enough time, I hate this project now. But at the same time, I kind of look over at you guys and you’ve been able to keep a lot of support for a lot of platforms.

David: You’ve been able to get Intel to do, or excuse me, whoever, to do AIX, and you’re still chugging along. And to some extent, the Rust thing, you chose to do that in Rust in part because of your personal preferences around memory safety, which I agree with, but you could have also just been like, well, we’ll control our own parser by writing C code that doesn’t suck. So you’re able to kind of push your personal opinions into the project and kind of have fun without dying. What’s your approach to this that lets you both keep doing this, but also doesn’t necessarily result in either the drama or burnout that you hear a lot about in open source? And how do you feel about, like, are you being funded sufficiently for this? Or how does money fit into this, if at all?

Alex: So very early on in the project’s history, Paul’s and my employer gave us both time to work on it. That’s close to 10 years ago at this point for me; maybe it’s more than 10 years ago. So for the last 10 years at least, it’s been, you know, a labor of my personal time. As for how we think about it, I’m not sure we have a documented philosophy of this or anything, but we would like working on this project to be sustainable and enjoyable and something that, you know, advances things we care about in the world. Right? We think cryptography, the ability to build secure systems, is important; this is a positive contribution. If we can make that easier in the Python ecosystem, we can advance memory safety.

That is a good thing for the world. And so part of what you hear when you hear us talking about the importance of really robust testing is that, for a thing you work on in your spare time, it’s really, really valuable for the way things come to you to be predictable and not emergencies. For example, it is a thousand times preferable to spend some time, you know, getting a PR to green, because I control when I’m working on that, than to have a vulnerability reported to us. If you’re a company, you talk about this in terms like shift left and developer productivity, but for a volunteer project, what I’m saying is it’s really good to not have a vulnerability get reported. When I’m busy with life, I can take a week or two off this project because it is stable, and that’s really good.

Deirdre: And the stuff that you’re working on, the time is not just trying to debug a flaky CI, or wasting time just trying to make something functional and workable, as opposed to, hey, here’s a new feature that people want, I need to go ship it, or a new primitive, or making something better.

Alex: I mean, we put a lot of time into CI. Like, you know, I don’t know if it’s literally the majority of our time, but it’s really plausible that it’s the majority of our time. But it’s almost overwhelmingly proactive time.

Deirdre: Exactly.

Alex: Improving it so that, when there’s a feature we’re excited about, we’re just working on that feature. Yeah, that’s the thing you can schedule. And I mean, David, to your question about funding: the way I would say it, particularly for the last 10 years, is that I think working on this project has been incredibly positive-sum, and I think that’s a concept that doesn’t get its due these days, just in the sense that we think we’ve provided a thing that’s valuable to many companies. And even though it is the case that at least I have not been paid to work on this in quite a while, I’m very confident I’ve gotten lots of opportunities, whether it’s to travel and give talks or meet people or just professional opportunities, because I work on this. That, for me, is an exchange of my time for remuneration that really seems fair and very positive-sum. So, I don’t know, I don’t have any complaints. How are you feeling, Paul?

Paul: I mean, I think I found that, over the course of 13 years, again, yeah, I got time from my employer when we first started, but I have not gotten time since I left that employer. So it has been roughly 10 years since I was, quote, paid to work on this project. I think a lot of the sustainability for this is that Alex and I are a small team, right? There are two primary contributors to this project, which means that when we want to make major decisions, this is not a large-scale, long-term effort. It is a conversation between two people who largely think very similarly. That means we can make big decisions and execute against them without a lot of bureaucratic time or a lot of politicking to determine what’s good or bad. Now, that has its disadvantages too, but so far inside this project they’ve been largely advantages.

It is also the case that we have built the project such that when people come and ask for something unreasonable, we feel comfortable pushing back. We feel comfortable telling people they shouldn’t behave certain ways in an issue tracker when they behave in ways that are inappropriate. And we built the system so that it’s enjoyable to use. One of the unofficial rules of thumb Alex and I have used for a long, long time now is that any time the CI exceeds 10 minutes, we spend time on it, because it annoys us. And those types of things just make it continuously pleasurable to work on. It is also the case that it is a large bully pulpit, right? We have a lot of influence, and we are cognizant and respectful of the fact that we can use that bully pulpit. But we do want to be able to use it to pursue things we consider worthwhile in the ecosystem.

Alex: Yeah, maybe to really tie these two together, I think for Paul and me, a really big indicator...

Paul: What.

Alex: One of the things that, at least for me more than anything, led to us giving this talk and writing this statement about where OpenSSL was, is that we found that working with these newer OpenSSL APIs, looking at things like Argon2 or ML-KEM, was becoming really unpleasant. If you had a log of all of our chats, you would see the profanity really went up. We were experiencing a lot of frustration, and that’s a pretty marked difference from what came before; that was just not the emotional valence we had about adding new features before. And that was a big signal to us: hey, something has gone wrong, we need to take stock of what’s happening here.

Deirdre: And so that basically leads you to... the two of you, you’re generally on the same page about everything, and it’s not that difficult for you to finally declare, and come to the conclusion, that you basically have to move away from OpenSSL, for at least some things, for Python cryptography.

Paul: Yeah. So the core thesis of the statement we issued is that OpenSSL as a project has slowly diverged from paths that we find aesthetically appealing, technically acceptable, etc., to the point where we are now actively seeking mechanisms by which we can end our dependence.

Alex: Wow.

Paul: Now, that is a difficult thing to do. There are a variety of reasons why, like compliance reasons. We have downstream consumers that care deeply about the support of OpenSSL. We have feature gaps based on what the different forks support. There’s a bunch of things that make this challenging. But while OpenSSL continues on its current trajectory, and to be clear, it is possible for them to course correct, but it is a difficult thing given that we’ve spent years advocating for it, if they fail to course correct, if they fail to provide material improvement on the axes that we’ve defined, then we will be trying to, at the very least, remove OpenSSL as our default wheel configuration, and potentially, based on what Alex and I consider sustainable for ourselves in the long term, remove OpenSSL entirely.

Deirdre: Do you have any other wheel configurations besides OpenSSL?

Alex: Right now we support building against LibreSSL, BoringSSL, or AWS-LC, but you have to bring your own. We don’t ship pre-built wheels for those.

Deirdre: But it’s not difficult for y’all to just sort of start experimenting with building those in, because you already have the work done to link to them, to hook them in.

Paul: So we’re perfectly capable of doing it right now. The challenge is that there’s no concept of wheel variants that would say, oh, I want cryptography, but with a different backend. And even if there was that concept, and there are actually some PEPs that might make that possible, even if that concept did exist, I think that’s not a thing I can expect a consumer to understand the consequences of. Alex and I believe pretty deeply in having this library be the sort of thing you drop in and it just works, and it has secure defaults. And so I think what we want to do is get to a point where we would just swap the default. Right.

Thomas: So everyone is just like, to a first approximation, everyone’s just going to use the default. And if the default has feature gaps, then you’re randomly going to blow things up for people.

Paul: Exactly, right, yeah. So, for example, the current state of the world is that we support and ship Argon2id and scrypt, both of which are APIs that live natively inside of OpenSSL. Not all the forks support either of those, and that meant if we shipped a wheel built against those forks, we would lose support for those algorithms immediately.
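
A minimal sketch of what that surface looks like on the Python side; these are exactly the APIs whose availability depends on the linked OpenSSL providing the underlying primitive. The scrypt call below is the long-standing documented interface, and, as I understand recent releases, Argon2id is exposed alongside it in the kdf package.

    import os
    from cryptography.hazmat.primitives.kdf.scrypt import Scrypt
    # Argon2id lives in cryptography.hazmat.primitives.kdf.argon2 in recent
    # releases and is likewise backed by OpenSSL's native implementation,
    # which, per the discussion, not every fork provides.

    salt = os.urandom(16)
    kdf = Scrypt(salt=salt, length=32, n=2**14, r=8, p=1)
    key = kdf.derive(b"correct horse battery staple")

    # Verification constructs a fresh KDF instance with the same parameters.
    Scrypt(salt=salt, length=32, n=2**14, r=8, p=1).verify(
        b"correct horse battery staple", key
    )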

Thomas: But like, it’s more likely that you’ll... like, this is kind of a normal engineering problem, right? Like a flag-day problem kind of deal, right? It’s more likely that you’ll resolve that problem than that OpenSSL 3 is going to get rid of OSSL_PARAM.

Paul: I won’t try and speculate, but... you might be right.

Alex: Yeah. I’m not a betting person, but like, it seems plausible.

Deirdre: It seems plausible.

Alex: Yeah.

Thomas: It seems like one of us could just sign up for Polymarket and set this up.

Deirdre: That sounds like a long, long bet.

David: I mean, it seems like all you have to do is violate your first rule, which you also opened by saying you’ve already repeatedly violated, which is to just write a little bit of your own cryptography.

Paul: Then I’ve got to get involved in FIPS more heavily, and that’s already too much of my life.

Alex: Yeah. Like, you end up with things like, if you actually go inside the FIPS...

Paul: Not yet. Hopefully sometime soon.

Alex: If and when NIST finishes their... I lost track of the status of their key derivation function work. But in theory, maybe someday. But another example: Ed448 is, I think, not supported by any of the forks. It is in OpenSSL.

Deirdre: Do you need it?

Alex: Do we need it?

Paul: Whether or not it was wise, we did expose it at one point.

Deirdre: Oh Lord.

Alex: Yeah, and this is one of those things where we could make, and in fact have made, the list of what are the things that we ship that we think are not stupid. There’s some stuff we expose that we don’t care about, like, I don’t know, sect-whatever; there are weird elliptic curves that, if we dropped them, we don’t think many people would care.

David: Right.

Alex: We would be okay with it. But there are things like Ed448, or different KDFs, or, I think, various AEAD modes. We’ve got a list somewhere where, for any one of them, you might look at it and say, ah, how likely do we think it is people use this? We can make an assessment of how much breakage it would cost, you know, what’s the likelihood, are there standards that require this, if I search GitHub does it seem like there are projects that use it? But for any of the given forks we’re looking at a decently sized list of these, I don’t know, 10 or so algorithms, and that’s, I don’t know, 10 times the breakage budget of the mean of them. So it’s potentially non-trivial. We want to spend our breakage budget well, is I think the way I would say it. We’re prepared to break things when we think the ecosystem, our users, get substantial benefits, but we have a lot more ability to make changes like avoiding OpenSSL if we reduce the level of breakage that we incur in doing so.
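
To make the breakage-budget question concrete, Ed448 is exactly this kind of exposed-but-niche surface. The snippet below is a minimal sketch using the existing pyca/cryptography API; whether it keeps working under a non-OpenSSL backend is precisely the trade-off being weighed.

    from cryptography.hazmat.primitives.asymmetric.ed448 import Ed448PrivateKey

    # Works today against OpenSSL; per the discussion, the forks under
    # consideration don't provide Ed448, so keeping this API is a
    # breakage-budget question in any backend switch.
    private_key = Ed448PrivateKey.generate()
    signature = private_key.sign(b"message to sign")
    private_key.public_key().verify(signature, b"message to sign")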

Deirdre: That does sound like you’re incentivized to implement some of this stuff in Rust to paper over the migration away from OpenSSL to another default backend. That really does sound like what you’re...

Paul: It’s not impossible, but there are challenges there too, because depending upon what we need to reimplement, we don’t want our downstream consumers to be silently surprised when the thing they thought was FIPS no longer can be. Similarly, there are various dependency requirements in the Rust cinematic universe of cryptography that are somewhat challenging for us in some scenarios. And so one of the things we mentioned in our statement was Graviola, a pure Rust cryptography library that is interesting to us in the abstract. It’s nowhere near where we would need it right now; we’ve spoken briefly with the author, and that’s not currently their focus for its adoption, but it’s something that we’re watching with interest as maybe a long-term solution as we go forward. And when I say long term, I mean on the 5, 10, 15 year time horizon. Alex and I are genuinely thinking in the long term here, which is also why we spoke up when we did: we gave it several years, but we also know that it will take us many years to migrate off. We want people to be aware of the problems rather than springing this on them at the last second.

Alex: Yeah, yeah. I mean, just in that space of compliance questions and libraries, I am very hopeful that Filippo Valsorda’s work on FIPS support for Go is going to prove to be a model that is very valuable. At least in the open source cryptography world, OpenSSL has been the most default choice for FIPS, maybe even more of a default choice for FIPS than it is just as open source cryptography in general, because they had, you know, done the work to build what was called, back then, the FIPS canister. And I think Filippo has really demonstrated that it is possible, if you are a diligent and knowledgeable cryptography maintainer, to take on this project; it is not, you know, impossible.

It’s not all-consuming. It doesn’t even require changing your library in that disruptive of a way. Go’s cryptography modules are still kind of a model of clarity.

Deirdre: Yeah.

Alex: So I am hopeful that serves as a model, so that organizations that have felt like they only have one choice will look and be like, oh, it is actually possible. There are other directions.

Deirdre: Yeah, I really hope the way that he was able to get that working for Go will be a model for, like, a future possible Rust alternative. Rust does not have a standard library the way that Go has a standard library that includes all the cryptography, including the TLS, that comes with Go. But I could totally see a library project in Rust trying to be validated as a FIPS module the same way the Go one did. It would just be its own project, as opposed to a piece of the standard library of a language, or something like that.

David: So I’ve always said, instead of rolling your own crypto, you should build it for someone else and then charge them for the FIPS certificate.

Deirdre: Yeah.

Alex: Cool.

Deirdre: Good luck and Godspeed. You’ve come 10 years, what’s another 10 to actually get away from the fundamental underlying foundation of your project?

Paul: I’m sure Alex and I will be complaining about this in our retirement next to each other in the old folks home.

Alex: Looking forward to it.

Deirdre: Aw that’s sweet.

Thomas: Thank you guys very much for introducing me to OSSL_PARAM.

Alex: We’re glad to help and thank you guys for having us.

Deirdre: Absolutely. Paul, Alex, thank you so much.