Quantum Willow with John Schanck and Samuel Jacques

THE QUANTUM COMPUTERS ARE COMING…right? We got Samuel Jacques and John Schanck at short notice to answer that question, plus a bunch of other questions about error correcting codes, logical qubits, T-gates, and more about Google's new quantum computer Willow.

Links:


This rough transcript has not been edited and may have errors.

Deirdre: Hello and welcome to Security Cryptography Whatever. I'm Deirdre, and we have two special guests with us today and no other co-hosts. We have Sam Jacques. How are you, Sam?

Sam: I’m great. How are you?

Deirdre: Good. A little cold, but good. And we also have John Schanck. Hi, John.

John: Hey. And Happy Friday the 13th.

Deirdre: I know.

John: Appropriate topic.

Deirdre: I know. The scary quantum computers are becoming manifest. We have two of mere handfuls of quantum cryptanalysts in the world to come talk about Google's new quantum computing chip that they codenamed Willow. Sam is a researcher and associate professor at the University of Waterloo. I think, John, you did your PhD in quantum cryptanalysis at Waterloo. And I didn't know that until I literally looked up your thesis.

John: I did, yeah. Sam and I were there as graduate students at the same time.

Deirdre: That's great. So the headline is that there's a new quantum computing chip. Google also did their Sycamore chip a couple of years ago, and it was just sort of like, ah, we did the random circuit sampling experiment, and this is a thing that, you know, your classical computer theoretically can't quite do, and we did it faster. And then a couple of days later people simulated it on another data center. I think this seems to be a different kettle of fish. The headline seems to be they got quantum error correction working. Who wants to start with, or, quote, working well, as opposed to just sort of, it sort of works, but not well enough.

Deirdre: Who wants to bite on what the hell quantum error correction is?

John: I can. Sam, do you want me to go? Yeah. Okay. This is a big topic. We could start with error correction. I think that might help. And then we’ll go to quantum.

Deirdre: Cool.

John: Yeah. Okay. So in error correction, if you have, you know, a bit that you want to encode, the simplest way you might do that is by writing it down multiple times, the repetition code. So you want to write down a zero, get your paper out, write down zero, zero, zero. If somebody comes along and erases one of those and replaces it with a one, you can look at it again, majority vote, go back to zero. And the great thing about the repetition code is, as you add more, let's call them physical bits, the things you actually write down. If you're imagining that those bit flips happen independently with some probability P, as you start adding more bits, the chance that enough of them flip to change your majority result goes down exponentially.

John: It would go like roughly P to the D over two. Right. And so yeah, what they've shown here is, with a different kind of error correcting code, well, they did look at repetition codes, but they also looked at the surface code for quantum states. They showed that as you scale the distance, kind of the analog of the number of physical things that you're writing down, as you scale that out, you get exponential error suppression.
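To make the repetition code picture concrete, here is a minimal sketch (an illustration, not code from the paper): encode a bit as d copies, flip each copy independently with probability p, decode by majority vote, and watch the logical error rate fall roughly like p to the d over 2 as the distance grows.

```python
import random

def encode(bit, d):
    """Repetition code: write the bit down d times."""
    return [bit] * d

def noisy_channel(codeword, p):
    """Flip each physical bit independently with probability p."""
    return [b ^ (random.random() < p) for b in codeword]

def decode(codeword):
    """Majority vote recovers the logical bit if fewer than half the copies flipped."""
    return int(sum(codeword) > len(codeword) / 2)

def logical_error_rate(d, p, trials=100_000):
    errors = 0
    for _ in range(trials):
        received = noisy_channel(encode(0, d), p)
        errors += decode(received) != 0
    return errors / trials

if __name__ == "__main__":
    p = 0.05
    for d in (3, 5, 7, 9):
        # A logical failure needs about d/2 simultaneous flips, so the rate
        # drops roughly like p**(d/2) as the distance grows.
        print(d, logical_error_rate(d, p))
```

Each step up in distance cuts the logical failure rate by roughly another constant factor, which is the exponential suppression being described.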

Deirdre: Okay.

John: And that of the logical state.

Deirdre: And that is significant because anyone who's been paying attention to quantum computation, especially people who care about it in the context of cryptography and security, for all the stuff that we already have deployed, this is like the major threat vector for all of our asymmetric cryptography. And it's always like, well, we don't have enough physical qubits yet and we don't have enough qubits, period, and theoretically we'll get enough logical qubits to run these attack algorithms with better error correction. And it seems like everybody working on quantum computation has been trying to improve error correction so that it actually pays off: if you actually add a bunch of physical qubits into a code, like the surface codes or the repetition codes, it actually adds up to a robust, useful, long-lived-enough logical qubit that you can use to actually do quantum computation. What did they do? Did they just nerd harder and get better at doing qubits?

Sam: Yeah, that seems to be it. I mean, I don't know about John, but the level of engineering of actually making these physical things better is completely outside of my expertise. It's sort of like, once you get them good enough, then I'm like, oh, there's this nice theory of error correcting codes.

John: Okay, yeah, I think we can go into a little more detail. So they do seem to have, what is it, like a factor 4 improvement in the qubits. The main metrics here are for the physical qubits themselves, there's like T1 and T2 times, over, is it, the last machine? Sam, I'm kind of hoping that you have the numbers in your head. But so they've got like a factor four improvement. Yeah, that helps a lot. Right, because you're starting with a better physical qubit.

John: It's going to be easier to get a nice logical qubit out of that. Another thing I'm seeing a lot of in this paper is a lot of investment in the actual classical error correcting algorithms. They've got two that they talk about. There's the sparse blossom algorithm and then another one that, I don't know, Sam, I didn't really read too much about it, but it's using some machine learning tricks.

Sam: Yeah. And I mean the surface code is kind of a funny thing, because compared to your majority vote, that's an easy algorithm to implement. You literally take the majority. Whereas when you look at the surface code and you try to say, well, what errors happened, that's actually a non-trivial computation to solve. And so they've actually put a lot of work into that side of it, saying what kind of classical algorithms can we do. And the performance of the code is effectively given then by this classical co-processing. So they actually have.

Sam: And this is something that there’s been a lot of work on and I think they’ve got a nice parallelizable algorithm that can run very quickly. I think they had a neural network based thing and it kind of makes sense to sort of train it. Each of the qubits in this device is going to be unique and have unique errors. And so you’d want to train your error correction to say, well this qubit is more likely to do this kind of error. So let’s assume it did that rather than a different error.
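To give a flavor of why that per-qubit calibration helps the decoder, here is a toy sketch (just an illustration, not Google's actual sparse blossom or neural-network decoder): if each data bit has its own known flip probability, the decoder can pick the error pattern that is most likely for that particular hardware, not just the smallest one.

```python
from itertools import product

# Toy repetition code: 3 data bits, 2 parity checks on neighbouring pairs.
CHECKS = [(0, 1), (1, 2)]

def syndrome(error):
    """Which parity checks fire for a given error pattern."""
    return tuple(error[i] ^ error[j] for i, j in CHECKS)

def decode(observed_syndrome, flip_probs):
    """Most likely error pattern consistent with the syndrome,
    weighting each candidate by per-qubit flip probabilities."""
    best, best_likelihood = None, -1.0
    for error in product((0, 1), repeat=3):
        if syndrome(error) != observed_syndrome:
            continue
        likelihood = 1.0
        for bit, p in zip(error, flip_probs):
            likelihood *= p if bit else (1 - p)
        if likelihood > best_likelihood:
            best, best_likelihood = error, likelihood
    return best

# Qubit 2 is a "special snowflake" that flips far more often than the others,
# so a syndrome that could be blamed on several qubits gets blamed on qubit 2.
print(decode((0, 1), flip_probs=[0.01, 0.01, 0.20]))  # -> (0, 0, 1)
```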

Deirdre: That's fascinating. That's something that I wouldn't have thought of, because, you know, you still have like a brain model of classical transistors. Even if they're physical qubits, they're not really transistors. But there is some band of kind of normative expected behavior for a specific class of a thing that stores a bit, and it's not a bit, it's a qubit. But the fact that you're basically saying, no, these physical qubits have enough variance that we're going to train our classical decoder, our error decoder, on all of them individually. They're such special little snowflakes that we have to pay attention to them all that closely. I mean, this is all algorithmic and all of that, but basically saying there's such variance still in these things, and there's only like 101 physical qubits on this chip. So it makes sense and you can do that.

Deirdre: But that's something that I guess I wouldn't even have thought of yet. Like, we're so far away from anything close to productionizing this sort of chip making process, this physical qubit making process, that this is a thing you do have to do. So how does the threshold for error correction work, I guess, what is the threshold? Because in the paper and in all these blog posts, they basically say we have crossed this threshold of error correction, where when you scale up, I guess, the diameter of the surface code, it actually drives down the error rate of the logical qubit that you are getting out of this mesh of physical qubits. So, John.

John: Yeah. The basic idea there is that as you scale out the distance of the code, you're getting an improvement in the logical error rate, and the threshold is saying, if your physical qubits are good enough, then all you have to do is scale out the distance in order to get a state that could persist for an arbitrary amount of time. Yeah. And concretely, what they've done, I mean, their distance 7 code, I always picture it as a 7 by 7 block of qubits. It's a little more complicated than that. There's actually like 2 times 49 plus 1, whatever that is, in there.

John: It's kind of like two laminated grids of qubits. But so that one, it has qubit lifetimes that are like 200 microseconds, compared to the 60 to 80, somewhere in that range, microseconds for the physical qubits. So that's an enormous improvement.

Deirdre: And so basically the threshold that we’re talking about is that the physical qubits are good enough on their own in terms of the rate of errors that they experience, that we can put them together in this diameter 5 or diameter 7 surface code, this array of approximately 7 by 7 qubits, physical qubits, such that we finally actually achieve better logical qubit error rates than any of the physical qubits on their own. And that’s kind of the threshold that we’ve been trying to achieve, or we quantum computer scientists have been trying to achieve.

John: That is a milestone. Yes. The threshold is a particular error rate for the physical qubits.
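A rough way to write down what crossing the threshold means, using a common rule-of-thumb scaling from the surface code literature rather than numbers from the Willow paper, is that the logical error rate per cycle of a distance-d code behaves approximately like

```latex
\varepsilon_d \;\approx\; A \left(\frac{p}{p_{\mathrm{th}}}\right)^{\lfloor (d+1)/2 \rfloor},
\qquad
\Lambda \;\equiv\; \frac{\varepsilon_d}{\varepsilon_{d+2}} \;\approx\; \frac{p_{\mathrm{th}}}{p}
```

So once the physical error rate p dips below the threshold p_th, every step up in distance multiplies the suppression by the same factor Λ > 1, which is the exponential suppression being discussed.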

Deirdre: Yeah.

Sam: Yes.

Deirdre: Okay. I hand waved at this earlier, but basically this is the way we think that large, reliable quantum computers will become manifest such that we can actually do useful computation with them. Right. That this is kind of like now that we have physical qubits with decent error rates such that we can create logical qubits with these surface codes or other code error correcting codes. Then we can just tie them all together with a little bit of classical operational sidecar control flow or something. And then in a couple of years, not really, we’ll have quantum computers that do really useful quantum computation. But this is kind of the roadmap we were thinking would go. But now we’re finally there, right?

Sam: Yeah, yeah.

John: This is not a surprise, I don't think. This is definitely something that's been on the roadmap, and we knew it would come around this time; we've known that for a long time. There are some things that are missing from this if we want to get to doing computation. Okay. So this is really building a quantum memory. There is more involved in doing computation than just having a memory.

John: Right now, I believe they're doing their decoding of the measurements you get out of the error correcting code, the kind of parity bits that you check, out of the fridge, and they're doing that offline. They're trying to do it fast enough. In the paper they do talk about this, that they do it fast enough that in principle they could feed those results back in to take action based on that, which you only need to do if you're going to do computation. You don't need that for memory. So they are succeeding in building a memory. But in order to start doing computation, you are going to have to feed the results back in and you're going to have to perform physical gates based on those results.

John: And maybe Sam knows a lot more about that than I do.

Deirdre: So, Sam, that’s one of the questions I have, which is basically this is just like having logical qubits with decent error rates. I think it’s like 10 to the negative 3. I think is what they got. You can correct me, I think.

Sam: Yeah, got that.

Deirdre: The goal they would like to have is 10 to the negative 6, for something that's considered almost indefinite or whatever. But anyway, that's not even including the errors introduced by applying quantum gates and twiddling with the states of these logical qubits. Like, how does that interfere with actually accomplishing quantum computation, by applying quantum circuits, depth and gates and all that stuff, to these nice shiny new reliable logical qubits?

Sam: Yeah, I mean, the interesting thing about the surface code is that when you do gates on it, more or less, you're doing almost the same operations you do to just hold the memory. Okay. So it's kind of like you shut off one part of the surface code at some point and then you turn it back on, and you do this in a very careful way, and this actually ends up kind of shuffling the data around, and that kind of makes the computation work. So in that sense, really, once you've got this grid that can be a surface code if you make it big enough, just the pattern of surface code you do on it does the gates. Now it gets a little bit more complicated, and you do need this feedback eventually, and then at some point there are actual extra gates you would have to put in.

Sam: So there's the Hadamard gates, which are probably more straightforward, I think, for the surface code, not demonstrated here. But the real tricky one is the so-called T gate. And this is something where you need to build this whole extra factory. So you need this giant footprint. This is something that you do physically. But again, your physical qubits, you're not expecting to be very high quality. So if you want to make this T gate at a high enough quality that you can actually use it in your computation, and use the billion of them that you would need to do something like factoring, you need to make it better. And so there's this very complicated distillation process where you have, again, sort of this redundancy, like the error correction in general.

Sam: You do a lot of these little T gates and you sort of mix them together in a way that kind of, you know, cleans the error off and makes one better one. And you chain this together. And so that would be a very nice milestone to see. And that is something that we are very far from, because even the smallest version of this factory is already quite a lot of logical qubits, or sort of the same footprint as quite a lot of logical qubits.

Deirdre: Like how many? I think this chip, I forget, they have like 101 physical qubits, and they need seven by seven to make one logical qubit. So that's 49, modulo one or two. So like maybe they have two logical qubits or something?

Sam: No, they’ve got. No, they’ve got one logical qubit.

John: And I think they would actually argue that they don’t have any.

Sam: Well, yeah, I'm happy to fly to them and say it's a logical qubit.

Deirdre: Why would they argue they don’t have it?

John: Well, you don’t really have a logical qubit until you can do interesting quantum operations on it. And in particular you’re going to need two before you can do anything interesting. If you only have one logical qubit, you are doing classical computation with one bit.

Deirdre: You’re just kind of like oscillating in place and saying, oh, look how stable I am.

John: Yeah, so I think, and I've seen there's some quote, maybe in the paper or one of the news articles, they want to get to like a logical error rate of like 10 to the minus 6. And they want two logical qubits in there that have that error rate before they claim a logical qubit.

Deirdre: So if they just took two of these chips, you know, you clone all the, you know, the superconductivity, the refrigeration, and all the classical, you know, error decoding controllers and all this sort of stuff. Can I just, like, hook them up? Can I just, like, USB-C between these two quantum chips? Like, I have two now, right?

Sam: No.

John: If they’re not directly in contact with each other, I mean, so there’s some operations that need to be performed between neighboring pairs of physical qubits. And if you have these patches of surface code spread out in different refrigerators on other sides of the room, you’re going to need to use some quantum communication in order to inject those gates over that distance. So that’s quite a lot of additional work. We have seen people doing quantum communication between fridges with superconducting qubits. That’s something that people are working on. And it’s one way to scale. When your chip gets big enough that it won’t fit in a fridge anymore. Probably the more direct way to scale is build bigger fridges for a long time first.

Deirdre: But like how big are these fridges? Like they’re, they’re big. They’re bigger than the one in my kitchen. But like, how big is it? Like a container, Like a container ship size? Container size or.

John: No, no, the ones that we had at the quantum computing institute were not much bigger than your refrigerator. Maybe they were a little taller. These ones that Google's using aren't going to be substantially bigger. I don't know what the, what the, you know.

Deirdre: Sure.

John: I don’t know the volume of it.

Deirdre: Yeah, just a rough.

John: Oh, I was just going to say, you have to imagine that the chip is, you know, a few square centimeters, and you have a lot of machinery there to cool a volume that's a little bit bigger than that, and you need to run wires into it, and all that. But the fridge, at least the part that's cold, is very small. Actually it's a pretty small volume. But then there's a huge apparatus around it in order to do the, what's it called, the evaporative cooling of helium, I forget, in order to make something that cold. And I mean, it's cold, it's like, was it millikelvin? Millikelvin, 20 millikelvin.

John: We’re talking, like, if you were in deep space so that your eye could not see stars. Wow. It’s colder than that.

Deirdre: Wow.

John: Right? That’s. That’s cool.

Deirdre: We can get back to, like, who's gonna be using these. I'm not gonna have a quantum computer in my house anytime soon, even if I could afford it, because, at least, all the things that Google's come out with have been superconducting physical qubits. There's other approaches to the technology, but it just seems that a super cold millikelvin stable environment is required to even get to these results anyway. But maybe I'll have a very cold server rack in my house in 2050 or something like that.

John: Maybe. I mean, some of us are still skeptical that these things are useful even if you can build them. I mean, for surveillance. Yes, that’s clear. I don’t know how many numbers you’re trying to factor in your kitchen. I don’t particularly want to. Seems like a big liability, honestly.

Deirdre: Yeah. So can we talk a little bit, if anyone wants to talk a little bit more about the classical decoders? Because they have a couple of algorithms. But, like, this is one of those things that I find interesting, is that, yes, we've got these quantum physical qubits, and we're creating these logical qubits from those, but there's a lot of classical computing going on to make all of this work. How does that even work? Like, why are we allowed, quote unquote, to do a bunch of stuff classically, but we're still getting, quote, the benefit of quantum computation, I guess.

John: Should we talk about the repetition code to start? Why not? I think this is something that's easier to understand. We might lose people on quantum error correction, but they do an experiment here in this paper using the repetition code. And fundamentally, you're doing the same operations and you're doing computations that are similar, at least. So earlier we were talking about this repetition code. You write down a few physical bits. Like, if you want to encode zero, you write down zero, zero, zero, or five zeros or seven zeros, whatever. If you wanted to correct it, you could go and look at the actual values that are there. Like, if an error is introduced somewhere in your string of zeros and it turns into a 1, you could go and fix that.

John: But you could also do it indirectly. You could do it without looking at that string of zeros. And that’s really important for quantum computation because we can’t go and look at the state without eliminating the nice quantum correlations that we have that are doing the interesting processing for us. Right. So we want to indirectly measure the state of the system to kind of encourage that system to go into a nice state that’s kind of energetically separated from other logical states. Meaning you would have to pump in a fair amount of energy into the system. It would have to come from somewhere in order to transition from one to the other. So to do this on top of your string of zeros, kind of in a pyramid shape, just one level, just imagine that you have some bits that are going to check the parity of the ones right below them.

John: So just imagine you have three zeros, and then right on top of those are two more bits kind of sitting in between the pairs of zeros. And what you're going to do is periodically measure the parity of the two bits below each one of those parity bits and get that value out. So you'll actually look at the parity, but the computation that you do will not. It has to evolve unitarily in time. It can't actually look at the data bits there, but we're going to somehow get the parity of those two bits that are below our measurement qubit, our measurement bit. If you do that, those two parity bits are enough to tell you where a single bit flip error is. You can work out the different combinations and see how you can correct any one of those.
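Here is a small sketch of that indirect measurement idea in classical terms (just the decoding logic, not the actual quantum circuit): with three data bits and two parity checks, the pair of parity outcomes pinpoints any single bit flip without ever reading the data bits themselves.

```python
# Three data bits, two parity bits sitting "between" neighbouring pairs.
# The decoder only ever sees the parity outcomes, never the data bits.

def measure_parities(data):
    """Parity of (data[0], data[1]) and of (data[1], data[2])."""
    return (data[0] ^ data[1], data[1] ^ data[2])

# Each single-bit-flip error gives a unique syndrome.
SYNDROME_TO_ERROR = {
    (0, 0): None,  # no error detected
    (1, 0): 0,     # only the left check fires  -> data bit 0 flipped
    (1, 1): 1,     # both checks fire           -> middle bit flipped
    (0, 1): 2,     # only the right check fires -> data bit 2 flipped
}

def correct(data):
    flipped = SYNDROME_TO_ERROR[measure_parities(data)]
    if flipped is not None:
        data[flipped] ^= 1
    return data

print(correct([0, 1, 0]))  # middle bit flipped -> syndrome (1, 1) -> [0, 0, 0]
```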

John: Okay, so exactly what they do in this experiment, you can go and look. I don’t know if I can maybe find the figure at least to tell people where to find it. You can get the version from August. You know this paper did come out in August, right?

Deirdre: Yeah, it was funny. I went to Nature and it's like, ah, you have to pay $20 to read this article. I'm like, no, it's on arXiv.

John: Okay, so if people are curious, you can go and look in the supplementary materials at figure S7, and you can kind of see this zigzag shape. And this zigzag shape is exactly what I'm talking about with kind of the parity bits sitting on top of the data bits. So, yeah, exactly what you do is you're doing these parity measurements without really looking at the value of the data bits. And you pull those out and feed them to your classical decoder. So it's going to look at all those parity measurements and it's going to figure out where the errors are. And if you want, you could go and correct them. You actually don't have to.

John: You could just keep track of what the actual state of the system is in software. But you just have to periodically go and look at the parity bits and look at how they change over time. So that's exactly what they do in this paper. They've got their grid of qubits, and they pick some of them to be data bits and they pick some of them to be parity bits. And there's this kind of zigzag shape that traces out over the surface, and they encode some bit string in the data bits, and then they measure the parity over time and see that they can keep, I guess they encode 0 or 1, but they can keep it in that state for long periods of time. And this is honestly maybe the headline result for me, because a lot of people, especially in our field, they're like, well, how big of a number can I factor today? Like, what's the progress? And if you look at the change in how they've been able to run these repetition codes from 2001 to today, they go from being able to maintain a regular bit, not a qubit, just a regular bit, for 10 seconds in 2001 to five hours today. Enormous change. And fundamentally, what you're doing if you're stabilizing bits or stabilizing qubits is not that different. So to go from the repetition code to the surface code, you are doing not just parity measurements like we were talking about, you do a slightly more complicated circuit.

John: It involves not just pairs of qubits that are near each other, but four qubits. And, you know, the number of physical gates that you have to do is a small constant factor larger than what you have to do for the repetition code. But it's the same kinds of operations. There's no real fundamental difference, and if you can make the repetition code work, you can make the surface code work. With the repetition code, you can get higher distances on your grid of qubits, right? They're able to go to distance 29 there, and they're only able to go to distance 7 for the surface code. So you can get these enormous error suppressions, because they're able to go to those high distances. They don't actually see any bit flips in their five hour experiment. Wow.

John: So if you imagine that you're able to make a distance 27 surface code, it's going to be the exact same hardware, you just need more qubits. Right. But you could get similar error suppression and keep a qubit alive for five hours. So that's the upshot for me. But when I look at it I'm like, what metric has improved? And I need something fine grained enough to see the change over time. We can't look at factoring, that's ridiculous. We're not going to see any progress on factoring until it's way too late.

John: It's going to be like, in 2032 it's, oh, they've factored a 10 bit number, and then a couple of years later it's a thousand bit number, and you're like, okay, that's just a step function. I want to see continuous change over time. I want to see how these things are improving. And you have to look at some of the other experiments that they're doing, and there it's really clear that this is substantial progress.

Deirdre: Hell yeah. This makes me pivot over to Sam's chart, where, like, every time we have a new result in quantum computing, we're like, okay, where does this lie on Sam's chart, with qubits on the x axis and, oh gosh, what's on the Y axis? Just like, how much threat it is to bits of security?

Sam: Well, it's error rate. It's qubits versus the error rate.

Deirdre: I’m sorry, I’m sorry.

Sam: Yeah. And I mean, this paper kind of shows the limitations of this, because there's so many metrics you can use to think about a quantum computer. John's saying basically look for one where you can kind of have a nice fine grained view over time, and qubit count versus error rate is sort of a bit crude. And you can kind of see that when I tried to draw this chart and I put Google's result there, which I feel like is one of the best quantum computers that there is right now, probably the best, it's kind of in the interior of this sort of hull of current quantum computers. Like, it looks like other ones are doing better, because if you just kind of take the maximum error rate, it's not the best. And yeah, I think this is kind of the point of this, is to sort of illustrate, okay, we've got this little cluster over here at the bottom left, and, you know, factoring RSA is on the top right, so there's a big distance between them.

Sam: Right. I don't want it to be, oh, we've moved one millimeter. Therefore you kind of have to be a lot more careful and look at these kinds of consistent experiments and things. And you also run into this issue that if we all started listening to John, which we should, and say here's how these repetition codes have been improving over time, then every quantum startup is going to kind of over-engineer repetition codes. Right. One of the things in the field is that, I think since maybe five years ago, I feel like there's been a change in the somewhat public-ish discourse around quantum computing, where people have started saying, oh well, this many physical qubits, but what about logical qubits? Do we have any logical qubits? And now this has worked its way through the quantum computing companies. So Quantinuum had this announcement, oh, we have 50 logical qubits. And you say, well, what? And in a misleading technical sense, yes, they do.

Sam: They have an error correcting code that encodes 50 qubits into 52 physical qubits. But they’re very, very different than the kind of logical qubit, like completely different than the kind that Google made and completely different than what would actually allow us to factor anything. So yeah, bit of a rant on that.

Deirdre: But no, I love it, because, you know, if you don't know what you're looking at, you have to dive in deep. Because if you just look at the headline from X, Y and Z Co., they're like, yeah, we have 52 logical qubits, and it's like, yeah, well, they're shit, they're quote shit logical qubits, you can't do anything theoretically useful with them. Whereas this one is like, we have basically one logical qubit, and yeah, you can't actually do any quantum gates with it, but it's probably the best one we've ever seen, ever, because of all this other work into the, you know, the error correction and all this other stuff. So I guess, like, yes, trying to accurately reflect the words that go along with your updated chart does a lot better to explain, like, the salience of the results in this work. But throwing it in there, you're like, ah, well, we've got this very reliable logical qubit, but we still can only factor 35.

Deirdre: So like, meh, call me when we can factor a 1024 bit RSA modulus or whatever. And as John said, it's basically like we're probably going to be around where we're at until it's a major step function and we are factoring a 1024 bit RSA modulus. So it's paying attention to those metrics that are very kind of niche, and you have to know what the hell is going on under the hood to track it. Because tracking, oh, just call me when you can factor a 1024 bit RSA modulus, is like, yeah, it'll be too late by the time we call you. So deploy your post quantum cryptography now.

John: Absolutely. I mean, I think that this really is the key point. Like, we have to pick the right metrics to look at these things. I see a lot of people asking, like, will we see exponential growth in quantum computing, and it's really, by what metric? Because exponential growth, I mean, a lot of people don't understand compound interest. Exponential growth is not hard. You have an immature technology and you get a bunch of engineers in the room and you tell them, can you make a 10% improvement over this from last year? And you get exponential growth as long as there's enough runway there, enough interesting parts of the technology to improve upon. And in quantum computing, you have so many different new technologies where improvements in one of them actually lead to full system performance improvements. Like, improvements in the cooling technologies let you just be more wasteful. Put more of your classical error correction on your noisy classical chips that are spewing thermal radiation.

John: Rewire it, put it closer to the chip, because you can cool the thing better and.

Deirdre: That reduces latency and that makes the feedback even tighter.

John: Exactly. And you can talk about that for so many different aspects of the technology, like the physical manufacturing of the qubits. Getting better physical qubits is going to mean that you need smaller codes to run your algorithms. Designing better codes means you use fewer physical qubits in order to run your algorithms. When we start talking about actually doing computation, we're going to be doing compilers to try and lay things out, because we are talking about these 2D grids of things. You have to lay out the bits on the surface in order to minimize data movement costs and all that. That's another thing. We're just going to have this whole period of time where people can optimize compilers and get huge improvements in the cost to run algorithms.

John: And then at the same time, the cost of the algorithms themselves is falling. There's a new paper just this year on reducing the number of qubits that are required for factoring, down to n over 2 from 2n, a factor 4 improvement. Okay, I haven't seen the full system analysis of what that means for the number of physical qubits that you need. But there's a lot of avenues for improvement and a lot of people that are interested in it, a lot of money, and it's really hard to see how you would not get exponential growth, those little 10% improvements here and there that compound into exponential growth.
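To put a number on the compound-interest point (simple arithmetic, not a projection from the episode): a steady 10% yearly improvement compounds as

```latex
1.10^{10} \approx 2.6, \qquad 1.10^{25} \approx 10.8
```

So modest, sustained gains across cooling, fabrication, codes, and compilers multiply into an order of magnitude over a couple of decades, which is all exponential growth requires.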

Deirdre: Yeah. And kind of going back to the gates. We discussed a little bit how the progress on the surface codes and the repetition codes kind of implies that, because we can do that, we will also be able to get better at actually applying quantum gates. And there's always some error that's introduced when you're applying quantum gates, right? Does this give you any kind of intuition about, okay, we have these good error correcting codes, and does that affect how much error actually applying gates will introduce, or is it just sort of like, yeah, we can't say at this moment? And I'm thinking specifically about applications of Shor's algorithm or anything like that, which is like, all factoring. And if this is sort of like a big shrug, we have no idea, like, we could say that too.

John: I don’t really have intuition there, Sam. I’m also seeing a blank stare.

Sam: Yeah. I mean, again, except for a couple of special gates, just having the logical qubit kind of is the gate, all right? Well, the T gate is also something that we're not even going to see for a while, because even to do this whole distillation, there's kind of a minimum size to see anything. And that is another one of those things where it might be kind of a step function, where no one has ever demonstrated a distilled T gate in a surface code, and then all of a sudden, oh yeah, someone did. And then five years later we've distilled enough to do something interesting.

Deirdre: Can you say a little bit more about that? Is the blocker just having a decent fidelity logical qubit, which is kind of blocked on error correcting codes, or is there something interesting about T gates that makes it difficult?

Sam: Yeah. One of the interesting things about these codes, and it's kind of different from classical computing, is that if I use an error correcting code to send data over WiFi or something, the router gets it and then decodes it and then operates on unencoded data. Your quantum computer can't do that, because as soon as it decodes it, like, if you look at these physical qubits, you know, their coherence times are measured in, I think, milliseconds or microseconds.

John: Microseconds.

Sam: You really can’t do any computation. You have to keep it encoded all the time. And you’ve done all this work to make this code, to make the quantum state kind of stable so nothing can change it. And then you have to go and say, well, I actually want to change it myself sometimes. And so it becomes that much harder to do that and. Totally lost my train of thought there.

John: Have we talked about what the T gates are doing and why you need them?

Deirdre: I think we've mentioned them, and I know a little bit about Hadamard gates and that's it. And the fact that in quantum computing, we're all used to kind of Boolean logic, ANDs and ORs and NANDs and things like that, and there's a whole zoo of gates that are specific to quantum computing. And, yeah, tell me more about T gates.

John: Yeah, I mean, probably not worth actually describing mathematically what's going on here, but, like, hand waves. Yeah, the thing is, some gates you can imagine applying by just touching every one of the bits in your surface, all right, you do some physical transformation of each one of those qubits and you get some transformation of the corresponding logical state. For instance, if you want to just do a bit flip, it's really easy. You do single bit flips all the way across from one side of the surface to the other. That'll do it. You have to connect one edge to another.

John: But you're only going to get essentially classical computation if you just do these things where you can kind of just touch each bit and do something to each one. In order to get full quantum computation, you do need some additional resource state, and that state gets consumed in order to enact some transformation of the state that you cannot reach by just fiddling with the individual bits in your code. So you can decompose any circuit into one that does the little individual bit twiddling ones, and then some of these special ones that consume a resource state. And so if you're going to do surface code computation, you need these magic states, is what they call them, the T gates. You need the magic state, the T state, and you need to distill those from physical T states that you can prepare just by actually doing rotations of physical gates. But you get noisy ones and you need to get a high quality one. This is a circuit that you apply to a collection of like 15 physical T states, and then you're able to put those together and you get a logical T state encoded in the surface code, and then you're able to do some non-trivial, like, genuinely quantum transformation of the encoded state.

John: So you need these distilleries kind of like alongside your circuit and it’s about the same size as the circuit that you’re going to perform. So it’s kind of like at least my mental model. I don’t know if maybe somebody can correct me on the exact numbers here. But you basically need, if you’re going to do a circuit with N bits, you’re going to need a similar size, area and time of running these distilleries and then feeding those states into the computation that you’re doing. So it’s going to be a little while. I mean, I think the actual distilleries are not that much. They’re not bigger than individual logical qubits. You’re just going to run them in parallel with your logical qubit.

John: Sam’s maybe going to correct me there.

Sam: Yeah. So if you want, kind of the bigger you make it, the better quality you get out of it. And like at the scale of Shor’s algorithm or something, the footprint is about 120 logical qubits, gives you one of these factories.

John: Okay, so it’s like constant factor.

Sam: It’s not that constant factor. Yeah.
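For a sense of the numbers behind magic state distillation, here is a hedged sketch using the standard 15-to-1 figure from the literature (output error of roughly 35 times the cube of the input error), ignoring the surface code overhead of each round; the exact constants for any real machine would differ.

```python
def distill_15_to_1(p_in):
    """One round of 15-to-1 magic state distillation:
    consumes 15 noisy T states with error p_in, outputs one with error ~35 * p_in**3
    (leading-order figure from the literature; real protocols carry extra overhead)."""
    return 35 * p_in ** 3

def rounds_needed(p_physical, p_target):
    """How many distillation rounds until the T state error is below p_target."""
    p, rounds, consumed = p_physical, 0, 1
    while p > p_target:
        p = distill_15_to_1(p)
        rounds += 1
        consumed *= 15  # each output state ate 15 states from the previous level
    return rounds, consumed, p

# e.g. starting from a 10^-3 T state error and aiming for 10^-12:
print(rounds_needed(1e-3, 1e-12))  # -> 2 rounds, 225 input states, ~1.5e-21
```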

Deirdre: This is wild to me, because if you're looking at the Willow paper, the Willow work, they've got the data qubits, they've got the, you know, the parity qubits, they've got some other qubits in here. These are different physical qubits that are being used for different things. And then we're talking about actually instantiating these gates, and you have to have like a sidecar of other qubits that does it. Like, this is so funny to me, because if you think about general digital computation, like, a gate is a gate is a gate. Not even a gate, a transistor is a transistor is a transistor. You use it for, like, maybe you've got some other funny little things if you've got an extremely heterogeneous bespoke architecture or something like that. But, like, for most people who took an architecture class in school.

Deirdre: It's just like, you know, a transistor is a transistor is a transistor. And in this world it's not like that at all. And so, we mentioned earlier compilers, you're going to be able to do a little tweak and you're going to get some crazy optimization out of your quantum computation, and there's going to be a bespoke compiler for every bespoke quantum chip for decades, I'm going to bet you. We're not going to have a consistent instruction set architecture for quantum computation for a very long time, I think, because there's just so much stuff going on, there's so much almost bespoke hardware. And maybe you can tell me differently: with these different physical qubits, you could decide, like, ah, we've got 101 on this chip and we're using them in such and such a way, such that we can do this, you know, the surface code, but we can just decide to use them differently for whatever. Is that a thing?

John: That’s absolutely the case. They’re the exact same physical qubits. You just use some of them to produce these resource states, these magic states, and some of them to be logical qubits. Okay. That is all that you need. You need the resource states and you need the logical qubits and you’re off to the races. There’s not a bunch of other special types of mysterious qubits that’ll show up when we start talking about bigger algorithms or anything. This is it.

John: And as far as architecture and instruction set, you're doing physical qubits in the plane, 2D nearest neighbor connectivity, and your instruction set is the surface code. It tells you how to do the basic operations on it.

Deirdre: Okay.

John: I guess you could probably cook up some operations using different resource states and things like that that are a little bit different. I don’t know what people are looking at, but yeah, we basically know what that’s going to look like.

Deirdre: That’s cool.

Sam: Yeah, I mean, that's basically like these little things about sometimes making the distilleries a little bit better or something like that, you know, that kind of improvement exists. And I mean, also as far as kind of a compiler, you know, this is something where, if we're absolutely limited by the size of the device, and let's say you wanted to break cryptography, it would be in your best interest to just hire a couple of smart people to, like, stop automatically compiling and just by hand, you know, cram the circuit, design the circuit to be as small as possible. So I think there will also be a lot of this kind of by hand, component by component work. Like, when you're saying the qubits are these special snowflakes, probably transistors are too. It's just, yeah, it's not worth anyone's time or money to care.

Sam: Whereas if you’ve only got 105, you know.

Deirdre: Right.

Sam: You treat them all with a lot of reverence, I guess.

Deirdre: Fair enough. Just before we dive off in a different direction, can you kind of give an overview of, like, okay, say we've got these really great, high fidelity, long-lived, 10 to the negative 6 error rate logical qubits, and we've talked about T gates and Hadamard gates and distilleries and factories and things like that. So when we're actually running a quantum algorithm on a chip full of nice, really reliable logical qubits, what are we doing? Because to a general digital computer sort of person, they're like, yeah, I've got all this Boolean logic and I have adders and multiplexers, and I've got all of this stuff built upon itself in terms of Boolean logic, and I give it some instruction set commands, I tell it to add and store and load and things like that. What is happening when I'm running a quantum algorithm on a really nice quantum computer that we will totally have available to us in, like, five years? Right.

Sam: So it'll be a little bit different, because in the classical computer case, you're kind of pushing your data through the gates, right? You've got these gates as this sort of static object that kind of does the computation. Whereas in this surface code quantum computer, where you imagine you actually have these logical qubits, the qubits themselves are just kind of sitting there, and when you want to do some operation on them, it's this process that you enact on them, like with the surface code, or with the superconductors you've got this microwave pulse. So what you'd imagine is kind of this very carefully synchronized set of instructions: what pulse is going to which qubit at which time. And this is kind of entangling them and modifying their state very carefully over time, and just sort of changing that pattern on the same set of qubits is what ultimately makes these gates happen. And then kind of going one step higher, if you think about like an adder or something, you're putting something together that basically is sort of like a circuit built out of these gates.

Sam: And again, because you don't have to build the gates, you're sort of doing them dynamically, you don't need to have, you know, a fundamental unit which is an adder that you use every time. More or less, you don't need to think, oh, I've got a 32 bit architecture or something. You really can kind of build it on the fly to whatever works to make the circuit as efficient as you want. And so this is kind of something that we can also think about and work on now, where we can say, well, we've got this layout.

Sam: How would we actually lay out this pattern of operations efficiently? You can even draw sort of a 3D picture of this thing and sort of try to push it together.

Deirdre: Is that sort of like, if you're going from left to right, that's sort of like the gates you're applying over time, time from left to right? Okay, so I start at, you know, T0, this is my state. Maybe it's just literally empty. And you do some quantum gates, and now you're at T1 and it has a different state, and you apply different quantum gates, you're at T2, and so on and so on. That's the kind of 3D picture?

Sam: Yeah. By convention, time goes up instead of left to right, but exactly the same.

Deirdre: Awesome.

Sam: Yeah.

John: I just want to add one thing to that. I mean, the qubits, the quantum chip itself, that's a memory, and all of the computation is happening in the error correction and the classical signals that you're sending down to the chip.

Deirdre: That’s so cool.

John: You're changing how that system evolves in time in order to enact a transformation of the quantum state. So whenever you look at a quantum circuit, you should actually be thinking, this is what the classical computer does with changing microwave signals and changing how it's going to interpret measurement results and things like that. It's not forcing a bit through an adder or.

Deirdre: It’s so cool because once I. I think it was with Sycamore, I was like, really, like, I want to really actually grok this adversary to all the cryptography that I love. And, like, it’s. It’s such a cool mind warp of, like, if you learn how to compute the digital way, this is just such. It’s so different, but it’s also, like, beautiful and fascinating and like, everything goes into 3D. Okay, Sam, what’s your hot take for the next 10 years of quantum computing and how it applies to all the classical asymmetric cryptography we have deployed?

Sam: My hot take on the 10 years is nothing will happen to our asymmetric cryptography. But kind of, if we have kind of the progress we've had now, you know, we have Google's chip, and maybe we have two logical qubits, or what they would actually call a logical qubit, next year, maybe four the year after that. In 10 years we still don't have enough to factor anything. Right. And the other kind of worrying thing about this for the field of quantum computing is that if you're in the logical layer and you've only got a few dozen logical qubits, that's not doing anything that interesting. I mean, that's a level where classical computers can actually simulate the logical qubits.

Sam: So if you have up to, say, 40, you're not doing anything a classical computer couldn't do anyway. And with all the error correction, you're probably doing it even worse than the classical computer. And even if you're above 40, it's still sort of, is there an actual algorithm that's going to outperform the classical computer at that? You know, maybe you need 100 or more, even for things that aren't factoring, like different kinds of applications for quantum computers.
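A quick back-of-the-envelope for why roughly 40 qubits is where brute-force classical simulation gives out (simple arithmetic, not a figure from the episode): the full state vector of n qubits has 2^n complex amplitudes, so at 16 bytes per amplitude

```latex
2^{40} \times 16\ \text{bytes} \approx 1.1 \times 10^{12} \times 16\ \text{bytes} \approx 17.6\ \text{TB}
```

That is big but still within reach of a large cluster, and every additional qubit doubles it.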

Deirdre: Yeah.

Sam: So it’s possible that kind of in the next 10 years there will be nothing useful from error corrected quantum computers. And so kind of what might save things for the field of quantum computing is things that don’t actually need all that error correction. Like you found something that can maybe do something useful. Definitely not factoring. Definitely no threat to cryptography. To me, the leading candidate is chemistry applications. But you can do that without this full layer of error correction.

Deirdre: Is that the stuff that’s like, I think on your nice chart it’s like literally how do you get ammonia from the ether or something like that? What is that thing?

Sam: Oh, so yeah, this has been kind of the sales pitch for the field is, oh, well, what can we do with chemistry? Well, maybe we can come up with a better nitrogen fixation process. As though this will somehow resolve our unsustainable farming practices. That was the magic bullet we needed, was just better nitrogen.

Deirdre: I mean, it was a significant thing when we were able to do that better, I don’t know, 100 years ago or something like that.

Sam: Doing it at all was a very big shift to agriculture. But, I think there's a lot of, maybe I shouldn't be ranting about agricultural practices. Quantum computing. So this is one application, but even that is kind of well above where we are. That's an error corrected kind of algorithm, and as you can kind of see on this chart, it's not that much earlier than RSA. So I am sort of expecting this rapid transition again, this sort of step function thing, where we're not having any useful logical layer algorithms, and then all of a sudden RSA is broken. And so there's that earlier step of, you're doing some sort of chemistry without the logical error correction.

Sam: So I put that on my chart as this sort of green region up at the top. And the best estimates I could find were that you actually need much higher quality qubits than we have now. So it’ll be kind of an interesting question over the next 10 years of whether that improves at all, whether we actually can do something with the devices.

Deirdre: Right. Okay, John, what’s your hot take for the next 10 years of quantum computing?

John: I’ve been working in this field for a little bit now. I think I started in like 2011 and since about 2014, I think I’ve been consistent in saying that I think we’ll have cryptanalytically interesting quantum computers in the mid-2030s. I’m not changing that. Cool. I think that we will be surprised by the progress. Now granted, I am a quantum computing skeptic. I really don’t think that these machines are useful. I don’t buy many of the applications at all.

John: And honestly, my goal is to defeat the quantum computer. I'm not the quantum computer's friend. I'm not promoting it. I want to beat it with the better cryptography that we can deploy today. And if I can say some things that discourage people from working in the field, great, we're not going to get these machines, we're not going to have to worry about the surveillance implications. So maybe don't take what I say too seriously, but I really think that we're going to keep seeing progress like we've seen recently with this Willow chip.

John: And we're going to get to a point where it's really undeniable, except for a few fringe skeptic holdouts, that you can build these machines and that it was necessary to transition to post quantum cryptography. We saw something similar just with the adoption of quantum mechanics and understanding that, yeah, this thing really works. There's a long history of people doing loophole free Bell inequality violations, and they're just like, oh yeah, but maybe you missed something. Maybe you've got to move them farther apart before you do the Bell violation test. Like, you've got to do all this stuff, I don't really believe it.

John: These skeptics hold out for a long time. It's great. It's healthy for the scientific community to have that skepticism. But I think we'll really quickly get to a point where you have to be concerned. Like, yes, you can build these machines. They are real threats.

Deirdre: Yeah. It’s a paradigm shift at first. You fight it and then that’s the new world. And everyone agrees like, yep, okay. It’s real.

Deirdre: Awesome. Sam, John, thank you. I was so happy to get two of the handful of the world's quantum cryptanalysts on the horn at such short notice. Thank you so much, and thank you for answering my silly questions.

Security Cryptography Whatever is a side project from Deirdre Connolly, Thomas Ptacek and David Adrian. Our editor is Netty Smith. You can find the podcast online @scwpod and the hosts online at @durumcrustulum, @tqbf, and @davidcadrian. You can buy merchandise at https://merch.securitycryptographywhatever.com. If you like the pod, give us a 5 star review wherever you rate your favorite podcasts. Thank you for listening!