Many years from today, scientists will be able to use fault-tolerant quantum computers for large-scale computations with applications across science and industry. These quantum computers will be much bigger than today's, consisting of millions of coherent quantum bits, or qubits. But there's a catch: these basic building blocks must be good enough, or the systems will be overrun with errors.
Currently, the error rates of the qubits on our 3rd generation Sycamore processor are typically between 1 in 10,000 and 1 in 100. Through our work and that of others, we understand that developing large-scale quantum computers will require far lower error rates. We will need rates in the range of 1 in 10⁹ to 1 in 10⁶ to run quantum circuits that can solve industrially relevant problems.
So how do we get there, knowing that squeezing three to six orders of magnitude of better performance from our current physical qubits is unlikely? Our team has created a roadmap that has directed our research for the last several years, improving the performance of our quantum computers in gradual steps toward a fault-tolerant quantum computer.
Roadmap for building a useful error-corrected quantum computer with key milestones. We are currently building one logical qubit that we will scale in the future.
Today, in “Suppressing Quantum Errors by Scaling a Surface Code Logical Qubit”, published in Nature, we are announcing that we have reached the second milestone on our roadmap. Our experimental results demonstrate a prototype of the basic unit of an error-corrected quantum computer known as a logical qubit, with performance nearing the regime that enables scalable fault-tolerant quantum computing.
From physical qubits to logical qubits
Quantum error correction (QEC) represents a significant shift from today’s quantum computing, where each physical qubit on the processor acts as a unit of computation. It provides the recipe to reach low errors by trading many good qubits for an excellent one: information is encoded across several physical qubits to construct a single logical qubit that is more resilient and capable of running large-scale quantum algorithms. Under the right conditions, the more physical qubits used to build a logical qubit, the better that logical qubit becomes.
However, this will not work if the added errors from each additional physical qubit outweigh the benefits of QEC. Until now, the high physical error rates have always won out.
To that end, we use a particular error-correcting code called a surface code and show for the first time that increasing the size of the code decreases the error rate of the logical qubit. A first-ever for any quantum computing platform, this was achieved by painstakingly mitigating many error sources as we scaled from 17 to 49 physical qubits. This work is evidence that with enough care, we can produce the logical qubits necessary for a large-scale error-corrected quantum computer.
Quantum error correction with surface codes
How does an error-correcting code protect information? Take a simple example from classical communication: Bob wants to send Alice a single bit that reads “1” across a noisy communication channel. Recognizing that the message is lost if the bit flips to “0”, Bob instead sends three bits: “111”. If one erroneously flips, Alice could take a majority vote (a simple error-correcting code) of all the received bits and still understand the intended message. Repeating the information more than three times — increasing the “size” of the code — would enable the code to tolerate more individual errors.
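Alice's majority vote can be sketched in a few lines. This is a toy sketch of the classical repetition code described above; the function names are illustrative:

```python
from collections import Counter

def encode(bit, n=3):
    """Repetition code: copy the bit n times before sending."""
    return [bit] * n

def decode(received):
    """Majority vote over the received bits recovers the message
    as long as fewer than half of them flipped in transit."""
    return Counter(received).most_common(1)[0][0]

# Bob sends "1" as "111"; the noisy channel flips one bit.
sent = encode(1)          # [1, 1, 1]
received = [1, 0, 1]      # one bit flipped
print(decode(received))   # 1 — the single flip is outvoted
```

Increasing the code "size" with `encode(bit, n=5)` lets the vote tolerate two flips instead of one, mirroring how a larger code tolerates more individual errors.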
Many physical qubits on a quantum processor acting as one logical qubit in an error-correcting code called a surface code.
A surface code takes this principle and imagines a practical quantum implementation. It has to satisfy two additional constraints. First, the surface code must be able to correct not just bit flips, taking a qubit from |0⟩ to |1⟩, but also phase flips. This error is unique to quantum states and transforms a qubit in a superposition state, for example from “|0⟩ + |1⟩” to “|0⟩ – |1⟩”. Second, checking the qubits’ states would destroy their superpositions, so one needs a way of detecting errors without measuring the states directly.
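The phase flip described above has a simple matrix picture: it is the Pauli-Z operation, which leaves |0⟩ alone and multiplies |1⟩ by −1. A minimal numpy sketch (illustrative only, not part of the experiment):

```python
import numpy as np

# Qubit states as 2-vectors in the {|0>, |1>} basis.
plus = np.array([1, 1]) / np.sqrt(2)    # (|0> + |1>)/sqrt(2)
minus = np.array([1, -1]) / np.sqrt(2)  # (|0> - |1>)/sqrt(2)

# A phase flip is the Pauli-Z matrix: |0> is unchanged,
# |1> picks up a minus sign.
Z = np.array([[1, 0], [0, -1]])

# Applying Z turns the superposition |0> + |1> into |0> - |1>.
print(np.allclose(Z @ plus, minus))  # True
```

Note that both states have the same measurement statistics in the {|0⟩, |1⟩} basis, which is why a phase flip is invisible to a naive direct measurement and needs its own stabilizer checks.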
To address these constraints, we arrange two types of qubits on a checkerboard. “Data” qubits on the vertices make up the logical qubit, while “measure” qubits at the center of each square are used for so-called “stabilizer measurements.” These measurements tell us whether the qubits are all the same, as desired, or different, signaling that an error occurred, without actually revealing the value of the individual data qubits.
We tile two types of stabilizer measurements in a checkerboard pattern to protect the logical data from bit- and phase-flips. If some of the stabilizer measurements register an error, then correlations in the stabilizer measurements are used to identify which error(s) occurred and where.
Surface-code QEC. Data qubits (yellow) are at the vertices of a checkerboard. Measure qubits at the center of each square are used for stabilizer measurements (blue squares). Dark blue squares check for bit-flip errors, while light blue squares check for phase-flip errors. Left: A phase-flip error. The two nearest light blue stabilizer measurements register the error (light red). Right: A bit-flip error. The two nearest dark blue stabilizer measurements register the error (dark red).
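The key property of a stabilizer measurement, learning whether neighboring data qubits agree without learning their individual values, can be illustrated classically with parity checks. This is a one-dimensional toy sketch, not the actual surface-code circuit:

```python
def parity_checks(data, pairs):
    """Each check reports the parity (XOR) of two neighboring data
    bits: 0 means they agree, 1 flags an error between them. The
    individual bit values are never revealed by the check outcomes."""
    return [data[i] ^ data[j] for i, j in pairs]

# Five data bits with a check between each pair of neighbors.
pairs = [(0, 1), (1, 2), (2, 3), (3, 4)]
print(parity_checks([0, 0, 0, 0, 0], pairs))  # [0, 0, 0, 0] — no error
# A flip on bit 2 triggers exactly the two checks that touch it,
# localizing the error without reading any data bit directly.
print(parity_checks([0, 0, 1, 0, 0], pairs))  # [0, 1, 1, 0]
```

The second output mirrors the figure above: the two checks nearest the flipped qubit register the error, and that correlation pattern is what identifies which error occurred and where.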
Just as Bob’s message to Alice in the example above became more robust against errors with increasing code size, a larger surface code better protects the logical information it contains. The surface code can withstand a number of bit- and phase-flip errors each less than half the distance, where the distance is the number of data qubits that span the surface code in either dimension.
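In other words, a distance-d code corrects up to floor((d − 1)/2) errors of each type, just as the three-bit repetition code corrects one flip:

```python
def max_correctable_errors(distance):
    """A distance-d code corrects any combination of fewer than d/2
    errors of a given type, i.e. floor((d - 1) / 2) of them."""
    return (distance - 1) // 2

for d in (3, 5, 17, 25):
    print(d, max_correctable_errors(d))
# distance 3 corrects 1 error of each type, distance 5 corrects 2,
# distance 17 corrects 8, and distance 25 corrects 12.
```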
But here’s the problem: every individual physical qubit is prone to errors, so the more qubits in a code, the more opportunity for errors. We want the higher protection offered by QEC to outweigh the increased opportunities for errors as we increase the number of qubits. For this to happen, the physical qubits must have errors below the so-called “fault-tolerant threshold.” For the surface code, this threshold is quite low — so low that reaching it hasn’t been experimentally feasible until recently. We are now on the precipice of reaching this coveted regime.
Making and controlling high-quality physical qubits
Entering the regime where QEC improves with scale required improving every aspect of our quantum computers, from nanofabrication of the physical qubits to the optimized control of the full quantum system. These experiments ran on a state-of-the-art 3rd generation Sycamore processor architecture optimized for QEC using the surface code with improvements across the board:
- Increased qubit relaxation and dephasing lifetimes through an improved fabrication process and environmental noise reduction near the quantum processor.
- Lowered cross-talk between all physical qubits during parallel operation by optimizing quantum processor circuit design and nanofabrication.
- Reduced drift and improved qubit control fidelity through upgraded custom electronics.
- Implemented faster and higher-fidelity readout and reset operations compared with previous generations of the Sycamore processor.
- Reduced calibration errors by extensively modeling the full quantum system and employing better system-optimization algorithms.
- Developed context-aware and fully parallel calibrations to minimize drift and optimize control parameters for QEC circuits.
- Enhanced dynamical decoupling protocols to protect physical qubits from noise and cross-talk during idling operations.
Running surface code circuits
With these upgrades in place, we ran experiments to compare the ratio (𝚲₃,₅) between the logical error rate of a distance-3 surface code (ε₃) with 17 qubits and that of a distance-5 surface code (ε₅) with 49 qubits: 𝚲₃,₅ = ε₃ / ε₅.
Comparison of logical fidelity (defined as 1 − ε) between distance-3 (d = 3) and distance-5 (d = 5) surface codes. The distance-5 code contains four possible distance-3 arrangements, with one example shown in the red outline (left). As improvements were made, the d = 5 fidelity increased faster than that of the d = 3 code, eventually overtaking it, as shown in the top-right data points (right), whose average lies slightly to the left of the ε₃ = ε₅ line.
The results of these experiments are shown above on the right. Continued improvements over several months allowed us to reduce the logical errors of both grids, leading to the distance-5 grid (ε₅ = 2.914%) outperforming the distance-3 grids (ε₃ = 3.028%) by 4% (𝚲₃,₅ = 1.04) with 5σ confidence. While this might seem like a small improvement, it’s important to emphasize that the result represents a first for the field since Peter Shor’s 1995 QEC proposal. A larger code outperforming a smaller one is a key signature of QEC, and all quantum computing architectures will need to pass this hurdle to realize a path to the low errors that are necessary for quantum applications.
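The reported ratio follows directly from the two measured error rates. A quick check of the numbers quoted above, with the values copied from the text:

```python
eps_3 = 0.03028  # distance-3 logical error per cycle (3.028%)
eps_5 = 0.02914  # distance-5 logical error per cycle (2.914%)

lam_35 = eps_3 / eps_5
print(f"Lambda_3,5 = {lam_35:.2f}")  # Lambda_3,5 = 1.04

# Any value above 1 means the larger code has the lower error rate,
# which is the signature of error correction winning out.
assert lam_35 > 1
```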
The path forward
These results indicate that we are entering a new era of practical QEC. The Google Quantum AI team has spent the last few years thinking about how we define success in this new era, and how we measure progress along the way.
The ultimate goal is to demonstrate a pathway to achieving the low errors needed for using quantum computers in meaningful applications. To this end, our target remains achieving logical error rates of 1 in 10⁶ or lower per cycle of QEC. In the figure below on the left, we outline the path that we anticipate to reach this target. As we continue improving our physical qubits (and hence the performance of our logical qubits), we expect to gradually increase 𝚲 from close to 1 in this work to larger numbers. The figure below shows that a value of 𝚲 = 4 and a code distance of 17 (577 physical qubits with good enough quality) will yield a logical error rate below our target of 1 in 10⁶.
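The projection above rests on a simple geometric suppression: each time the code distance grows by 2, the logical error per cycle is divided by 𝚲. A rough sketch of that scaling, where the starting error rate is a hypothetical value for hardware good enough to reach 𝚲 = 4, not a measured number:

```python
def projected_error(eps_3, lam, distance):
    """Each increase of the code distance by 2 divides the logical
    error per cycle by lam: eps_d = eps_3 / lam**((d - 3) / 2).
    eps_3 is the assumed distance-3 starting point."""
    return eps_3 / lam ** ((distance - 3) // 2)

# Illustrative assumption: eps_3 = 1% on hardware achieving Lambda = 4.
for d in (3, 5, 9, 13, 17):
    print(d, projected_error(eps_3=1e-2, lam=4, distance=d))
# With these assumed numbers, the distance-17 value (1e-2 / 4**7,
# about 6e-7) lands below the 1-in-10^6 target.
```

The exact curve in the figure depends on the physical error rates assumed at each 𝚲, so this sketch only illustrates the exponential suppression, not the paper's quantitative projection.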
While this result is still a few years out, we have an experimental technique to probe error rates this low with today’s hardware, albeit in limited circumstances. While two-dimensional surface codes allow us to correct both bit- and phase-flip errors, we can also construct one-dimensional repetition codes that can only correct one type of error, with relaxed requirements. On the right below, we show that a distance-25 repetition code can reach error rates per cycle close to 1 in 10⁶. At such low errors, we see new kinds of error mechanisms that are not yet observable with our surface codes. By controlling for these error mechanisms, we can improve repetition codes to error rates near 1 in 10⁷.
Left: Expected progression as we improve performance (quantified by 𝚲) and scale (quantified by code distance) for surface codes. Right: Experimentally measured logical error rates per cycle versus the distance of one-dimensional repetition codes and two-dimensional surface codes.
Reaching this milestone reflects three years of focused work by the entire Google Quantum AI team following our demonstration of a quantum computer outperforming a classical computer. In our march toward building fault-tolerant quantum computers, we will continue to use the target error rates in the figure above to measure our progress. With further improvements toward our next milestone, we anticipate entering the fault-tolerant regime, where we can exponentially suppress logical errors and unlock the first useful error-corrected quantum applications. In the meantime, we continue to explore various ways of solving problems using quantum computers in topics ranging from condensed matter physics to chemistry, machine learning, and materials science.
Posted by Hartmut Neven, VP of Engineering, and Julian Kelly, Director of Quantum Hardware, on behalf of the Google Quantum AI Team