With a quick pulse of light, researchers can now find and eliminate errors in real time.
Researchers have developed a method that can reveal the location of errors in quantum computers, making them up to ten times easier to fix. This will significantly accelerate progress toward large-scale quantum computers capable of solving the world’s most complex computing problems, researchers say.
Led by Jeff Thompson of Princeton University, the team demonstrated a way to identify errors in quantum computers more easily than ever before. This is a new direction for research on quantum computing hardware, which more often seeks simply to reduce the probability of an error occurring in the first place.
Innovative approach to quantum computing
A paper detailing the new approach was recently published in the journal Nature. Thompson’s collaborators include Shruti Puri of Yale University and Guido Pupillo of the University of Strasbourg.
Physicists have been inventing new qubits – the essential component of quantum computers – for nearly three decades and regularly improving these qubits to make them less fragile and less prone to errors. But some errors are inevitable, no matter how good the qubits are. The main obstacle to the future development of quantum computers is the ability to correct these errors. However, to correct an error, you must first determine whether an error occurred and where it is in the data. And usually the process of finding errors introduces more errors, which need to be found, and so on.
Quantum computers’ ability to handle these inevitable errors has remained more or less stagnant over this long period, according to Thompson, an associate professor of electrical and computer engineering. He realized, however, that it was possible to engineer which types of errors occur.
“Not all mistakes are created equal,” he said.
Advances in quantum error correction
Thompson’s lab is working on a type of quantum computer based on neutral atoms. Inside the ultra-high vacuum chamber that defines the computer, qubits are stored in individual spinning ytterbium atoms held in place by focused laser beams called optical tweezers. In this work, a team led by graduate student Shuo Ma used an array of 10 qubits to characterize the probability of errors occurring, first manipulating each qubit in isolation and then manipulating pairs of qubits together.
They found error rates close to the state of the art for a system of this type: 0.1% per operation for single qubits and 2% per operation for pairs of qubits.
However, the main result of the study is not just the low error rates but also a different way of characterizing them without destroying the qubits. By storing the qubit in a different set of energy levels within the atom than in previous work, the researchers were able to monitor the qubits during the computation and detect errors as they occur. The measurement causes qubits with errors to emit a flash of light, while qubits without errors remain dark and unaffected.
This process converts the errors into a type of error known as an erasure error. Erasure errors have been studied in the context of qubits made from photons and have long been known to be simpler to correct than errors at unknown locations, Thompson said. This work, however, is the first time the erasure-error model has been applied to matter-based qubits. It follows a theoretical proposal published last year by Thompson, Puri and Shimon Kolkowitz of the University of California, Berkeley.
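Why knowing an error’s location helps can be illustrated with a classical analogy (a minimal sketch, not the researchers’ method): a length-3 repetition code can correct only one flipped bit when the flip’s position is unknown, but it can recover from two erasures when the damaged positions are flagged.

```python
def correct_unknown(bits):
    # Majority vote over three copies: corrects at most one
    # flipped bit at an unknown position.
    return 1 if sum(bits) >= 2 else 0

def correct_erasures(bits):
    # Erasure model: None marks a position known to be damaged.
    # Any single surviving copy recovers the value, so up to
    # two erasures are tolerated.
    survivors = [b for b in bits if b is not None]
    return survivors[0] if survivors else None

# Encode logical 1 as three copies: [1, 1, 1].
print(correct_unknown([1, 0, 1]))        # one unknown flip: recovered
print(correct_unknown([1, 0, 0]))        # two unknown flips: decoding fails
print(correct_erasures([None, None, 1])) # two flagged erasures: recovered
```

The same asymmetry carries over to quantum codes: flagging which qubits failed lets the decoder tolerate roughly twice as many faults, which is the advantage erasure conversion aims to exploit.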
In the demonstration, about 56% of single-qubit errors and 33% of two-qubit errors were detected before the end of the experiment. Importantly, checking for errors does not introduce significantly more of them: the researchers showed that checking increased the error rate by less than 0.001%. According to Thompson, the fraction of errors detected can be improved with further engineering.
Important results and future implications
The researchers estimate that with this new approach, nearly 98% of all errors should be detectable using optimized protocols. This could reduce the computational costs of implementing error correction by an order of magnitude or more.
Other groups have already started to adapt this new error detection architecture. Researchers at Amazon Web Services and a separate group at Yale have independently shown how this new paradigm can also improve systems that use superconducting qubits.
“We need progress in many different areas to enable useful quantum computing at scale. One of the challenges of systems engineering is that the advances you make don’t always add up constructively. They can pull you in different directions,” Thompson said. “What’s nice about erasure conversion is that it can be used across many different qubits and computer architectures, so it can be deployed flexibly in combination with other developments.”
Other authors on the paper, “High-fidelity gates with mid-circuit erasure conversion in a metastable neutral atom qubit,” include Shuo Ma, Genyue Liu, Pai Peng, Bichen Zhang, and Alex P. Burgers at Princeton; Sven Jandura in Strasbourg; and Jahan Claes at Yale. This work was supported in part by the Army Research Office, the Office of Naval Research, DARPA, the National Science Foundation and the Sloan Foundation.