Researchers developed two new methods to assess and remove error in how scientists measure quantum systems. By reducing quantum “noise” – uncertainty inherent to quantum processes – these methods improve both accuracy and precision. The first method measures the same quantum process multiple times, with each measurement carrying a different noise pattern. The scientists then superimpose those measurements to extrapolate a noise-free estimate. In the second method, scientists reduce sources of error one by one on individual quantum bits.
Quantum information science is a rapidly growing field of research. It uses the unique properties of physics at extremely small scales to develop new technologies. But error and noise in quantum computing and sensing limit how far researchers can push these technologies. Applying the new measurement techniques could reduce error and noise without requiring additional hardware or computational resources.
Scientists at the Center for Nanoscale Materials, a U.S. Department of Energy (DOE) Office of Science user facility at Argonne National Laboratory, have presented two new techniques for improving the measurement of quantum observables, the measurable properties of quantum systems. Noise inevitably appears in quantum measurements due to decoherence, the loss of information caused by interactions between a quantum system and its surrounding environment. The first technique recovers information by repeating a single quantum process with varied but controlled noise characteristics. By superimposing the results of each trial, scientists obtain an estimate of the noise-free value of the observable, much as superimposing many flawed copies of the same photograph yields an estimate of the true image. The results could help extend the useful working time of quantum computers before decoherence sets in. The second approach reduces the error on each individual qubit, or quantum bit, in order to reduce the overall error on a measured observable. It addresses the error sources on each qubit separately and uses that information to approximate the result that would be obtained if all qubits in the system were corrected simultaneously. This method relaxes the required qubit quality by up to two orders of magnitude, and its flexibility gives it great potential in quantum sensing, quantum measurement, and other quantum applications.
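The first technique, repeating a process at varied but controlled noise levels and combining the results, can be illustrated with a toy numerical sketch. This is not the researchers' actual method: the exponential noise model and the two-point (linear Richardson) extrapolation below are illustrative assumptions, chosen only to show how measurements at deliberately amplified noise can be combined into an estimate closer to the noise-free value.

```python
import math

def noisy_expectation(noise_scale, true_value=1.0, decay_rate=0.05):
    # Toy noise model (an assumption, not from the paper): the measured
    # expectation value decays exponentially as the noise level grows.
    return true_value * math.exp(-decay_rate * noise_scale)

def richardson_extrapolate(e1, e2):
    # Two-point linear extrapolation to zero noise, given measurements
    # of the same process at noise scales 1 and 2.
    return 2 * e1 - e2

e1 = noisy_expectation(1.0)   # measurement at the baseline noise level
e2 = noisy_expectation(2.0)   # same process with the noise deliberately doubled
estimate = richardson_extrapolate(e1, e2)
# The extrapolated estimate lies closer to the true value (1.0)
# than either raw measurement does.
print(round(e1, 4), round(estimate, 4))  # 0.9512 0.9976
```

In this sketch the raw measurement is off by about 5%, while the combined estimate is off by less than 0.3%, showing how two noisy trials with known relative noise levels can cancel most of the error between them.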
This work was performed at the Center for Nanoscale Materials, a U.S. Department of Energy Office of Science user facility, and supported by the U.S. Department of Energy, Office of Science. Computing resources were provided by the high-performance computing cluster Bebop operated by the Laboratory Computing Resource Center at Argonne National Laboratory and by the quantum computer Agave operated by Rigetti Computing.