Quantum Computing: From Bragg Reflections to Decoherence Estimates



Q8.5.1

QUANTUM COMPUTING: FROM BRAGG REFLECTIONS TO DECOHERENCE ESTIMATES

Peter Pfeifer and Chen Hou
Department of Physics, University of Missouri, Columbia, MO 65211, U.S.A.

ABSTRACT

We give an exposition of the principles of quantum computing (logic gates, exponential parallelism from polynomial hardware, fast quantum algorithms, quantum error correction, hardware requirements, and experimental milestones). A compact description of the quantum Fourier transform to find the period of a function, the key step in Shor's factoring algorithm, illustrates how parallel state evolution along many classical computational paths produces fast algorithms by constructive interference, similar to Bragg reflections in x-ray crystallography. On the hardware side, we present a new method to estimate critical time scales for the operation of a quantum computer. We derive a universal upper bound on the probability that a computation fails due to decoherence (entanglement of the computer with the environment), as a function of time. The bound is parameter-free, requiring only the interaction between the computer and the environment, and the time-evolving state in the absence of any interaction. For a simple model we find that the bound performs well and that decoherence is small when the energy of the computer state is large compared to the interaction energy. This supports a recent estimate of minimum energy requirements for quantum computation.
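The Bragg-reflection analogy in the abstract can be made concrete numerically. The following sketch (not the authors' code; the register size Q, period r, and offset x0 are illustrative choices) shows the amplitude pattern at the heart of Shor's period finding: after evaluating a periodic function in superposition and measuring the function register, the input register carries equal amplitudes on x0, x0 + r, x0 + 2r, ...; the quantum Fourier transform of this comb interferes constructively only at multiples of Q/r, just as a crystal lattice scatters x-rays only into discrete Bragg peaks.

```python
import numpy as np

Q = 64   # size of the input register (here 2**6 amplitudes); illustrative
r = 8    # hypothetical period of the function f(x) = a**x mod N
x0 = 3   # arbitrary offset fixed by the measurement of the f register

# Equal amplitudes on the arithmetic progression x0, x0 + r, x0 + 2r, ...
state = np.zeros(Q, dtype=complex)
state[x0::r] = 1.0
state /= np.linalg.norm(state)

# On Q amplitudes the quantum Fourier transform is, up to normalization,
# the ordinary discrete Fourier transform.
amplitudes = np.fft.fft(state) / np.sqrt(Q)
probs = np.abs(amplitudes) ** 2

# Constructive interference survives only at multiples of Q/r = 8;
# all other outcomes cancel destructively.
peaks = np.flatnonzero(probs > 1e-9)
print(peaks)   # -> [ 0  8 16 24 32 40 48 56]
```

Measuring the transformed register thus yields a multiple of Q/r, from which the period r is recovered classically by continued fractions; note that the offset x0 affects only the (unobservable) phases, not the peak positions.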

INTRODUCTION

A quantum computer puts to use the fact that a quantum particle, such as an electron passing through a double-slit apparatus or an atom or molecule traversing a matter-wave interferometer [1], can be in two different locations at the same time. By equating different locations—as a paradigm, we take an electron in the lowest orbit or in an excited orbit of an atom—to binary digits 0 and 1, one may interpret the time-evolving state of the particles as executing several computations at the same time. One set of locations at a given time describes the result of one computation. Thus one atom can do two computations at once; two atoms can do four; three atoms can do eight.

The challenge is to coerce the atoms to follow trajectories that amount to meaningful computations and to read out a definite result from the multitude of computations occurring in parallel. The control of trajectories is the hardware part of the challenge; the design of useful trajectories—algorithms that are superior to classical algorithms—is the software part of the challenge.

This paper is a comprehensive survey of both aspects. It illustrates in terms of explicit case studies the key components of a quantum computer, namely, the design of a computational trajectory (Shor's algorithm), the trajectory's superior performance (exponential parallelism), and the reliable operation of the computer vis-à-vis decoherence, with emphasis on analogies to familiar physical phenomena. The case studies apply regardless of whether the computer is implemented in terms of nuclear spins, ion traps, cavity quantum electrodynamics, sup