What limits the simulation of quantum computers?

Yiqing Zhou, E. Miles Stoudenmire, and Xavier Waintal (2020). arXiv:2002.07730. Comment: 12 pages, 12 figures.

Abstract

It is well established that simulating a perfect quantum computer with a classical computer requires computing resources that scale exponentially with the number of qubits $N$ or the depth $D$ of the circuit. Conversely, a perfect quantum computer could potentially provide an exponential speed-up with respect to classical hardware. Real quantum computers, however, are not perfect: they are characterized by a small error rate $\epsilon$ per operation, so that the fidelity of the many-qubit quantum state decays exponentially as $F \sim (1-\epsilon)^{ND}$. Here, we discuss a set of classical algorithms based on matrix product states (MPS) which closely mimic the behavior of actual quantum computers. These algorithms require resources that scale linearly in $N$ and $D$ at the cost of making a small error $\epsilon$ per two-qubit gate. We illustrate our algorithms with simulations of random circuits for qubits connected in both one- and two-dimensional lattices. We find that $\epsilon$ can be decreased at a polynomial cost in computing power down to a minimum error $\epsilon_\infty$. Getting below $\epsilon_\infty$ requires computing resources that increase exponentially with $\epsilon_\infty/\epsilon$. For a two-dimensional array of $N=54$ qubits and a circuit with Control-Z gates of depth $D=20$, a fidelity $F \approx 0.002$ can be reached on a single-core computer in a few hours. It is remarkable that such a high fidelity can be obtained using a variational ansatz that only spans a tiny fraction $(\sim 10^{-8})$ of the full Hilbert space. Our results show how the actual computing power harvested by noisy quantum computers is controlled by the error rate $\epsilon$.
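
The core primitive behind the MPS algorithms the abstract describes is applying a two-qubit gate to neighboring MPS tensors and truncating the result back to a fixed bond dimension via an SVD; the discarded singular-value weight plays the role of the small per-gate error $\epsilon$. The following is a minimal NumPy sketch of that step, not the authors' code: the function name, tensor shapes, and the `chi` parameter are illustrative assumptions.

```python
# Minimal sketch (assumed interface, not the paper's implementation) of the
# MPS two-qubit-gate step: contract two neighboring tensors, apply the gate,
# SVD, and truncate to bond dimension chi, recording the discarded weight.
import numpy as np

def apply_two_qubit_gate(A, B, gate, chi):
    """Apply a 4x4 gate to MPS tensors A (l,2,m) and B (m,2,r).

    Returns the truncated tensors and the per-gate truncation error.
    """
    l = A.shape[0]
    r = B.shape[2]
    # Contract A and B into a two-site tensor theta of shape (l, 2, 2, r).
    theta = np.tensordot(A, B, axes=(2, 0))
    # Apply the gate, reshaped to act on the two physical indices.
    theta = np.tensordot(gate.reshape(2, 2, 2, 2), theta,
                         axes=([2, 3], [1, 2]))
    theta = theta.transpose(2, 0, 1, 3).reshape(l * 2, 2 * r)
    # SVD and truncation: keep at most chi singular values.
    U, S, Vh = np.linalg.svd(theta, full_matrices=False)
    keep = min(chi, S.size)
    discarded = np.sum(S[keep:] ** 2) / np.sum(S ** 2)  # per-gate error
    S = S[:keep] / np.linalg.norm(S[:keep])             # renormalize
    A_new = U[:, :keep].reshape(l, 2, keep)
    B_new = (np.diag(S) @ Vh[:keep, :]).reshape(keep, 2, r)
    return A_new, B_new, discarded

# Usage sketch: a Control-Z gate on a bond of dimension 8, truncated back
# to chi = 8, which generically yields a small nonzero error.
CZ = np.diag([1.0, 1.0, 1.0, -1.0])
A = np.random.randn(8, 2, 8)
B = np.random.randn(8, 2, 8)
A, B, eps = apply_two_qubit_gate(A, B, CZ, chi=8)
```

As a rough consistency check, inverting the abstract's scaling $F \sim (1-\epsilon)^{ND}$ for the quoted $F \approx 0.002$ at $N=54$, $D=20$ gives a per-gate error of about $\epsilon \approx -\ln(0.002)/(54 \cdot 20) \approx 6 \times 10^{-3}$.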
