Universal quantum computers are the only general-purpose quantum computers that can be implemented today. They consist of a classical memory component that controls the quantum memory. In this paper, the space complexity of some data stream problems, such as PartialMOD and Equality, is investigated on universal quantum computers. The quantum algorithms for these problems are believed to outperform their classical counterparts. Universal quantum computers, however, need classical bits for controlling quantum gates in addition to qubits. Our analysis shows that the number of classical bits used in the quantum algorithms is equal to or even larger than the number of bits used in the corresponding classical algorithms. These results suggest that, when space complexity is considered, there is no advantage in implementing certain data stream problems on universal quantum computers instead of classical computers.
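To make the space comparison concrete, the sketch below (an illustration added here, not taken from the paper) implements a standard classical one-pass algorithm for one common formulation of PartialMOD_p: the stream is promised to contain exactly k·p ones, and the task is to output the parity of k. The function name and the example parameters are hypothetical; the point is only that a single counter maintained modulo 2p, i.e. O(log p) classical bits, suffices.

```python
def partial_mod_stream(stream, p):
    """Classical one-pass algorithm for this formulation of PartialMOD_p.

    Promise: the number of 1s in the stream is k*p for some integer k >= 0.
    Output: k mod 2 (the parity of k).
    Space: one counter modulo 2*p, i.e. O(log p) bits.
    """
    count = 0
    for bit in stream:
        if bit == 1:
            count = (count + 1) % (2 * p)  # never stores more than log2(2p) bits
    # Under the promise, count ends at 0 (k even) or p (k odd).
    return 0 if count == 0 else 1


# Example: p = 3, stream with 6 ones -> k = 2, so the parity is 0.
print(partial_mod_stream([1, 0, 1, 1, 0, 1, 1, 1], 3))  # prints 0
```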
Traditional algorithms for simulating quantum computers on classical ones require an exponentially large amount of memory, and so typically cannot simulate general quantum circuits with more than about 30 qubits on a typical PC-scale platform with only a few gigabytes of main memory. However, more memory-efficient simulations are possible, requiring only polynomial or even linear space in the size of the quantum circuit being simulated. In this paper, we describe one such technique, which was recently implemented at FSU in the form of a C++ program called SEQCSim, which we are releasing publicly. We also discuss the potential benefits of this simulator in quantum computing research and education, and outline some possible directions for further progress.
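SEQCSim's internal algorithm is not reproduced here; as a generic illustration of how an amplitude can be computed in space that grows only with the circuit size rather than with 2^n, the sketch below evaluates a path sum (a Feynman-style sum over intermediate basis states) recursively, so only one path is held in memory at a time. The gate encoding and helper names are assumptions chosen for the example; the price of the small memory footprint is a running time exponential in the number of branching gates.

```python
import numpy as np

# Gates are encoded as ('1q', U, target) with U a 2x2 matrix, or ('cx', control, target).
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)

def amplitude(gates, x, y):
    """Return <y| G_m ... G_1 |x> for computational-basis states x, y (ints).

    Memory grows only with the circuit length (recursion depth) plus O(n) bits
    per frame, never with the 2^n-dimensional state vector.
    """
    def amp(k, state):
        if k == 0:
            return (1 + 0j) if state == x else 0j
        kind = gates[k - 1][0]
        if kind == '1q':
            _, U, t = gates[k - 1]
            out_bit = (state >> t) & 1
            total = 0j
            for in_bit in (0, 1):            # sum over the two predecessor states
                prev = (state & ~(1 << t)) | (in_bit << t)
                total += U[out_bit, in_bit] * amp(k - 1, prev)
            return total
        else:                                 # CNOT is a permutation: one predecessor
            _, c, t = gates[k - 1]
            prev = state ^ (1 << t) if (state >> c) & 1 else state
            return amp(k - 1, prev)
    return amp(len(gates), y)

# Bell-state circuit on 2 qubits: H on qubit 0, then CNOT(0 -> 1).
circuit = [('1q', H, 0), ('cx', 0, 1)]
print(amplitude(circuit, 0b00, 0b00))  # ~0.7071
print(amplitude(circuit, 0b00, 0b11))  # ~0.7071
print(amplitude(circuit, 0b00, 0b01))  # ~0
```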
The past few years have witnessed the rapid and concrete spread of quantum technologies for practical computation and simulation. In particular, quantum computing platforms based on either trapped ions or superconducting qubits have become available for simulations and benchmarking, with up to a few tens of qubits that can be reliably initialized, controlled, and measured. The present review aims to give a comprehensive outlook on the state-of-the-art capabilities offered by these near-term noisy devices as universal quantum simulators, i.e. programmable quantum computers potentially able to calculate the time evolution of many physical models. First, we give a pedagogical overview of the basic theoretical background pertaining to digital quantum simulations, with a focus on the hardware-dependent mapping of spin-type Hamiltonians onto the corresponding quantum circuit model as a key initial step towards simulating more complex models. Then, we review the main experimental achievements obtained in the last decade regarding the digital quantum simulation of such spin models, mostly employing the two leading quantum architectures. We compare their performances and outline future challenges, also in view of prospective hybrid technologies, towards the ultimate goal of reaching the long-sought quantum advantage for the simulation of complex many-body models in the physical sciences.
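As a minimal, self-contained illustration of the mapping step discussed above (the model, couplings, evolution time, and step count are assumptions chosen for the example, not taken from the review), the sketch below Trotterizes a two-site transverse-field Ising Hamiltonian and compares the resulting gate-level evolution with the exact one.

```python
import numpy as np
from scipy.linalg import expm

# Model assumed for illustration: H = J Z⊗Z + h (X⊗I + I⊗X) on two spins.
I2 = np.eye(2)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)

J, h, t, n_steps = 1.0, 0.5, 1.0, 50

H_zz = J * np.kron(Z, Z)
H_x = h * (np.kron(X, I2) + np.kron(I2, X))
H = H_zz + H_x

# Exact evolution operator for time t.
U_exact = expm(-1j * t * H)

# First-order Trotter step: exp(-i dt H_zz) exp(-i dt H_x), repeated n_steps times.
# Each factor corresponds to native gates: the ZZ term becomes a ZZ rotation
# (e.g. two CNOTs around an Rz), the X terms become single-qubit Rx rotations.
dt = t / n_steps
step = expm(-1j * dt * H_zz) @ expm(-1j * dt * H_x)
U_trotter = np.linalg.matrix_power(step, n_steps)

print("operator error:", np.linalg.norm(U_exact - U_trotter, 2))  # shrinks as n_steps grows
```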
$\newcommand{\eps}{\varepsilon}$ In learning theory, the VC dimension $d$ of a concept class $C$ is the most common way to measure its richness. In the PAC model, $$ \Theta\Big(\frac{d}{\eps} + \frac{\log(1/\delta)}{\eps}\Big) $$ examples are necessary and sufficient for a learner to output, with probability $1-\delta$, a hypothesis $h$ that is $\eps$-close to the target concept $c$. In the related agnostic model, where the samples need not come from a $c\in C$, we know that $$ \Theta\Big(\frac{d}{\eps^2} + \frac{\log(1/\delta)}{\eps^2}\Big) $$ examples are necessary and sufficient to output a hypothesis $h\in C$ whose error is at most $\eps$ worse than that of the best concept in $C$. Here we analyze quantum sample complexity, where each example is a coherent quantum state. This model was introduced by Bshouty and Jackson, who showed that quantum examples are more powerful than classical examples in some fixed-distribution settings. However, Atici and Servedio, improved by Zhang, showed that in the PAC setting quantum examples cannot be much more powerful: the required number of quantum examples is $$ \Omega\Big(\frac{d^{1-\eta}}{\eps} + d + \frac{\log(1/\delta)}{\eps}\Big) \mbox{ for all } \eta > 0. $$ Our main result is that quantum and classical sample complexity are in fact equal up to constant factors in both the PAC and agnostic models. We give two approaches. The first is a fairly simple information-theoretic argument that yields the above two classical bounds and yields the same bounds for quantum sample complexity up to a $\log(d/\eps)$ factor. We then give a second approach that avoids the log-factor loss, based on analyzing the behavior of the Pretty Good Measurement on the quantum state identification problems that correspond to learning. This shows that classical and quantum sample complexity are equal up to constant factors.
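For a rough sense of how the two bounds above differ, the toy calculation below (example values only; constants and lower-order terms are omitted) evaluates their leading terms and highlights the $1/\eps$ versus $1/\eps^2$ dependence.

```python
# Illustrative only: order-of-magnitude comparison of the PAC and agnostic
# sample-complexity bounds quoted above, with constants dropped; the values
# of d, eps, delta are examples chosen here, not figures from the paper.
from math import log

def pac_bound(d, eps, delta):
    return d / eps + log(1 / delta) / eps

def agnostic_bound(d, eps, delta):
    return d / eps**2 + log(1 / delta) / eps**2

d, eps, delta = 100, 0.05, 0.01
print(pac_bound(d, eps, delta))       # ~2.1e3 examples (up to constants)
print(agnostic_bound(d, eps, delta))  # ~4.2e4: the 1/eps^2 dependence dominates
```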
A universal quantum simulator would enable efficient simulation of quantum dynamics by implementing quantum-simulation algorithms on a quantum computer. Specifically, the quantum simulator would efficiently generate qubit-string states that closely approximate physical states obtained from a broad class of dynamical evolutions. I provide an overview of theoretical research into universal quantum simulators and the strategies for minimizing computational space and time costs. Applications to many-body quantum simulation and to solving linear equations are discussed.
Symmetry is a unifying concept in physics. In quantum information and beyond, it is known that quantum states possessing symmetry are not useful for certain information-processing tasks. For example, states that commute with a Hamiltonian realizing a time evolution are not useful for timekeeping during that evolution, and bipartite states that are highly extendible are not strongly entangled and thus not useful for basic tasks like teleportation. Motivated by this perspective, this paper details several quantum algorithms that test the symmetry of quantum states and channels. For the case of testing Bose symmetry of a state, we show that there is a simple and efficient quantum algorithm, while the tests for other kinds of symmetry rely on the aid of a quantum prover. We prove that the acceptance probability of each algorithm is equal to the maximum symmetric fidelity of the state being tested, thus giving a firm operational meaning to these latter resource quantifiers. Special cases of the algorithms test for incoherence or separability of quantum states. We evaluate the performance of these algorithms by using the variational approach to quantum algorithms, replacing the quantum prover with a variational circuit. We also show that the maximum symmetric fidelities can be calculated by semi-definite programs, which is useful for benchmarking the performance of the quantum algorithms for sufficiently small examples. Finally, we establish various generalizations of the resource theory of asymmetry, with the upshot being that the acceptance probabilities of the algorithms are resource monotones and thus well motivated from the resource-theoretic perspective.
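As a small numerical illustration of the Bose-symmetry case (a two-qubit example built from the standard symmetric-subspace projector; this is a sketch of the underlying relation, not the paper's circuit), the snippet below checks that projecting a pure state onto the symmetric subspace yields an acceptance probability equal to its maximum fidelity with a Bose-symmetric state.

```python
import numpy as np

# For two qubits, the symmetric (Bose) subspace projector is (I + SWAP)/2.
SWAP = np.array([[1, 0, 0, 0],
                 [0, 0, 1, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1]], dtype=complex)
Pi_sym = (np.eye(4) + SWAP) / 2

rng = np.random.default_rng(0)
psi = rng.normal(size=4) + 1j * rng.normal(size=4)
psi /= np.linalg.norm(psi)               # random pure two-qubit state

# Acceptance probability of the symmetric-subspace projection.
p_accept = np.real(psi.conj() @ Pi_sym @ psi)

# Best Bose-symmetric approximation: the normalized projected vector.
phi = Pi_sym @ psi
max_sym_fidelity = np.abs(psi.conj() @ phi)**2 / np.real(phi.conj() @ phi)

print(p_accept, max_sym_fidelity)        # the two numbers agree
```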