One of the major application areas of interest for both near-term and fault-tolerant quantum computers is the optimization of classical objective functions. In this work, we develop intuitive constructions for a large class of these algorithms based on connections to simple dynamics of quantum systems, quantum walks, and classical continuous relaxations. We focus on developing a language and tools, connected with kinetic energy on a graph, for understanding the physical mechanisms of success and failure, in order to guide algorithmic improvement. This physical language, in combination with uniqueness results related to unitarity, allows us to identify potential pitfalls arising from kinetic energy fundamentally opposing the goal of optimization. These pitfalls are connected to effects from wavefunction confinement, phase randomization, and shadow defects lurking in the objective far away from the ideal solution. As an example, we explore the surprising deficiency of many quantum methods in solving uncoupled spin problems, and show how this deficiency is both predictive of performance on some more complex systems and immediately suggestive of simple resolutions. Further examination of canonical problems like the Hamming ramp or bush of implications shows that entanglement can be strictly detrimental to performance, a consequence of the underlying mechanism of solution in approaches like QAOA. Kinetic-energy and graph-Laplacian perspectives provide new insights into common initializations and optimal solutions in QAOA, as well as new methods for more effective layerwise training. Connections to classical methods of continuous extensions, homotopy methods, and iterated rounding suggest new directions for research in quantum optimization. Throughout, we unveil, from a physical perspective, many pitfalls and mechanisms in quantum optimization, with the aim of spurring the development of novel quantum optimization algorithms and refinements.
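The kinetic-energy-on-a-graph picture above can be made concrete with a minimal numerical sketch. The following is an illustrative example, not code from the paper: it builds the n-bit hypercube graph Laplacian (the transverse-field mixer, playing the role of kinetic energy), adds an assumed toy objective (Hamming weight) as a diagonal potential, and evolves a uniform superposition as a continuous-time quantum walk.

```python
# Minimal sketch (illustrative instance, not the paper's): a continuous-time
# quantum walk on the n-bit hypercube, with the graph Laplacian as kinetic
# energy and a classical objective f as a diagonal potential.
import numpy as np
from scipy.linalg import expm

n = 3                                   # number of qubits (illustrative size)
N = 2 ** n
X = np.array([[0, 1], [1, 0]])

def op_on_qubit(op, i):
    """Embed a single-qubit operator on qubit i of an n-qubit register."""
    mats = [op if j == i else np.eye(2) for j in range(n)]
    out = mats[0]
    for m in mats[1:]:
        out = np.kron(out, m)
    return out

# Hypercube graph Laplacian: L = n*I - sum_i X_i (kinetic-energy term).
L = n * np.eye(N) - sum(op_on_qubit(X, i) for i in range(n))

# Assumed toy objective as a diagonal potential (here: Hamming weight).
f = np.array([bin(z).count("1") for z in range(N)], dtype=float)
H = L + np.diag(f)

psi0 = np.ones(N) / np.sqrt(N)          # uniform superposition (walk start)
psi_t = expm(-1j * 2.0 * H) @ psi0      # evolve for time t = 2.0
probs = np.abs(psi_t) ** 2
print("probability of optimal string 000:", probs[0])
```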
The quantum approximate optimization algorithm (QAOA) is a hybrid quantum-classical variational algorithm that offers the potential to handle combinatorial optimization problems. Introducing constraints into such combinatorial optimization problems poses a major challenge in extending QAOA to relevant larger-scale problems. In this paper, we introduce a quantum machine learning approach to learn the mixer Hamiltonian required to hard-constrain the search subspace. We show that this method can be used to encode any general form of constraints. By using an adaptable ansatz, one can directly plug the learnt unitary into the QAOA framework. This procedure provides the flexibility to trade circuit depth against the accuracy of enforcing the constraint, and thus has immediate application in the Noisy Intermediate-Scale Quantum (NISQ) era. We also develop an intuitive metric that uses the Wasserstein distance to assess the performance of general approximate optimization algorithms with and without constraints. Finally, using this metric, we evaluate the performance of the proposed algorithm.
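To illustrate the flavor of a Wasserstein-based performance metric, here is a minimal sketch. The one-dimensional formulation over objective values, the MaxCut instance, and the uniform sampler are all our assumptions, not necessarily the paper's exact construction: we measure the distance between the distribution of objective values an optimizer produces and the ideal distribution concentrated on the best value.

```python
# Minimal sketch (assumptions noted above): a Wasserstein-distance score for
# an approximate optimizer's output distribution versus the ideal point mass
# on the optimal objective value.
import numpy as np
from scipy.stats import wasserstein_distance

rng = np.random.default_rng(0)

def objective(bits):
    """Illustrative objective: MaxCut value on a 4-cycle."""
    edges = [(0, 1), (1, 2), (2, 3), (3, 0)]
    return sum(bits[i] != bits[j] for i, j in edges)

# Stand-in for samples from a quantum optimizer: uniform random bitstrings.
samples = rng.integers(0, 2, size=(1000, 4))
values = np.array([objective(b) for b in samples], dtype=float)

best = 4.0  # optimal cut value of the 4-cycle
score = wasserstein_distance(values, [best])
print("Wasserstein distance to the ideal distribution:", score)
```

A lower score means the sampled distribution concentrates closer to the optimum, which is what makes this a natural figure of merit for approximate optimizers.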
A scheme for measuring complex-temperature partition functions of Ising models is introduced. In the context of ordered qubit registers, this scheme finds a natural translation in terms of global operations and single-particle measurements on the edge of the array. Two applications of this scheme are presented. First, through appropriate Wick rotations, the measured amplitudes can be analytically continued to yield estimates for partition functions of Ising models. Bounds on the estimation error, valid with high confidence, are provided through a central-limit theorem whose validity extends beyond the present context; it holds, for example, for estimations of the Jones polynomial. Interestingly, the kind of state preparations and measurements involved in this application can in principle be made instantaneous, i.e., independent of the system size or the parameters being simulated. Second, the scheme allows one to accurately estimate some non-trivial invariants of links. A third result concerns the computational power of estimations of partition functions for real-temperature classical ferromagnetic Ising models on a square lattice. We provide conditions under which estimating such partition functions allows one to reconstruct scattering amplitudes of quantum circuits, making the problem BQP-hard. Using this mapping, we show that fidelity overlaps for ground states of quantum Hamiltonians, which serve as a witness to quantum phase transitions, can be estimated from classical Ising model partition functions. Finally, we show that the ability to accurately measure corner magnetizations on thermal states of two-dimensional Ising models with a magnetic field leads to fully polynomial randomized approximation schemes (FPRAS) for the partition function. Each of these results corresponds to a section of the text that can be read essentially independently.
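The central object here, the Ising partition function at complex inverse temperature, is easy to evaluate by brute force on small instances. The sketch below is illustrative only (a small open chain is an assumed instance, and this is not the measurement scheme itself): it evaluates Z(beta) at both real and imaginary beta, the two regimes connected by the Wick rotation mentioned above.

```python
# Minimal sketch (illustrative, not the measurement scheme itself): brute-force
# evaluation of an Ising partition function Z(beta) = sum_s exp(-beta * E(s))
# at real and complex inverse temperature beta.
import itertools
import numpy as np

# Small 1-D ferromagnetic Ising chain with open boundaries (assumed instance).
n = 6
J = 1.0

def energy(spins):
    return -J * sum(spins[i] * spins[i + 1] for i in range(n - 1))

def partition_function(beta):
    Z = 0.0 + 0.0j
    for spins in itertools.product([-1, 1], repeat=n):
        Z += np.exp(-beta * energy(spins))
    return Z

print("Z at real beta = 0.5:      ", partition_function(0.5))
print("Z at imaginary beta = 0.5j:", partition_function(0.5j))
```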
The purpose of this paper is to explore the applications of quantum computing to energy systems optimization problems and to discuss some of the challenges faced by quantum computers, along with techniques to overcome them. The basic concepts underlying quantum computation and their distinctive characteristics, in comparison to their classical counterparts, are also discussed. Along with a description of the hardware architectures of two commercially available quantum systems, an example using open-source software tools is provided as a first step for diving into the new realm of programming quantum computers for solving systems optimization problems. The trade-offs between the qualities of these two quantum architectures are also discussed. The complex nature of energy systems, due to their structure and large number of design and operational constraints, makes energy systems optimization a hard problem for most available algorithms. Problems such as facility location allocation for energy systems infrastructure development, unit commitment for electric power systems operations, and heat exchanger network synthesis, which fall under the category of energy systems optimization, are solved using both classical algorithms implemented on conventional CPU-based computers and quantum algorithms realized on quantum computing hardware. Their designs, implementations, and results are presented. Additionally, this paper describes the limitations of state-of-the-art quantum computers and their great potential to impact the field of energy systems optimization.
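Problems like facility location allocation are typically handed to quantum annealers in QUBO form. The sketch below is a toy example with an assumed cost matrix, not an instance from the paper: it encodes a tiny facility-location-style problem as a QUBO and solves it by exhaustive search, the classical baseline against which quantum hardware results would be compared.

```python
# Minimal sketch (assumed toy instance): a facility-location-style QUBO,
# minimize x^T Q x over binary x, solved by exhaustive classical search.
import itertools
import numpy as np

# Diagonal = facility opening costs; off-diagonal = illustrative
# penalties/rewards for jointly selecting pairs of facilities.
Q = np.array([
    [ 2.0, -1.5,  0.0],
    [-1.5,  3.0, -2.0],
    [ 0.0, -2.0,  1.0],
])

best_x, best_val = None, np.inf
for bits in itertools.product([0, 1], repeat=Q.shape[0]):
    x = np.array(bits)
    val = x @ Q @ x
    if val < best_val:
        best_x, best_val = x, val

print("optimal selection:", best_x, "objective:", best_val)
```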
In recent years, building deep learning models from optimization perspectives has become a promising direction for solving low-level vision problems. The main idea of most existing approaches is to straightforwardly combine numerical iterations with manually designed network architectures to generate image propagations for specific kinds of optimization models. However, these heuristic learning models often lack mechanisms to control the propagation and rely heavily on architecture engineering. To mitigate the above issues, this paper proposes a unified optimization-inspired deep image propagation framework that aggregates Generative, Discriminative, and Corrective (GDC for short) principles for a variety of low-level vision tasks. Specifically, we first formulate low-level vision tasks using a generic optimization objective and construct our fundamental propagative modules from three different viewpoints, i.e., the solution could be obtained/learned 1) in a generative manner; 2) based on a discriminative metric; and 3) with domain-knowledge correction. By designing control mechanisms to guide image propagations, we then obtain convergence guarantees of GDC for both fully- and partially-defined optimization formulations. Furthermore, we introduce two architecture augmentation strategies (i.e., normalization and automatic search) to respectively enhance the propagation stability and the task/data-adaptation ability. Extensive experiments on different low-level vision applications demonstrate the effectiveness and flexibility of GDC.
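The three-viewpoint propagation can be sketched in a few lines. In the toy loop below, every module choice is a placeholder of our own (smoothing in place of a learned generator, a plain gradient step as the discriminative update, range clipping as the domain-knowledge correction), applied to a 1-D denoising objective rather than the paper's networks or tasks.

```python
# Minimal sketch (all module choices are placeholders, not the paper's
# networks): one GDC-style propagation loop for toy denoising,
# min_x 0.5*||x - y||^2 + prior(x).
import numpy as np

rng = np.random.default_rng(0)
clean = np.clip(np.sin(np.linspace(0, 3 * np.pi, 100)), 0, 1)
y = clean + 0.2 * rng.standard_normal(100)      # noisy observation

def generative_step(x):
    # Placeholder for a learned generator: simple moving-average smoothing.
    return np.convolve(x, np.ones(5) / 5, mode="same")

def discriminative_step(x, y, lr=0.5):
    # Gradient step on the data-fidelity term 0.5*||x - y||^2.
    return x - lr * (x - y)

def corrective_step(x):
    # Domain-knowledge correction: signals live in [0, 1].
    return np.clip(x, 0.0, 1.0)

x = y.copy()
for _ in range(10):
    x = corrective_step(discriminative_step(generative_step(x), y))

print("MSE before:", np.mean((y - clean) ** 2),
      "after:", np.mean((x - clean) ** 2))
```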
Random quantum circuits have played a central role in establishing the computational advantages of near-term quantum computers over their conventional counterparts. Here, we use ensembles of low-depth random circuits with local connectivity in $D \ge 1$ spatial dimensions to generate quantum error-correcting codes. For random stabilizer codes and the erasure channel, we find strong evidence that a depth $O(\log N)$ random circuit is necessary and sufficient to converge (with high probability) to zero failure probability for any finite amount below the optimal erasure threshold, set by the channel capacity, for any $D$. Previous results on random circuits have only shown that $O(N^{1/D})$ depth suffices or that $O(\log^3 N)$ depth suffices for all-to-all connectivity ($D \to \infty$). We then study the critical behavior of the erasure threshold in the so-called moderate deviation limit, where both the failure probability and the distance to the optimal threshold converge to zero with $N$. We find that the requisite depth scales like $O(\log N)$ only for dimensions $D \ge 2$, and that random circuits require $O(\sqrt{N})$ depth for $D=1$. Finally, we introduce an expurgation algorithm that uses quantum measurements to remove logical operators that cause the code to fail by turning them into additional stabilizers or gauge operators. With such targeted measurements, we can achieve sub-logarithmic depth in $D \ge 2$ below capacity without increasing the maximum weight of the check operators. We find that for any rate beneath the capacity, high-performing codes with thousands of logical qubits are achievable with depth 4-8 expurgated random circuits in $D=2$ dimensions. These results indicate that finite-rate quantum codes are practically relevant for near-term devices and may significantly reduce the resource requirements to achieve fault tolerance for near-term applications.
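The notion of a capacity-set erasure threshold has a simple classical analog that can be checked numerically. The sketch below is that classical analog, not the paper's quantum stabilizer construction: for random linear codes over GF(2), decoding an erasure pattern succeeds exactly when the erased columns of the parity-check matrix are linearly independent, and random codes approach the classical threshold p* = 1 - R at large blocklength.

```python
# Minimal sketch (classical analog only): Monte Carlo estimate of erasure
# failure for random linear codes over GF(2).
import numpy as np

rng = np.random.default_rng(1)

def gf2_rank(M):
    """Rank of a binary matrix over GF(2) by Gaussian elimination."""
    M = M.copy() % 2
    rank = 0
    for col in range(M.shape[1]):
        pivot = np.nonzero(M[rank:, col])[0]
        if pivot.size == 0:
            continue
        r = rank + pivot[0]
        M[[rank, r]] = M[[r, rank]]          # swap pivot row into place
        rows = np.nonzero(M[:, col])[0]
        rows = rows[rows != rank]
        M[rows] ^= M[rank]                   # eliminate the column elsewhere
        rank += 1
        if rank == M.shape[0]:
            break
    return rank

n, R, p, trials = 200, 0.5, 0.4, 200   # blocklength, rate, erasure prob
m = int(n * (1 - R))                   # number of parity checks
fails = 0
for _ in range(trials):
    H = rng.integers(0, 2, size=(m, n), dtype=np.int64)
    erased = np.nonzero(rng.random(n) < p)[0]
    # Failure iff the erased columns are linearly dependent over GF(2).
    if gf2_rank(H[:, erased]) < erased.size:
        fails += 1

print(f"failure rate at p={p} (classical threshold p*={1 - R}):",
      fails / trials)
```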