The concept of quantum computing has inspired a whole new generation of scientists, including physicists, engineers, and computer scientists, to fundamentally change the landscape of information technology. With experimental demonstrations stretching back more than two decades, the quantum computing community has achieved a major milestone over the past few years: the ability to build systems that stretch the limits of what can be classically simulated and that enable cloud-based research for a wide range of scientists, thus increasing the pool of talent exploring early quantum systems. While such noisy near-term quantum computing systems fall far short of the requirements for fault-tolerant systems, they provide unique testbeds for exploring the opportunities for quantum applications. Here we highlight the key facets of these systems, including quantum software, cloud access, benchmarking of quantum systems, error correction and mitigation, the complexity of quantum circuits, and how early quantum applications can run on near-term quantum computers.
The purpose of this paper is to explore the applications of quantum computing to energy systems optimization problems and to discuss some of the challenges faced by quantum computers, along with techniques to overcome them. The basic concepts underlying quantum computation and their distinctive characteristics in comparison to their classical counterparts are also discussed. Along with a description of the hardware architectures of two commercially available quantum systems, an example making use of open-source software tools is provided as a first step toward programming quantum computers for solving systems optimization problems. The trade-offs between the qualities of these two quantum architectures are also discussed. The complex nature of energy systems, owing to their structure and large number of design and operational constraints, makes energy systems optimization a hard problem for most available algorithms. Problems such as facility location allocation for energy systems infrastructure development, unit commitment in electric power system operations, and heat exchanger network synthesis, which fall under the category of energy systems optimization, are solved using both classical algorithms implemented on conventional CPU-based computers and quantum algorithms realized on quantum computing hardware. Their designs, implementations, and results are presented. Additionally, this paper describes the limitations of state-of-the-art quantum computers and their great potential to impact the field of energy systems optimization.
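As a minimal illustration of the kind of open-source workflow described here (a sketch of the general approach, not the paper's own example), a small facility-siting-style decision can be cast as a QUBO and solved with the open-source dimod package; the costs and penalty weight below are illustrative assumptions, and on quantum hardware a D-Wave sampler would replace the brute-force reference solver.

```python
# Minimal sketch: encode "open exactly one of three candidate sites" as a QUBO
# and solve it with dimod. Costs and penalty weight are illustrative assumptions.
import dimod

# Binary variables: x_i = 1 if candidate site i is opened.
opening_cost = {'x0': 2.0, 'x1': 3.0, 'x2': 2.5}

# Penalty enforcing that exactly one site is opened:
#   P * (x0 + x1 + x2 - 1)^2  expands (with x_i^2 = x_i) into
#   linear terms -P per variable, quadratic terms +2P per pair, constant +P.
P = 10.0
linear = {v: c - P for v, c in opening_cost.items()}
quadratic = {('x0', 'x1'): 2 * P, ('x0', 'x2'): 2 * P, ('x1', 'x2'): 2 * P}
offset = P

bqm = dimod.BinaryQuadraticModel(linear, quadratic, offset, dimod.BINARY)

# Brute-force reference solver; on quantum hardware a D-Wave sampler
# would be substituted at this step.
best = dimod.ExactSolver().sample(bqm).first
print(best.sample, best.energy)   # expected: only the cheapest site x0 is opened
```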
With quantum computing technologies nearing the era of commercialization and quantum supremacy, machine learning (ML) appears as one of the promising killer applications. Despite significant effort, there has been a disconnect between most quantum ML proposals, the needs of ML practitioners, and the capabilities of near-term quantum devices to demonstrate quantum enhancement in the near future. In this contribution to the focus collection on What would you do with 1000 qubits?, we provide concrete examples of intractable ML tasks that could be enhanced with near-term devices. We argue that to reach this target, the focus should be on areas where ML researchers are struggling, such as generative models in unsupervised and semi-supervised learning, instead of the popular and more tractable supervised learning techniques. We also highlight the case of classical datasets with potential quantum-like statistical correlations where quantum models could be more suitable. We focus on hybrid quantum-classical approaches and illustrate some of the key challenges we foresee for near-term implementations. Finally, we introduce the quantum-assisted Helmholtz machine (QAHM), an attempt to use near-term quantum devices to tackle high-dimensional datasets of continuous variables. Instead of using quantum computers to assist deep learning, as previous approaches do, the QAHM uses deep learning to extract a low-dimensional binary representation of data, suitable for relatively small quantum processors which can assist the training of an unsupervised generative model. Although we illustrate this concept on a quantum annealer, other quantum platforms could benefit as well from this hybrid quantum-classical framework.
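The hybrid pattern described here can be sketched in a few lines of classical code (a conceptual stand-in, not the QAHM itself): a classical map compresses high-dimensional data to a short binary code, a pairwise model is fit over that code, and a sampler over the code assists the generative step. A classical Gibbs sampler plays the role the quantum annealer would play; the sizes, the random projection, and the moment-matching "training" are illustrative assumptions.

```python
# Conceptual stand-in for the hybrid generative pattern described above.
import numpy as np

rng = np.random.default_rng(0)
data = rng.normal(size=(500, 32))            # stand-in for a high-dimensional dataset

# "Deep learning" stand-in: a fixed random projection plus sign threshold
# mapping each 32-dimensional sample to a 6-bit binary latent code.
W = rng.normal(size=(32, 6))
latents = (data @ W > 0).astype(int)

# Crude "training" stand-in: estimate biases and couplings of a pairwise
# (Boltzmann-like) model over the latent bits from first and second moments.
mean = latents.mean(axis=0)
corr = (latents.T @ latents) / len(latents)
h = np.log(np.clip(mean, 1e-3, 1 - 1e-3) / np.clip(1 - mean, 1e-3, 1))
J = corr - np.outer(mean, mean)

def gibbs_sample(h, J, n_steps=200):
    """Classical Gibbs sampler standing in for the quantum sampling step."""
    z = rng.integers(0, 2, size=len(h))
    for _ in range(n_steps):
        for i in range(len(h)):
            field = h[i] + J[i] @ z - J[i, i] * z[i]   # exclude self-coupling
            z[i] = rng.random() < 1.0 / (1.0 + np.exp(-field))
    return z

print(gibbs_sample(h, J))                    # one sample from the latent prior
```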
Recent computations involving quantum processing units (QPUs) have demonstrated a series of challenges inherent to hybrid classical-quantum programming, compilation, execution, and verification and validation. Despite considerable progress, system-level noise, limited low-level instruction sets, remote access models, and an overall lack of portability and classical integration present near-term programming challenges that must be overcome in order to enable reliable scientific quantum computing and support robust hardware benchmarking. In this work, we draw on our experience in programming QPUs to identify common concerns and challenges, and detail best practices for mitigating these challenges within the current hybrid classical-quantum computing paradigm. Following this discussion, we introduce the XACC quantum compilation and execution framework as a hardware- and language-agnostic solution that addresses many of these hybrid programming challenges. XACC supports extensible methodologies for managing a variety of programming, compilation, and execution concerns across the increasingly diverse set of QPUs. We use recent nuclear physics simulations to illustrate how the framework mitigates programming, compilation, and execution challenges and manages the complex workflow present in QPU-enhanced scientific applications. Finally, we codify the resulting hybrid scientific computing workflow in order to identify key areas requiring future improvement.
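The hybrid classical-quantum workflow at issue can be made concrete with a framework-agnostic sketch (this is not XACC's API): a parameterized circuit is executed, an expectation value is returned to a classical optimizer, and the parameters are updated. Below, a one-qubit statevector simulated with numpy stands in for the QPU execution step.

```python
# Framework-agnostic sketch of a hybrid classical-quantum (VQE-style) loop.
import numpy as np

def expectation_z(theta):
    """Stand-in for QPU execution: prepare RY(theta)|0> and measure <Z>."""
    state = np.array([np.cos(theta / 2), np.sin(theta / 2)])
    return state[0] ** 2 - state[1] ** 2      # <Z> = P(0) - P(1) = cos(theta)

# Classical optimizer: gradient descent with the parameter-shift rule,
# the same structure a QPU-enhanced scientific application would use.
theta, lr = 0.3, 0.4
for step in range(50):
    grad = 0.5 * (expectation_z(theta + np.pi / 2) - expectation_z(theta - np.pi / 2))
    theta -= lr * grad                         # minimize <Z>
print(theta, expectation_z(theta))             # approaches theta = pi, <Z> = -1
```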
The practical use of many types of near-term quantum computers requires accounting for their limited connectivity. One way of overcoming limited connectivity is to insert swaps in the circuit so that logical operations can be performed on physically adjacent qubits, which we refer to as solving the 'routing via matchings' problem. We address the routing problem for families of quantum circuits defined by a hypergraph wherein each hyperedge corresponds to a potential gate. Our main result is that any unordered set of $k$-qubit gates on distinct $k$-qubit subsets of $n$ logical qubits can be ordered and parallelized in $O(n^{k-1})$ depth using a linear arrangement of $n$ physical qubits; the construction is completely general and achieves optimal scaling in the case where gates acting on all $\binom{n}{k}$ sets of $k$ qubits are desired. We highlight two classes of problems for which our method is particularly useful. First, it applies to sets of mutually commuting gates, as in the (diagonal) phase separators of Quantum Alternating Operator Ansatz (Quantum Approximate Optimization Algorithm) circuits. For example, a single level of a QAOA circuit for Maximum Cut can be implemented in linear depth, and a single level for $3$-SAT in quadratic depth. Second, it applies to sets of gates that do not commute but for which compilation efficiency is the dominant criterion in their ordering. In particular, it can be adapted to Trotterized time-evolution of fermionic Hamiltonians under the Jordan-Wigner transformation, and also to non-standard mixers in QAOA. Using our method, a single Trotter step of the electronic structure Hamiltonian in an arbitrary basis of $n$ orbitals can be done in $O(n^3)$ depth while a Trotter step of the unitary coupled cluster singles and doubles method can be implemented in $O(n^2 \eta)$ depth, where $\eta$ is the number of electrons.
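For the $k = 2$ case, the construction reduces to the familiar odd-even "swap network" on a line of $n$ physical qubits: within $n$ layers of matchings every pair of logical qubits becomes adjacent exactly once, so all $\binom{n}{2}$ two-qubit gates can be routed in linear depth. The short sketch below generates and checks such a schedule; it is generic and assumes no particular hardware or gate set.

```python
# Generate the layers of an odd-even swap network on a line of n qubits and
# verify that every pair of logical qubits meets (becomes adjacent) once.
from itertools import combinations

def swap_network_layers(n):
    """Yield, per layer, the adjacent (logical_a, logical_b) pairs that meet."""
    order = list(range(n))                    # logical qubit at each physical site
    for layer in range(n):
        start = layer % 2                     # alternate even/odd matchings
        pairs = []
        for i in range(start, n - 1, 2):
            pairs.append((order[i], order[i + 1]))          # gate acts here, then swap
            order[i], order[i + 1] = order[i + 1], order[i]
        yield pairs

n = 6
met = set()
for layer, pairs in enumerate(swap_network_layers(n)):
    met.update(frozenset(p) for p in pairs)
    print(f"layer {layer}: {pairs}")

assert met == {frozenset(p) for p in combinations(range(n), 2)}   # all C(n, 2) pairs met
```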
Computing has dramatically changed nearly every aspect of our lives, from business and agriculture to communication and entertainment. As a nation, we rely on computing in the design of systems for energy, transportation, and defense; and computing fuels scientific discoveries that will improve our fundamental understanding of the world and help develop solutions to major challenges in health and the environment. Computing has changed our world, in part, because our innovations can run on computers whose performance and cost-performance have improved a million-fold over the last few decades. A driving force behind this has been the repeated doubling of transistors per chip, dubbed Moore's Law. A concomitant enabler has been Dennard scaling, which has permitted these performance doublings at roughly constant power, but, as we will see, both trends face challenges. Consider for a moment the impact of these two trends over the past 30 years. A 1980s supercomputer (e.g., a Cray 2) was rated at nearly 2 Gflops and consumed nearly 200 kW of power. At the time, it was used for high-performance and national-scale applications ranging from weather forecasting to nuclear weapons research. A computer of similar performance now fits in our pocket and consumes less than 10 watts. What would be the implications of a similar computing/power reduction over the next 30 years - that is, taking a petaflop-scale machine (e.g., the Cray XK7, which requires about 500 kW for 1 Pflop (= 10^15 operations/sec) of performance) and repeating that process? What is possible with such a computer in your pocket? How would it change the landscape of high-capacity computing? In the remainder of this paper, we articulate some opportunities and challenges for dramatic performance improvements of both personal- and national-scale computing, and discuss some out-of-the-box possibilities for achieving computing at this scale.
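A quick back-of-the-envelope calculation, using only the figures quoted above (all approximate), makes the scale of the efficiency gap explicit.

```python
# Back-of-the-envelope arithmetic using only the figures quoted in the text.
cray2_flops, cray2_watts = 2e9, 200e3        # ~2 Gflops at ~200 kW (1980s Cray 2)
pocket_flops, pocket_watts = 2e9, 10         # similar performance at <10 W today
xk7_flops, xk7_watts = 1e15, 500e3           # ~1 Pflop at ~500 kW (Cray XK7)

efficiency_gain = (pocket_flops / pocket_watts) / (cray2_flops / cray2_watts)
print(f"flops-per-watt improvement since the Cray 2: ~{efficiency_gain:,.0f}x")

# Repeating that improvement would put a petaflop-scale machine at roughly:
print(f"hypothetical future power for 1 Pflop: ~{xk7_watts / efficiency_gain:.0f} W")
```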