
Generalized swap networks for near-term quantum computing

Added by Bryan O'Gorman
Publication date: 2019
Language: English





The practical use of many types of near-term quantum computers requires accounting for their limited connectivity. One way of overcoming limited connectivity is to insert swaps in the circuit so that logical operations can be performed on physically adjacent qubits, which we refer to as solving the `routing via matchings' problem. We address the routing problem for families of quantum circuits defined by a hypergraph wherein each hyperedge corresponds to a potential gate. Our main result is that any unordered set of $k$-qubit gates on distinct $k$-qubit subsets of $n$ logical qubits can be ordered and parallelized in $O(n^{k-1})$ depth using a linear arrangement of $n$ physical qubits; the construction is completely general and achieves optimal scaling in the case where gates acting on all $\binom{n}{k}$ sets of $k$ qubits are desired. We highlight two classes of problems for which our method is particularly useful. First, it applies to sets of mutually commuting gates, as in the (diagonal) phase separators of Quantum Alternating Operator Ansatz (Quantum Approximate Optimization Algorithm) circuits. For example, a single level of a QAOA circuit for Maximum Cut can be implemented in linear depth, and a single level for $3$-SAT in quadratic depth. Second, it applies to sets of gates that do not commute but for which compilation efficiency is the dominant criterion in their ordering. In particular, it can be adapted to Trotterized time-evolution of fermionic Hamiltonians under the Jordan-Wigner transformation, and also to non-standard mixers in QAOA. Using our method, a single Trotter step of the electronic structure Hamiltonian in an arbitrary basis of $n$ orbitals can be done in $O(n^3)$ depth while a Trotter step of the unitary coupled cluster singles and doubles method can be implemented in $O(n^2 \eta)$ depth, where $\eta$ is the number of electrons.
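To make the construction concrete, consider the $k=2$ case on a line of $n$ qubits: alternating layers of nearest-neighbor swaps (the brick-wall swap network used, for example, in QAOA Maximum Cut phase separators) bring every pair of logical qubits into adjacency exactly once within $n$ layers, giving the linear depth quoted above. The following Python sketch (the function name and bookkeeping are ours, not the paper's) tracks the logical-to-physical assignment and verifies this property:

    from itertools import combinations

    def linear_swap_network(n):
        # order[p] = logical qubit currently at physical position p
        order = list(range(n))
        met = set()  # logical pairs that have been physically adjacent
        for layer in range(n):
            # alternate between even-indexed and odd-indexed neighbor pairs
            for p in range(layer % 2, n - 1, 2):
                a, b = order[p], order[p + 1]
                met.add(frozenset((a, b)))       # the 2-qubit gate acts here...
                order[p], order[p + 1] = b, a    # ...fused with a SWAP
        return met

    n = 6
    pairs = linear_swap_network(n)
    assert pairs == {frozenset(c) for c in combinations(range(n), 2)}
    print(f"all {len(pairs)} pairs of {n} qubits made adjacent in {n} layers")

The same bookkeeping generalizes to $k > 2$, where the paper's construction cycles blocks of qubits past each other to realize all $\binom{n}{k}$ subsets in $O(n^{k-1})$ depth.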



Related research

Recent computations involving quantum processing units (QPUs) have demonstrated a series of challenges inherent to hybrid classical-quantum programming, compilation, execution, and verification and validation. Despite considerable progress, system-level noise, limited low-level instruction sets, remote access models, and an overall lack of portability and classical integration present near-term programming challenges that must be overcome in order to enable reliable scientific quantum computing and support robust hardware benchmarking. In this work, we draw on our experience in programming QPUs to identify common concerns and challenges, and detail best practices for mitigating these challenges within the current hybrid classical-quantum computing paradigm. Following this discussion, we introduce the XACC quantum compilation and execution framework as a hardware- and language-agnostic solution that addresses many of these hybrid programming challenges. XACC supports extensible methodologies for managing a variety of programming, compilation, and execution concerns across the increasingly diverse set of QPUs. We use recent nuclear physics simulations to illustrate how the framework mitigates programming, compilation, and execution challenges and manages the complex workflow present in QPU-enhanced scientific applications. Finally, we codify the resulting hybrid scientific computing workflow in order to identify key areas requiring future improvement.
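For a sense of the programming model, here is a minimal sketch in the style of XACC's documented Python bindings; the accelerator name ('qpp', a bundled simulator), the options dictionary, and the buffer method names are assumptions that may vary across XACC versions and installations:

    import xacc

    # Choose a backend by name; simulators and vendor QPUs sit behind the
    # same Accelerator interface, which is what makes kernels portable.
    qpu = xacc.getAccelerator('qpp', {'shots': 1024})
    qubits = xacc.qalloc(2)

    # The decorator compiles the kernel body once and targets it at the
    # selected accelerator at call time.
    @xacc.qpu(accelerator=qpu)
    def bell(q):
        H(q[0])
        CX(q[0], q[1])
        Measure(q[0])
        Measure(q[1])

    bell(qubits)
    print(qubits.getMeasurementCounts())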
With quantum computing technologies nearing the era of commercialization and quantum supremacy, machine learning (ML) appears as one of the promising killer applications. Despite significant effort, there has been a disconnect between most quantum ML proposals, the needs of ML practitioners, and the capabilities of near-term quantum devices to demonstrate quantum enhancement in the near future. In this contribution to the focus collection on What would you do with 1000 qubits?, we provide concrete examples of intractable ML tasks that could be enhanced with near-term devices. We argue that to reach this target, the focus should be on areas where ML researchers are struggling, such as generative models in unsupervised and semi-supervised learning, instead of the popular and more tractable supervised learning techniques. We also highlight the case of classical datasets with potential quantum-like statistical correlations where quantum models could be more suitable. We focus on hybrid quantum-classical approaches and illustrate some of the key challenges we foresee for near-term implementations. Finally, we introduce the quantum-assisted Helmholtz machine (QAHM), an attempt to use near-term quantum devices to tackle high-dimensional datasets of continuous variables. Instead of using quantum computers to assist deep learning, as previous approaches do, the QAHM uses deep learning to extract a low-dimensional binary representation of data, suitable for relatively small quantum processors which can assist the training of an unsupervised generative model. Although we illustrate this concept on a quantum annealer, other quantum platforms could benefit as well from this hybrid quantum-classical framework.
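The division of labor behind the QAHM can be sketched in a few lines: a pretrained classical encoder maps high-dimensional continuous data to short binary codes, and a quantum sampler models the prior over those codes. Everything below is a hypothetical illustration, not the paper's implementation; in particular, sample_latent_prior stands in for samples that would come from the quantum annealer:

    import numpy as np

    rng = np.random.default_rng(0)

    def encode_to_binary(x, W, b):
        # Sigmoid activations give Bernoulli probabilities, from which a
        # stochastic binary code is drawn (one layer of a deep encoder).
        p = 1.0 / (1.0 + np.exp(-(x @ W + b)))
        return (rng.random(p.shape) < p).astype(np.int8)

    def sample_latent_prior(n_samples, n_bits):
        # Placeholder: on hardware, these configurations would be drawn
        # from the quantum device's (approximately Boltzmann) distribution.
        return rng.integers(0, 2, size=(n_samples, n_bits), dtype=np.int8)

    # Toy usage: compress 64-dimensional points into 8-bit codes small
    # enough for a modest quantum processor to model.
    X = rng.normal(size=(5, 64))
    W = rng.normal(scale=0.1, size=(64, 8))   # stand-in for trained weights
    codes = encode_to_binary(X, W, np.zeros(8))
    print(codes.shape, sample_latent_prior(4, 8).shape)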
We present a synthesis framework to map logic networks into quantum circuits for quantum computing. The synthesis framework is based on LUT networks (lookup-table networks), which play a key role in conventional logic synthesis. Establishing a connection between LUTs in a LUT network and reversible single-target gates in a reversible network allows us to bridge conventional logic synthesis with logic synthesis for quantum computing, despite several fundamental differences. We call our synthesis framework LUT-based Hierarchical Reversible Logic Synthesis (LHRS). Input to LHRS is a classical logic network; output is a quantum network (realized in terms of Clifford+$T$ gates). The framework offers a trade-off between the number of qubits and the number of quantum gates. In the first step, an initial network is derived that consists only of single-target gates and already completely determines the number of qubits in the final quantum network. Different methods are then used to map each single-target gate into Clifford+$T$ gates, while aiming at optimal use of available resources. We demonstrate the effectiveness of our method in automatically synthesizing IEEE compliant floating point networks up to double precision. As many quantum algorithms target scientific simulation applications, they can make rich use of floating point arithmetic components. But due to the lack of quantum circuit descriptions for those components, it can be difficult to find a realistic cost estimation for the algorithms. Our synthesized benchmarks provide cost estimates that allow quantum algorithm designers to give the first complete cost estimates for a host of quantum algorithms. Thus, the benchmarks and, more generally, the LHRS framework are an essential step towards the goal of understanding which quantum algorithms will be practical in the first generations of quantum computers.
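The bridge described above rests on one primitive: a reversible single-target gate flips its target line exactly when a Boolean control function over the other lines evaluates to 1, which is also precisely what a LUT tabulates. A classical Python sketch of that correspondence (names ours, not LHRS's):

    def single_target_gate(bits, controls, target, f):
        # Flip `target` iff the control function f is 1 on the control
        # bits; the operation is self-inverse, hence reversible.
        if f(*(bits[c] for c in controls)):
            bits[target] ^= 1
        return bits

    # A 3-input LUT computing majority, realized as a single-target gate
    # acting on a zero-initialized target (ancilla) line:
    maj = lambda a, b, c: (a & b) | (a & c) | (b & c)
    state = [1, 0, 1, 0]
    print(single_target_gate(state, controls=[0, 1, 2], target=3, f=maj))
    # -> [1, 0, 1, 1]; the target now holds MAJ(1, 0, 1) = 1

Decomposing each such gate into Clifford+$T$ gates is then the per-gate mapping step the abstract describes.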
The concept of quantum computing has inspired a whole new generation of scientists, including physicists, engineers, and computer scientists, to fundamentally change the landscape of information technology. With experimental demonstrations stretching back more than two decades, the quantum computing community has achieved a major milestone over the past few years: the ability to build systems that are stretching the limits of what can be classically simulated, and which enable cloud-based research for a wide range of scientists, thus increasing the pool of talent exploring early quantum systems. While such noisy near-term quantum computing systems fall far short of the requirements for fault-tolerant systems, they provide unique testbeds for exploring the opportunities for quantum applications. Here we highlight the facets associated with these systems, including quantum software, cloud access, benchmarking quantum systems, error correction and mitigation in such systems, and understanding the complexity of quantum circuits and how early quantum applications can run on near-term quantum computers.
We present a number of quantum computing patterns that build on top of fundamental algorithms and that can be applied to solving concrete, NP-hard problems. In particular, we introduce the concept of a quantum dictionary as a summation of multiple patterns and algorithms, and show how it can be applied in the context of Quadratic Unconstrained Binary Optimization (QUBO) problems. We start by presenting a visual approach to quantum computing, which avoids a heavy reliance on quantum mechanics, linear algebra, or complex mathematical notation, and favors geometrical intuition and computing paradigms. We also provide insights into the fundamental quantum computing algorithms (Fourier Transforms, Phase Estimation, Grover, Quantum Counting, and Amplitude Estimation).
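As a concrete anchor for the QUBO setting, the table that a quantum dictionary would hold in superposition is just the map from bitstrings to objective values $x^T Q x$. A brute-force Python sketch (all names are ours; the example $Q$ encodes Maximum Cut on a triangle as minimization of minus the cut size):

    import itertools
    import numpy as np

    def qubo_value(Q, x):
        # QUBO objective x^T Q x for a binary vector x
        x = np.asarray(x)
        return float(x @ Q @ x)

    def brute_force_qubo(Q):
        # Classically tabulate and minimize what the quantum-dictionary
        # pattern would encode into a register's amplitudes/phases.
        n = Q.shape[0]
        return min(itertools.product((0, 1), repeat=n),
                   key=lambda x: qubo_value(Q, x))

    # Maximum Cut on a triangle, written as minimizing -cut(x):
    Q = np.array([[-2.,  1.,  1.],
                  [ 1., -2.,  1.],
                  [ 1.,  1., -2.]])
    best = brute_force_qubo(Q)
    print(best, qubo_value(Q, best))   # -> (0, 0, 1) -2.0, a cut of size 2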
