Within the last decade, much progress has been made in the experimental realisation of quantum computing hardware based on a variety of physical systems. Rapid progress has been fuelled by the conviction that sufficiently powerful quantum machines will herald enormous computational advantages in many fields, including chemical research. A quantum computer capable of simulating the electronic structures of complex molecules would be a game changer for the design of new drugs and materials. Given the potential implications of this technology, there is a need within the chemistry community to keep abreast of the latest developments and to become involved in experimentation with quantum prototypes. To facilitate this, we review here the types of quantum computing hardware that have been made available to the public through cloud services. We focus on three architectures: superconductors, trapped ions and semiconductors. For each, we summarise the basic physical operations, requirements and performance. We discuss to what extent each system has been used for molecular chemistry problems and highlight the most pressing hardware issues to be solved for a chemistry-relevant quantum advantage to eventually emerge.
We describe the hardware, gateware, and software developed at Raytheon BBN Technologies for dynamic quantum information processing experiments on superconducting qubits. In dynamic experiments, real-time qubit state information is fed back or fed forward within a fraction of the qubits' coherence time to dynamically change the implemented sequence. The hardware presented here covers both control and readout of superconducting qubits. For readout, we created a custom signal-processing gateware and software stack on commercial hardware that converts pulses from a heterodyne receiver into qubit state assignments with minimal latency, alongside data-taking capability. For control, we developed custom hardware with gateware and software for pulse sequencing and steering-information distribution that is capable of arbitrary control flow within a fraction of superconducting qubit coherence times. Both readout and control platforms make extensive use of FPGAs to enable tailored qubit control systems in a reconfigurable fabric suitable for iterative development.
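To make the feedback/feedforward idea concrete, the following minimal Python sketch models a readout result steering the next control pulse (active reset) within the coherence time. This is a toy illustration, not the BBN gateware: the functions, thresholds, and latency numbers are assumptions introduced here for clarity.

```python
# Conceptual sketch of real-time feedback: the measured qubit state decides
# whether a pi pulse is played, and the whole loop must fit well inside T2.
# All names (integrate_readout, threshold, play_pulse) and numbers are
# illustrative assumptions, not the actual hardware stack described above.
import random

T2_US = 50.0                # assumed qubit coherence time (microseconds)
FEEDBACK_LATENCY_US = 0.4   # assumed readout + decision + pulse round trip

def integrate_readout() -> float:
    """Stand-in for the heterodyne receiver: integrated signal projected to 1D."""
    return random.gauss(0.0, 1.0) + (1.0 if random.random() < 0.5 else -1.0)

def threshold(signal: float) -> int:
    """Hard state assignment (0 or 1) from the integrated signal."""
    return 1 if signal > 0.0 else 0

def play_pulse(name: str) -> None:
    print(f"play {name}")

def active_reset(max_rounds: int = 3) -> None:
    """Feed the measured state back: apply a pi pulse only if the qubit is in |1>."""
    assert FEEDBACK_LATENCY_US < 0.1 * T2_US, "feedback must fit well inside T2"
    for _ in range(max_rounds):
        if threshold(integrate_readout()) == 0:
            return              # already in |0>, nothing to do
        play_pulse("X180")      # conditional branch: flip |1> back to |0>

active_reset()
```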
The quantum computation of electronic energies can break the curse of dimensionality that plagues many-particle quantum mechanics. It is for this reason that a universal quantum computer has the potential to fundamentally change computational chemistry and materials science, areas in which strong electron correlations present severe hurdles for traditional electronic structure methods. Here, we present a state-of-the-art analysis of accurate energy measurements on a quantum computer for computational catalysis, using improved quantum algorithms with more than an order of magnitude improvement over the best previous algorithms. As a prototypical example of local catalytic chemical reactivity we consider the case of a ruthenium catalyst that can bind, activate, and transform carbon dioxide to the high-value chemical methanol. We aim at accurate resource estimates for the quantum computing steps required for assessing the electronic energy of key intermediates and transition states of its catalytic cycle. In particular, we present new quantum algorithms for double-factorized representations of the four-index integrals that can significantly reduce the computational cost over previous algorithms, and we discuss the challenges of increasing active space sizes to accurately deal with dynamical correlations. We address the requirements for future quantum hardware in order to make a universal quantum computer a successful and reliable tool for quantum computing enhanced computational materials science and chemistry, and identify open questions for further research.
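As an illustration of the double-factorized representation of the four-index integrals mentioned above, the sketch below factorizes a synthetic, symmetric four-index tensor in two stages: an eigendecomposition of the tensor reshaped as an (n² x n²) matrix, followed by an eigendecomposition of each retained "leaf" matrix. The tensor, sizes, and cutoff are assumptions for the demonstration and are not taken from the paper; real molecular integrals would come from a quantum chemistry package.

```python
# Hedged numerical sketch of double factorization.  g[p,q,r,s] is represented
# here by the (n^2 x n^2) matrix G; the tensor is synthetic, not a molecular one.
import numpy as np

rng = np.random.default_rng(7)
n = 6                                      # number of orbitals (illustrative)

# Synthetic tensor with the required symmetries: G = sum_t vec(A_t) vec(A_t)^T
# with each A_t symmetric, so G is symmetric and positive semidefinite.
A = rng.standard_normal((8, n, n))
A = A + A.transpose(0, 2, 1)
G = sum(a.reshape(-1, 1) @ a.reshape(1, -1) for a in A)

# First factorization: eigendecompose G and keep leaves above a cutoff.
w, V = np.linalg.eigh(G)
keep = w > 1e-10 * w.max()
leaves = [(wt, V[:, i].reshape(n, n)) for i, wt in zip(np.where(keep)[0], w[keep])]

# Second factorization: diagonalize each leaf, W^(t) = U^(t) diag(lam^(t)) U^(t)^T.
double_factorized = []
for wt, W in leaves:
    lam, U = np.linalg.eigh(W)
    double_factorized.append((wt, lam, U))

# Reconstruct g from the double-factorized form and check the error:
# g[pqrs] ~= sum_t wt * (U diag(lam) U^T)[pq] * (U diag(lam) U^T)[rs]
G_rec = sum(wt * np.outer((U * lam) @ U.T, (U * lam) @ U.T)
            for wt, lam, U in double_factorized)
print("leaves kept:", len(double_factorized))
print("reconstruction error:", np.max(np.abs(G - G_rec)))
```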
Heterogeneous high-performance computing (HPC) systems offer novel architectures which accelerate specific workloads through judicious use of specialized coprocessors. A promising architectural approach for future scientific computations is provided by heterogeneous HPC systems integrating quantum processing units (QPUs). To this end, we present XACC (eXtreme-scale ACCelerator) --- a programming model and software framework that enables quantum acceleration within standard or HPC software workflows. XACC follows a coprocessor machine model that is independent of the underlying quantum computing hardware, thereby enabling quantum programs to be defined and executed on a variety of QPU types through a unified application programming interface. Moreover, XACC defines a polymorphic low-level intermediate representation and an extensible compiler frontend that enables language-independent quantum programming, thus promoting integration and interoperability across the quantum programming landscape. In this work, we define the software architecture enabling our hardware- and language-independent approach, and we demonstrate its usefulness across a range of quantum computing models through illustrative examples involving the compilation and execution of gate-based and annealing-based quantum programs.
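The coprocessor model and the polymorphic intermediate representation can be pictured with a small Python mock. This is an illustrative sketch of the pattern the abstract describes, not XACC's actual API: the class names, the toy IR, and the backends are assumptions introduced here.

```python
# Illustrative mock of a hardware-agnostic "quantum coprocessor" API: one
# program object in a shared IR, handed to interchangeable backends through a
# single interface.  A real framework would lower the IR per backend model.
from abc import ABC, abstractmethod
from dataclasses import dataclass

@dataclass
class Instruction:
    name: str
    targets: tuple

Program = list  # toy IR: an ordered list of Instruction objects

class Accelerator(ABC):
    """Unified interface every QPU (or simulator) backend implements."""
    @abstractmethod
    def execute(self, program: Program) -> dict: ...

class GateSimulator(Accelerator):
    def execute(self, program: Program) -> dict:
        # Stand-in for a gate-model backend: count the gates touching each qubit.
        counts = {}
        for inst in program:
            for q in inst.targets:
                counts[q] = counts.get(q, 0) + 1
        return {"gates_per_qubit": counts}

class Annealer(Accelerator):
    def execute(self, program: Program) -> dict:
        # Stand-in for an annealing backend consuming the same IR.
        return {"num_terms": len(program)}

def get_accelerator(name: str) -> Accelerator:
    return {"gate-sim": GateSimulator(), "annealer": Annealer()}[name]

bell = [Instruction("H", (0,)), Instruction("CX", (0, 1)),
        Instruction("Measure", (0,)), Instruction("Measure", (1,))]

# The same program object runs on interchangeable backends.
for backend in ("gate-sim", "annealer"):
    print(backend, get_accelerator(backend).execute(bell))
```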
Extensive quantum error correction is necessary in order to scale quantum hardware to the regime of practical applications. As a result, a significant amount of decoding hardware is needed to process the colossal amount of data required to constantly detect and correct errors occurring over the millions of physical qubits driving the computation. Implementing a recent, highly optimized version of Shor's algorithm to factor a 2,048-bit integer would require more than 7 Tbit/s of bandwidth for the sole purpose of quantum error correction, and up to 20,000 decoding units. To reduce the decoding hardware requirements, we propose a fault-tolerant quantum computing architecture based on surface codes with a cheap hard-decision decoder, the lazy decoder, combined with a sophisticated decoding unit that takes care of complex error configurations. Our design drops the decoding hardware requirements by several orders of magnitude, assuming sufficiently good qubits are provided. Given qubits and quantum gates with a physical error rate $p=10^{-4}$, the lazy decoder reduces both the bandwidth requirements and the number of decoding units by a factor of 50. Provided very good qubits with error rate $p=10^{-5}$, we obtain a 1,500x reduction in bandwidth and decoding hardware thanks to the lazy decoder. Finally, the lazy decoder can be used as a decoder accelerator. Our simulations show a 10x speed-up of the Union-Find decoder and a 50x speed-up of the Minimum Weight Perfect Matching decoder.
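The lazy-decoder idea of a cheap first pass that defers hard cases can be sketched in a few lines of Python. The sketch below is a deliberate simplification on a 1D repetition code rather than a surface code; the error model, geometry, and the "defer to a full decoder" figure of merit are assumptions made for illustration only.

```python
# Simplified sketch of lazy decoding: correct only syndromes explained by
# isolated single-qubit errors, and defer everything else to a heavyweight
# decoder (Union-Find, MWPM, ...).  Toy 1D repetition code for brevity.
import random

def noisy_codeword(n: int, p: float) -> list:
    """Physical error pattern on n data qubits (1 = flipped)."""
    return [1 if random.random() < p else 0 for _ in range(n)]

def syndrome(errors: list) -> list:
    """Parity checks between neighbouring data qubits."""
    return [errors[i] ^ errors[i + 1] for i in range(len(errors) - 1)]

def lazy_decode(syn: list):
    """Return a correction if the syndrome is 'easy', otherwise None (defer)."""
    defects = [i for i, s in enumerate(syn) if s]
    correction, i = [], 0
    while i < len(defects):
        # Easy case: two adjacent defects are explained by one flipped qubit
        # between them.  Any other configuration is deferred.
        if i + 1 < len(defects) and defects[i + 1] == defects[i] + 1:
            correction.append(defects[i] + 1)
            i += 2
        else:
            return None
    return correction

# Estimate how often the cheap pass suffices at a given physical error rate.
n, p, trials = 50, 1e-3, 10_000
deferred = sum(lazy_decode(syndrome(noisy_codeword(n, p))) is None for _ in range(trials))
print(f"deferred to full decoder: {deferred / trials:.2%}")
```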
Quantum bits have technological imperfections. Additionally, the capacity of a component that can be implemented feasibly is limited. Therefore, distributed quantum computation is required to scale up quantum computers. This dissertation presents a new quantum computer architecture that takes these imperfections into account and aims to realize distributed computation by connecting quantum computers, each of which consists of multiple quantum CPUs and memories. Quantum CPUs employ a quantum error-correcting code with faster logical gates, while quantum memories employ a code with smaller space resource requirements. The dissertation focuses on quantum error-correcting codes, giving a practical, concrete method for tolerating static losses, such as faulty devices, in the surface code. Numerical simulation under practical assumptions showed that a functional-qubit yield of 90% is marginally sufficient for building large-scale systems, provided the poorer 50% of lattices are culled during post-fabrication testing. A yield of 80% is not usable even when 90% of generated lattices are culled. For internal connections within a quantum computer and for connections between quantum computers, the dissertation gives a fault-tolerant method that bridges heterogeneous quantum error-correcting codes. Numerical simulation showed that the scheme, which discards any quantum state in which an error is detected, always achieves an adequate logical error rate regardless of physical error rates, in exchange for increased resource consumption. The dissertation also gives a new extension of the surface code suitable for memories. This code is shown to require fewer physical qubits to encode a logical qubit than conventional codes, achieving a 50% reduction in physical qubits per logical qubit. Collectively, these elements are brought together to propose the distributed quantum computer architecture.
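The yield-and-culling procedure summarised above can be mimicked with a short Monte-Carlo sketch: fabricate many lattices whose qubits each work with a given probability, discard the worst fraction, and look at what survives. The lattice size, trial count, and the "faulty qubits per kept lattice" metric are assumptions for this illustration, not the dissertation's actual simulation.

```python
# Hedged Monte-Carlo sketch of post-fabrication culling under a given qubit yield.
import random

def fabricate(n_qubits: int, yield_: float) -> int:
    """Number of faulty qubits on one fabricated lattice."""
    return sum(random.random() >= yield_ for _ in range(n_qubits))

def cull(yield_: float, cull_fraction: float,
         n_qubits: int = 1000, lattices: int = 2000) -> float:
    """Average faulty-qubit count over the best (1 - cull_fraction) of lattices."""
    faults = sorted(fabricate(n_qubits, yield_) for _ in range(lattices))
    kept = faults[: int(len(faults) * (1.0 - cull_fraction))]
    return sum(kept) / len(kept)

for y, c in [(0.90, 0.50), (0.80, 0.90)]:
    print(f"yield {y:.0%}, cull {c:.0%}: "
          f"{cull(y, c):.1f} faulty qubits per kept lattice (avg)")
```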