
Measurement-based adaptation protocol with quantum reinforcement learning in a Rigetti quantum computer

Posted by Lucas Lamata
Publication date: 2018
Research field: Physics
Paper language: English





We present an experimental realization of a measurement-based adaptation protocol with quantum reinforcement learning in a Rigetti cloud quantum computer. The experiment in this few-qubit superconducting chip faithfully reproduces the theoretical proposal, setting the first steps towards a semiautonomous quantum agent. This experiment paves the way towards quantum reinforcement learning with superconducting circuits.
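As an illustration of the adaptation loop behind this experiment, the following is a minimal NumPy sketch of the protocol logic: the agent state is compared with an unknown environment state, a simulated register measurement yields a reward or a punishment, and punished steps apply a partially random rotation whose range shrinks with rewards. The contraction factor, rotation parametrization, and iteration count are illustrative choices, not the values used on the Rigetti chip.

```python
# Idealized statevector sketch of the measurement-based adaptation loop,
# simulated in NumPy rather than run on Rigetti hardware. Update rule and
# numerical factors are illustrative, not the paper's exact parameters.
import numpy as np

rng = np.random.default_rng(0)

def rand_state():
    """Draw a Haar-random single-qubit pure state."""
    v = rng.normal(size=2) + 1j * rng.normal(size=2)
    return v / np.linalg.norm(v)

def rotation(theta, phi):
    """Rotation by angle theta about an axis in the xy-plane at azimuth phi."""
    axis = np.cos(phi) * np.array([[0, 1], [1, 0]]) + \
           np.sin(phi) * np.array([[0, -1j], [1j, 0]])
    return np.cos(theta / 2) * np.eye(2) - 1j * np.sin(theta / 2) * axis

env = rand_state()                 # unknown environment state to be learned
agent = np.array([1.0 + 0j, 0.0])  # agent starts in |0>
delta = np.pi                      # exploration range of the random updates

for step in range(300):
    overlap = abs(np.vdot(env, agent)) ** 2
    rewarded = rng.random() < overlap            # simulated register readout
    if rewarded:
        delta *= 0.8                             # exploit: shrink the range
    else:
        delta = min(np.pi, delta / 0.8)          # explore: widen it again
        theta = delta * (2 * rng.random() - 1)   # partially random update
        phi = 2 * np.pi * rng.random()
        agent = rotation(theta, phi) @ agent

print(f"final fidelity |<env|agent>|^2 = {abs(np.vdot(env, agent))**2:.4f}")
```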




Read also

We study the quantum synchronization between a pair of two-level systems inside two coupled cavities. By using a digital-analog decomposition of the master equation that rules the system dynamics, we show that this approach leads to quantum synchronization between both two-level systems. Moreover, we can identify in this digital-analog block decomposition the fundamental elements of a quantum machine learning protocol, in which the agent and the environment (learning units) interact through a mediating system, namely, the register. If we can additionally equip this algorithm with a classical feedback mechanism, which consists of projective measurements in the register, reinitialization of the register state and local conditional operations on the agent and environment subspace, a powerful and flexible quantum machine learning protocol emerges. Indeed, numerical simulations show that this protocol enhances the synchronization process, even when every subsystem experiences different loss/decoherence mechanisms, and gives us the flexibility to choose the synchronization state. Finally, we propose an implementation based on current technologies in superconducting circuits.
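The QuTiP sketch below is not the digital-analog, measurement-feedback protocol of this work; it simulates a simpler model, two detuned qubits decaying through a shared collective channel, only to illustrate how a synchronization witness (here the late-time correlation of the <sigma_x> trajectories) can be extracted from two-qubit dynamics. Frequencies, rates, and initial states are illustrative.

```python
# Two slightly detuned qubits with a common (collective) loss channel;
# the late-time Pearson correlation of <sigma_x>(t) serves as a simple
# phase-locking witness. All parameters are illustrative.
import numpy as np
import qutip as qt

w1, w2 = 1.00, 1.05      # slightly detuned qubit frequencies
gamma = 0.05             # rate of the shared decay channel

sz1 = qt.tensor(qt.sigmaz(), qt.qeye(2))
sz2 = qt.tensor(qt.qeye(2), qt.sigmaz())
sx1 = qt.tensor(qt.sigmax(), qt.qeye(2))
sx2 = qt.tensor(qt.qeye(2), qt.sigmax())
sm1 = qt.tensor(qt.sigmam(), qt.qeye(2))
sm2 = qt.tensor(qt.qeye(2), qt.sigmam())

H = 0.5 * w1 * sz1 + 0.5 * w2 * sz2
c_ops = [np.sqrt(gamma) * (sm1 + sm2)]   # loss channel shared by both qubits

psi0 = qt.tensor((qt.basis(2, 0) + qt.basis(2, 1)).unit(),
                 (qt.basis(2, 0) + 1j * qt.basis(2, 1)).unit())
tlist = np.linspace(0, 200, 2000)
result = qt.mesolve(H, psi0, tlist, c_ops, e_ops=[sx1, sx2])

# Values near +1 or -1 indicate phase locking of the two trajectories.
x1, x2 = result.expect
half = len(tlist) // 2
print(f"late-time correlation: {np.corrcoef(x1[half:], x2[half:])[0, 1]:+.3f}")
```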
X.-D. Cai, D. Wu, Z.-E. Su (2014)
Machine learning, a branch of artificial intelligence, learns from previous experience to optimize performance and is ubiquitous in various fields such as computer sciences, financial analysis, robotics, and bioinformatics. A challenge is that machine learning with the rapidly growing big data could become intractable for classical computers. Recently, quantum machine learning algorithms [Lloyd, Mohseni, and Rebentrost, arXiv:1307.0411] were proposed that could offer an exponential speedup over classical algorithms. Here, we report the first experimental entanglement-based classification of 2-, 4-, and 8-dimensional vectors to different clusters using a small-scale photonic quantum computer, which is then used to implement supervised and unsupervised machine learning. The results demonstrate the working principle of using quantum computers to manipulate and classify high-dimensional vectors, the core mathematical routine in machine learning. The method can in principle be scaled to a larger number of qubits, and may provide a new route to accelerate machine learning.
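A toy sketch of the overlap-based classification idea: vectors are amplitude-encoded into normalized states and a query is assigned to the reference (cluster) state with the largest estimated overlap. The swap-test statistics, P(0) = (1 + |<u|v>|^2) / 2, are sampled directly rather than simulating the photonic circuit; the 4-dimensional vectors, shot count, and cluster labels are illustrative.

```python
# Distance/overlap-based classification via simulated swap-test statistics.
import numpy as np

rng = np.random.default_rng(1)

def amplitude_encode(x):
    """Normalize a real vector so it can be stored in qubit amplitudes."""
    x = np.asarray(x, dtype=float)
    return x / np.linalg.norm(x)

def swap_test_overlap(u, v, shots=2000):
    """Estimate |<u|v>|^2 from sampled swap-test ancilla outcomes."""
    p0 = 0.5 * (1.0 + abs(np.dot(u, v)) ** 2)     # ancilla '0' probability
    zeros = rng.binomial(shots, p0)
    return 2.0 * zeros / shots - 1.0               # invert P(0) -> overlap

# Two reference (cluster) vectors and one query vector, all 4-dimensional.
ref_a = amplitude_encode([1, 1, 0, 0])
ref_b = amplitude_encode([0, 0, 1, 1])
query = amplitude_encode([0.9, 1.1, 0.1, 0.0])

overlaps = {"A": swap_test_overlap(query, ref_a),
            "B": swap_test_overlap(query, ref_b)}
label = max(overlaps, key=overlaps.get)
print(overlaps, "->", label)
```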
Phosphorus donor spins in silicon offer a number of promising characteristics for the implementation of robust qubits. Amongst various concepts for scale-up, the shared-control concept takes advantage of 3D scanning tunnelling microscope (STM) fabrication techniques to minimise the number of control lines, allowing the donors to be placed at the pitch limit of $\geq$30 nm, enabling dipole interactions. A fundamental challenge is to exploit the faster exchange interaction; however, the donor spacings required are typically 15 nm or less, and the exchange interaction is notoriously sensitive to lattice site variations in donor placement. This work presents a proposal for a fast exchange-based surface-code quantum computer architecture which explicitly addresses both donor placement imprecision commensurate with the atomic-precision fabrication techniques and the stringent qubit pitch requirements. The effective pitch is extended by incorporation of an intermediate donor acting as an exchange-interaction switch. We consider both global control schemes and a scheduled series of operations by designing GRAPE pulses for individual CNOTs based on coupling scenarios predicted by atomistic tight-binding simulations. The architecture is compatible with the existing fabrication capabilities and may serve as a blueprint for the experimental implementation of a full-scale fault-tolerant quantum computer based on donor impurities in silicon.
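As a rough, single-qubit stand-in for the GRAPE pulse design mentioned above, the sketch below optimizes a piecewise-constant sigma_x control by gradient ascent so that the total propagator approximates an X gate in the presence of a fixed detuning drift. Proper GRAPE uses analytic gradients and the paper's case involves two exchange-coupled donors and a CNOT target; the finite-difference gradient, drift strength, slice count, and learning rate here are purely illustrative.

```python
# Toy GRAPE-style optimization: gradient ascent on piecewise-constant controls.
import numpy as np

rng = np.random.default_rng(2)

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
target = sx                 # ideal X gate
drift = 0.3 * sz            # fixed detuning drift (illustrative)
n_slices, dt = 20, 0.2      # piecewise-constant pulse discretization

def propagator(controls):
    """Total unitary for the piecewise-constant sigma_x control sequence."""
    u = np.eye(2, dtype=complex)
    for c in controls:
        h = drift + c * sx
        w, v = np.linalg.eigh(h)                 # 2x2 Hermitian exponential
        u = (v @ np.diag(np.exp(-1j * w * dt)) @ v.conj().T) @ u
    return u

def fidelity(controls):
    """Phase-insensitive gate fidelity |Tr(target^dag U)|^2 / 4."""
    return abs(np.trace(target.conj().T @ propagator(controls))) ** 2 / 4

controls = 0.1 * rng.standard_normal(n_slices)   # small random initial pulse
eps, lr = 1e-4, 0.5
for _ in range(600):                             # crude gradient ascent
    base = fidelity(controls)
    grad = np.zeros(n_slices)
    for k in range(n_slices):
        bumped = controls.copy()
        bumped[k] += eps
        grad[k] = (fidelity(bumped) - base) / eps
    controls += lr * grad

print(f"gate fidelity after optimization: {fidelity(controls):.4f}")
```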
Spin qubits are contenders for scalable quantum computation because of their long coherence times demonstrated in a variety of materials, but individual control by frequency-selective addressing using pulsed spin resonance creates severe technical challenges for scaling up to many qubits. This individual resonance control strategy requires each spin to have a distinguishable frequency, imposing a maximum number of spins that can be individually driven before qubit crosstalk becomes unavoidable. Here we describe a complete strategy for controlling a large array of spins in quantum dots dressed by an on-resonance global field, namely a field that is constantly driving the spin qubits, to dynamically decouple from the effects of background magnetic field fluctuations. This approach -- previously implemented for the control of single electron spins bound to impurities -- is here harmonized with all other operations necessary for universal quantum computing with spins in quantum dots. We define the logical states as the dressed qubit states and discuss initialization and readout utilizing Pauli spin blockade, as well as single- and two-qubit control in the new basis. Finally, we critically analyze the limitations imposed by qubit variability and potential strategies to improve performance.
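The sketch below illustrates the dressing idea in its simplest form: a single qubit prepared along x is evolved under random quasi-static detunings, with and without a continuous on-resonance drive, and the surviving <sigma_x> is compared. The drive strength, noise width, and evolution time are illustrative numbers in arbitrary units, not parameters of the quantum-dot devices discussed in the paper.

```python
# Continuous driving (dressing) vs. bare evolution under quasi-static
# detuning noise; the dressed case retains far more <sigma_x> coherence.
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
plus = np.array([1, 1], dtype=complex) / np.sqrt(2)   # initial state along +x

rng = np.random.default_rng(3)
omega_drive = 1.0        # Rabi frequency of the global dressing field
noise_std = 0.05         # quasi-static detuning spread (units of the drive)
t_final = 100.0

def mean_sx(drive_on, n_samples=500):
    """Average <sigma_x>(t_final) over random quasi-static detunings."""
    total = 0.0
    for _ in range(n_samples):
        delta = rng.normal(0.0, noise_std)
        h = 0.5 * delta * sz
        if drive_on:
            h = h + 0.5 * omega_drive * sx
        w, v = np.linalg.eigh(h)
        u = v @ np.diag(np.exp(-1j * w * t_final)) @ v.conj().T
        psi = u @ plus
        total += np.real(np.conj(psi) @ sx @ psi)
    return total / n_samples

print(f"bare qubit    <sx> = {mean_sx(False):+.3f}")
print(f"dressed qubit <sx> = {mean_sx(True):+.3f}")
```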
Prediction of financial crashes in a complex financial network is known to be an NP-hard problem, which means that no known algorithm is guaranteed to find optimal solutions efficiently. We experimentally explore a novel approach to this problem by using a D-Wave quantum computer, benchmarking its performance for attaining financial equilibrium. To be specific, the equilibrium condition of a nonlinear financial model is embedded into a higher-order unconstrained binary optimization (HUBO) problem, which is then transformed to a spin-$1/2$ Hamiltonian with at most two-qubit interactions. The problem is thus equivalent to finding the ground state of an interacting spin Hamiltonian, which can be approximated with a quantum annealer. The size of the simulation is mainly constrained by the large number of physical qubits needed to represent a logical qubit with the correct connectivity. Our experiment paves the way to codify this quantitative macroeconomics problem in quantum computers.
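A compact sketch of the HUBO-to-Ising workflow described above, using the D-Wave Ocean `dimod` library: a toy cubic polynomial (not the paper's financial-equilibrium model) is reduced to a quadratic model with auxiliary variables, rewritten as a spin-1/2 Ising model, and solved by brute force as a classical stand-in for the annealer. The penalty strength used in the reduction is an illustrative choice.

```python
# HUBO -> QUBO -> Ising reduction with dimod; the polynomial is a toy example.
import dimod

# Higher-order unconstrained binary optimization (HUBO) problem:
# keys are tuples of binary variables, values are their coefficients.
hubo = {
    ("x0",): -1.0,
    ("x1", "x2"): 2.0,
    ("x0", "x1", "x2"): -3.0,   # the cubic term that needs reduction
}

# Reduce to a quadratic model by introducing auxiliary variables, then
# rewrite it as a spin-1/2 Ising model with at most two-body terms.
bqm = dimod.make_quadratic(hubo, strength=5.0, vartype=dimod.BINARY)
ising = bqm.change_vartype(dimod.SPIN, inplace=False)
print("Ising variables:", list(ising.variables))

# Brute-force ground-state search as a classical stand-in for the annealer.
best = dimod.ExactSolver().sample(bqm).first
print("lowest-energy assignment:", best.sample, "energy:", best.energy)
```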
