
Cost function embedding and dataset encoding for machine learning with parameterized quantum circuits

Submitted by Shuxiang Cao
Publication date: 2019
Research field: Physics
Paper language: English





Machine learning is seen as a promising application of quantum computation. For near-term noisy intermediate-scale quantum (NISQ) devices, parameterized quantum circuits (PQCs) have been proposed as machine learning models because of their robustness and ease of implementation. However, the cost function is normally calculated classically from repeated measurement outcomes, so it is no longer encoded in a quantum state; this prevents the value from being manipulated directly by a quantum computer. To solve this problem, we give a routine that embeds the machine-learning cost function into a quantum circuit and accepts a training dataset encoded in superposition or in an easily preparable mixed state. We also demonstrate the ability to evaluate the gradient of the encoded cost function in a quantum state.
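To make the measurement-based baseline concrete, here is a minimal numpy sketch (not the authors' construction) of a one-qubit PQC model whose cost is assembled classically from measurement probabilities, which is exactly the step the paper proposes to keep inside the quantum circuit. The circuit layout, toy dataset, and parameter scan are illustrative assumptions.

```python
import numpy as np

def ry(theta):
    """Single-qubit rotation R_y(theta)."""
    return np.array([[np.cos(theta / 2), -np.sin(theta / 2)],
                     [np.sin(theta / 2),  np.cos(theta / 2)]])

def pqc_output(x, theta):
    """Encode scalar x with R_y(x), apply trainable R_y(theta), return P(|1>)."""
    state = ry(theta) @ ry(x) @ np.array([1.0, 0.0])
    return np.abs(state[1]) ** 2

def cost(theta, xs, ys):
    """Classically accumulated mean-squared error over the training set.
    This is the step that leaves the quantum register: each P(|1>) is a
    measurement estimate, and the averaging is done on a classical computer."""
    preds = np.array([pqc_output(x, theta) for x in xs])
    return np.mean((preds - ys) ** 2)

# Toy dataset (assumption): small angles map to label 0, large angles to 1.
xs = np.array([0.1, 0.2, 2.9, 3.0])
ys = np.array([0.0, 0.0, 1.0, 1.0])

# Crude parameter scan standing in for a gradient-based optimizer.
thetas = np.linspace(-np.pi, np.pi, 201)
best = thetas[np.argmin([cost(t, xs, ys) for t in thetas])]
print(f"best theta = {best:.3f}, cost = {cost(best, xs, ys):.4f}")
```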




Read also

Kernel methods are powerful for machine learning, as they can represent data in feature spaces in which similarities between samples are faithfully captured. Recently, it has been realized that machine learning enhanced by quantum computing is closely related to kernel methods, where the exponentially large Hilbert space serves as a feature space more expressive than classical ones. In this paper, we generalize quantum kernel methods by encoding data into continuous-variable quantum states, which can benefit from the infinite-dimensional Hilbert space of continuous variables. Specifically, we propose squeezed-state encoding, in which data is encoded in either the amplitude or the phase. The kernels can be calculated on a quantum computer and then combined with classical machine learning, e.g. a support vector machine, for training and prediction tasks. Comparisons with classical kernels are also addressed. Lastly, we discuss physical implementations of squeezed-state encoding for machine learning on quantum platforms such as trapped ions.
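As a rough illustration of the pipeline (not this paper's exact protocol), the sketch below builds a squeezed-state kernel classically by truncating the Fock expansion of a squeezed vacuum state, encodes each scalar feature in the squeezing amplitude, and hands the precomputed Gram matrix to scikit-learn's SVC. The Fock cutoff, encoding scale, and toy dataset are assumptions.

```python
import numpy as np
from math import factorial
from sklearn.svm import SVC

CUTOFF = 40  # Fock-space truncation (assumption; ample for small squeezing)

def squeezed_vacuum(r, cutoff=CUTOFF):
    """Fock-basis amplitudes of a single-mode squeezed vacuum S(r)|0>, phase 0."""
    psi = np.zeros(cutoff)
    for n in range(cutoff // 2):
        psi[2 * n] = ((-np.tanh(r)) ** n * np.sqrt(float(factorial(2 * n)))
                      / (2 ** n * factorial(n)))
    return psi / np.sqrt(np.cosh(r))

def kernel(x1, x2, scale=0.5):
    """Quantum kernel: squared overlap of the two squeezed-state encodings."""
    return np.abs(squeezed_vacuum(scale * x1) @ squeezed_vacuum(scale * x2)) ** 2

# Toy 1-D dataset with two classes (illustrative only).
X = np.array([-2.0, -1.5, -1.0, 1.0, 1.5, 2.0])
y = np.array([0, 0, 0, 1, 1, 1])

G = np.array([[kernel(a, b) for b in X] for a in X])   # Gram matrix
clf = SVC(kernel="precomputed").fit(G, y)
print("training accuracy:", clf.score(G, y))
```

On hardware, each entry of the Gram matrix would instead be estimated from an overlap measurement, while the SVC step stays classical.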
The classification of big data usually requires a mapping onto new data clusters which can then be processed by machine learning algorithms by means of more efficient and feasible linear separators. Recently, Lloyd et al. have advanced the proposal to embed classical data into quantum states: these live in a more complex Hilbert space where they can be split into linearly separable clusters. Here, we implement these ideas by engineering two different experimental platforms, based on quantum optics and ultra-cold atoms respectively, where we adapt and numerically optimize the quantum embedding protocol by deep learning methods and test it on some trial classical data. We also perform a similar analysis on the Rigetti superconducting quantum computer. We find that the quantum embedding approach works successfully at the experimental level as well and, in particular, we show how different platforms could work in a complementary fashion to achieve this task. These studies might pave the way for future investigations of quantum machine learning techniques, especially those based on hybrid quantum technologies.
Efficient methods for loading given classical data into quantum circuits are essential for various quantum algorithms. In this paper, we propose a variational algorithm that can effectively load all the components of a given real-valued data vector into the amplitudes of a quantum state, whereas the previous proposal can only load the absolute values of those components. The key of our algorithm is to variationally train a shallow parameterized quantum circuit using the results of two types of measurement: the standard computational-basis measurement plus a measurement in the Hadamard-transformed basis, introduced in order to handle the signs of the data components. The variational algorithm adjusts the circuit parameters so as to minimize the sum of two costs corresponding to those two measurement bases, both of which are given by the efficiently computable maximum mean discrepancy. We also consider the problem of constructing the singular-value-decomposition entropy of a stock market dataset as a financial market indicator; a quantum algorithm (the variational singular value decomposition algorithm) is known to produce a solution faster than classical methods, yet it requires sign-dependent amplitude encoding. We demonstrate, with an in-depth numerical analysis, that our algorithm loads a time series of real stock prices into a quantum state with small approximation error, and thereby enables constructing an indicator of the financial market based on the stock prices.
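The role of the second, Hadamard-transformed measurement basis can be seen in a short numpy sketch (an illustration of the idea, not the proposed variational algorithm): for a real amplitude vector, computational-basis probabilities are blind to sign flips, while the Hadamard-basis probabilities are not. The example vectors are assumptions.

```python
import numpy as np

def hadamard_n(n):
    """n-qubit Hadamard transform (n-fold tensor power of H) as a dense matrix."""
    H1 = np.array([[1.0, 1.0], [1.0, -1.0]]) / np.sqrt(2.0)
    H = np.array([[1.0]])
    for _ in range(n):
        H = np.kron(H, H1)
    return H

def basis_probabilities(amplitudes):
    """Computational-basis and Hadamard-basis outcome probabilities."""
    a = np.asarray(amplitudes, dtype=float)
    a = a / np.linalg.norm(a)                     # normalise to a valid state
    n = int(np.log2(a.size))
    p_z = a ** 2                                  # standard measurement: loses the sign
    p_x = (hadamard_n(n) @ a) ** 2                # Hadamard-transformed measurement
    return p_z, p_x

# Two real data vectors that differ only in one sign (illustrative).
v_plus  = [0.5,  0.5, 0.5, 0.5]
v_minus = [0.5, -0.5, 0.5, 0.5]

pz1, px1 = basis_probabilities(v_plus)
pz2, px2 = basis_probabilities(v_minus)
print("Z-basis probs equal:       ", np.allclose(pz1, pz2))   # True: sign invisible
print("Hadamard-basis probs equal:", np.allclose(px1, px2))   # False: sign revealed
```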
Parameterized quantum circuits (PQCs), which are essential for variational quantum algorithms, have conventionally been optimized through parameterized rotational angles of single-qubit gates around a predetermined set of axes. We propose a new method to optimize a PQC by continuous parameterization of both the angles and the axes of its single-qubit rotations. The method is based on the observation that when the rotational angles are fixed, optimal axes of rotation can be computed by solving a system of linear equations whose coefficients can be determined from the PQC with small computational overhead. The method can be further simplified to select axes freely from continuous parameters with the rotational angles fixed to $\pi$. We show that the simplified free-axis selection method has better expressibility than other structural optimization methods when measured by Kullback-Leibler (KL) divergence. We also demonstrate that PQCs with free-axis selection are more effective at finding the ground states of Heisenberg models and molecular Hamiltonians. Because free-axis selection allows designing PQCs without specifying their single-qubit rotational axes, it may significantly improve the usability of PQCs.
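A small sketch of the parameterization itself (not the paper's linear-system solver): a single-qubit rotation about an arbitrary axis n with the angle fixed to $\pi$ reduces to $-i\,(n\cdot\sigma)$, so the axis components become the only trainable parameters. The example axis below is a placeholder.

```python
import numpy as np

# Pauli matrices
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)
I = np.eye(2, dtype=complex)

def rot(axis, theta):
    """General single-qubit rotation R_n(theta) = exp(-i theta n.sigma / 2)."""
    n = np.asarray(axis, dtype=float)
    n = n / np.linalg.norm(n)                       # unit rotation axis
    n_sigma = n[0] * X + n[1] * Y + n[2] * Z
    return np.cos(theta / 2) * I - 1j * np.sin(theta / 2) * n_sigma

def free_axis_gate(axis):
    """Angle fixed to pi: the gate is -i (n.sigma); only the axis is trained."""
    return rot(axis, np.pi)

# Example: a free-axis gate whose axis interpolates between X and Z (placeholder).
G = free_axis_gate([1.0, 0.0, 1.0])
print(np.round(G, 3))
print("unitary:", np.allclose(G.conj().T @ G, I))
```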
Noise mitigation and reduction will be crucial for obtaining useful answers from near-term quantum computers. In this work, we present a general framework based on machine learning for reducing the impact of quantum hardware noise on quantum circuits. Our method, called noise-aware circuit learning (NACL), applies to circuits designed to compute a unitary transformation, prepare a set of quantum states, or estimate an observable of a many-qubit state. Given a task and a device model that captures information about the noise and connectivity of qubits in a device, NACL outputs an optimized circuit to accomplish this task in the presence of noise. It does so by minimizing a task-specific cost function over circuit depths and circuit structures. To demonstrate NACL, we construct circuits resilient to a fine-grained noise model derived from gate set tomography on a superconducting-circuit quantum device, for applications including quantum state overlap, the quantum Fourier transform, and W-state preparation.
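To indicate what minimizing a task-specific cost in the presence of noise can look like at toy scale (this is not the NACL implementation), the sketch below scores two one-qubit state-preparation circuits under a per-gate depolarizing channel and ranks them by infidelity with the target; the noise rate, target state, and candidate circuits are assumptions.

```python
import numpy as np

H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)
T = np.diag([1, np.exp(1j * np.pi / 4)]).astype(complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)

def depolarize(rho, p):
    """Single-qubit depolarizing channel applied after every gate."""
    return (1 - p) * rho + (p / 3) * (X @ rho @ X + Y @ rho @ Y + Z @ rho @ Z)

def run(gates, p):
    """Apply a gate list to |0><0|, with depolarizing noise after each gate."""
    rho = np.array([[1, 0], [0, 0]], dtype=complex)
    for g in gates:
        rho = depolarize(g @ rho @ g.conj().T, p)
    return rho

# Task: prepare |+> = H|0>. Candidate circuits of different depth (placeholders);
# both act identically without noise, but the deeper one accumulates more error.
target = H @ np.array([1, 0], dtype=complex)
candidates = {"H": [H], "H-T-Tdg": [H, T, T.conj().T]}

p = 0.02  # assumed per-gate depolarizing rate
for name, gates in candidates.items():
    rho = run(gates, p)
    fid = np.real(target.conj() @ rho @ target)   # fidelity with the pure target
    print(f"{name:8s} cost = {1 - fid:.4f}")       # noise-aware cost = infidelity
```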