
Linear Regression by Quantum Amplitude Estimation and its Extension to Convex Optimization

Published by: Koichi Miyamoto
Publication date: 2021
Research field: Physics
Language: English





Linear regression is a basic and widely used methodology in data analysis. It is known that some quantum algorithms efficiently perform least-squares linear regression on an exponentially large data set. However, if we want to obtain the values of the regression coefficients as classical data, the complexity of the existing quantum algorithms can be larger than that of the classical method, because it depends strongly on the tolerance error $\epsilon$: the best among the existing proposals is $O(\epsilon^{-2})$. In this paper, we propose a new quantum algorithm for linear regression which has complexity $O(\epsilon^{-1})$ and keeps the logarithmic dependence on the number of data points $N_D$. In this method, we overcome the bottleneck parts of the calculation, which take the form of sums over the data points and therefore have complexity proportional to $N_D$, using quantum amplitude estimation, and perform the other parts classically. Additionally, we generalize our method to some classes of convex optimization problems.
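In the least-squares setting, the sums over data points that the abstract refers to include the entries of $X^T X$ and $X^T y$; once those are estimated to accuracy $\epsilon$, the remaining $d \times d$ normal-equations solve is cheap and classical. The sketch below is a purely classical illustration of that structure, not the paper's quantum circuit: the hypothetical helper `qae_estimate_sum` stands in for quantum amplitude estimation, which would return each normalized sum to accuracy of order $\epsilon$ using $O(\epsilon^{-1})$ oracle calls, and is mimicked here by Gaussian noise of scale $\epsilon$.

```python
import numpy as np

rng = np.random.default_rng(0)

def qae_estimate_sum(values, eps):
    """Stand-in for QAE: estimate mean(values) with error of order eps."""
    return np.mean(values) + eps * rng.standard_normal()

def regression_via_estimated_sums(X, y, eps=1e-3):
    n_d, d = X.shape
    # Estimate each entry of X^T X / N_D and X^T y / N_D "by QAE";
    # the common 1/N_D factor cancels in the linear solve below.
    A = np.array([[qae_estimate_sum(X[:, i] * X[:, j], eps) for j in range(d)]
                  for i in range(d)])
    b = np.array([qae_estimate_sum(X[:, i] * y, eps) for i in range(d)])
    # The remaining d x d solve is done classically.
    return np.linalg.solve(A, b)

# Toy usage: recover coefficients of a small synthetic data set.
X = rng.standard_normal((10_000, 3))
true_coef = np.array([1.5, -2.0, 0.7])
y = X @ true_coef + 0.01 * rng.standard_normal(10_000)
print(regression_via_estimated_sums(X, y))
```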


Read also

In this paper, we study extended linear regression approaches for quantum state tomography based on regularization techniques. For unknown quantum states represented by density matrices, performing measurements under certain bases yields random outcomes, from which a classical linear regression model can be established. First, for complete or over-complete measurement bases, we show that the empirical data can be utilized to construct a weighted least squares estimate (LSE) for quantum tomography. Taking into consideration the trace-one condition, a constrained weighted LSE can be explicitly computed, and it is the optimal unbiased estimate among all linear estimators. Next, for general measurement bases, we show that $\ell_2$-regularization with a proper regularization gain provides an even lower mean-square error at a cost in bias. The regularization parameter is tuned by two estimators in terms of a risk characterization. Finally, a concise and unified formula is established for the regularization parameter with a complete measurement basis under an equivalent regression model, which proves that the proposed tuning estimators are asymptotically optimal as the number of samples grows to infinity under the risk metric. Additionally, numerical examples are provided to validate the established results.
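As a point of reference for the $\ell_2$-regularization discussed above, the sketch below computes a generic ridge estimate $\hat{\beta}(\lambda) = (A^T A + \lambda I)^{-1} A^T b$ on a synthetic linear model. It is not the paper's constrained weighted LSE or its tuning rule; it only illustrates how the regularization gain $\lambda$ trades bias for lower variance.

```python
import numpy as np

def ridge_estimate(A, b, lam):
    """l2-regularized least squares: argmin ||A beta - b||^2 + lam ||beta||^2."""
    d = A.shape[1]
    return np.linalg.solve(A.T @ A + lam * np.eye(d), A.T @ b)

rng = np.random.default_rng(1)
A = rng.standard_normal((200, 16))                    # measurement design matrix
beta_true = rng.standard_normal(16) / 4.0             # e.g. expansion coefficients of the state
b = A @ beta_true + 0.1 * rng.standard_normal(200)    # noisy outcome frequencies

for lam in (0.0, 1.0, 10.0):
    err = np.linalg.norm(ridge_estimate(A, b, lam) - beta_true)
    print(f"lambda={lam:5.1f}  error={err:.4f}")
```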
Koichi Miyamoto, 2021
Pricing of financial derivatives, in particular early-exercisable options such as Bermudan options, is an important but heavy numerical task in financial institutions, and its speed-up will provide a large business impact. Recently, applications of quantum computing to financial problems have started to be investigated. In this paper, we first propose a quantum algorithm for Bermudan option pricing. This method approximates the continuation value, which is a crucial part of Bermudan option pricing, by Chebyshev interpolation, using the values at interpolation nodes estimated by quantum amplitude estimation. In this method, the number of calls to the oracle that generates underlying asset price paths scales as $\widetilde{O}(\epsilon^{-1})$, where $\epsilon$ is the error tolerance of the option price. This means a quadratic speed-up compared with classical Monte Carlo-based methods such as least-squares Monte Carlo, in which the number of oracle calls is $\widetilde{O}(\epsilon^{-2})$.
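To make the interpolation step concrete, the sketch below fits a Chebyshev interpolant through noisy node values of a toy continuation-value function using `numpy.polynomial.chebyshev`. In the proposed algorithm those node values would come from quantum amplitude estimation; here that is emulated by adding noise of scale `eps` to a hypothetical smooth target.

```python
import numpy as np
from numpy.polynomial import chebyshev as C

rng = np.random.default_rng(2)
eps = 1e-3
degree = 8
# Chebyshev nodes in [-1, 1]
nodes = np.cos((2 * np.arange(degree + 1) + 1) * np.pi / (2 * (degree + 1)))

def continuation_value(x):
    """Hypothetical smooth target standing in for the true continuation value."""
    return np.maximum(np.exp(0.3 * x) - 1.0, 0.0)

# Node values "estimated by QAE": exact values plus noise of scale eps.
noisy_node_values = continuation_value(nodes) + eps * rng.standard_normal(nodes.shape)
coeffs = C.chebfit(nodes, noisy_node_values, degree)   # Chebyshev coefficients
grid = np.linspace(-1, 1, 5)
print(C.chebval(grid, coeffs))                          # interpolated continuation values
```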
In this paper we derive, from simple and reasonable assumptions, a Gaussian noise model for NISQ Quantum Amplitude Estimation (QAE). We provide results from QAE runs on various IBM superconducting quantum computers and Honeywell's H1 trapped-ion quantum computer to show that the proposed model is a good fit for real-world experimental data. We then give an example of how to embed this noise model into any NISQ QAE algorithm, such that the amplitude estimation is noise-aware.
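A minimal illustration of what "noise-aware" can mean in practice: assuming, hypothetically, that the hit rate observed after $k$ Grover iterations is $\sin^2((2k+1)\theta)$ plus Gaussian noise of known width $\sigma$, one can estimate $\theta$ by maximizing the corresponding Gaussian likelihood. The paper's actual noise model and embedding may differ from this toy version.

```python
import numpy as np

rng = np.random.default_rng(3)
theta_true, sigma = 0.35, 0.02
ks = np.array([0, 1, 2, 4, 8])   # Grover powers actually run on the device
# Observed hit rates: ideal signal plus Gaussian noise of width sigma.
observed = np.sin((2 * ks + 1) * theta_true) ** 2 + sigma * rng.standard_normal(ks.shape)

def neg_log_likelihood(theta):
    """Gaussian negative log-likelihood (up to a constant) under the toy noise model."""
    model = np.sin((2 * ks + 1) * theta) ** 2
    return np.sum((observed - model) ** 2) / (2 * sigma ** 2)

# Grid search over theta in (0, pi/2) to avoid local minima in this small demo.
thetas = np.linspace(1e-3, np.pi / 2 - 1e-3, 4000)
theta_hat = thetas[int(np.argmin([neg_log_likelihood(t) for t in thetas]))]
print("estimated amplitude:", np.sin(theta_hat) ** 2, "true:", np.sin(theta_true) ** 2)
```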
We study to what extent quantum algorithms can speed up the solution of convex optimization problems. Following the classical literature, we assume access to a convex set via various oracles, and we examine the efficiency of reductions between the different oracles. In particular, we show how a separation oracle can be implemented using $\tilde{O}(1)$ quantum queries to a membership oracle, which is an exponential quantum speed-up over the $\Omega(n)$ membership queries that are needed classically. We show that a quantum computer can very efficiently compute an approximate subgradient of a convex Lipschitz function. Combining this with a simplification of recent classical work of Lee, Sidford, and Vempala gives our efficient separation oracle. This in turn implies, via a known algorithm, that $\tilde{O}(n)$ quantum queries to a membership oracle suffice to implement an optimization oracle (the best known classical upper bound on the number of membership queries is quadratic). We also prove several lower bounds: $\Omega(\sqrt{n})$ quantum separation (or membership) queries are needed for optimization if the algorithm knows an interior point of the convex set, and $\Omega(n)$ quantum separation queries are needed if it does not.
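For orientation, the object the quantum routine computes "very efficiently" is an approximate subgradient of a convex Lipschitz function obtained from function evaluations alone. The classical sketch below shows that object via central finite differences, which costs $2n$ evaluations in dimension $n$; it is a classical baseline for comparison, not the quantum construction.

```python
import numpy as np

def approx_subgradient(f, x, r=1e-4):
    """Central finite-difference approximation to a (sub)gradient of f at x."""
    n = x.size
    g = np.empty(n)
    for i in range(n):
        e = np.zeros(n)
        e[i] = r
        g[i] = (f(x + e) - f(x - e)) / (2 * r)
    return g

f = lambda x: np.max(np.abs(x))        # convex, Lipschitz, non-smooth
x0 = np.array([0.3, -0.7, 0.1])
print(approx_subgradient(f, x0))       # approximately [0, -1, 0], a valid subgradient
```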
We explore whether quantum advantages can be found for the zeroth-order online convex optimization problem, which is also known as bandit convex optimization with multi-point feedback. In this setting, given access to zeroth-order oracles (that is, the loss function is accessed as a black box that returns the function value for any queried input), a player attempts to minimize a sequence of adversarially generated convex loss functions. This procedure can be described as a $T$-round iterative game between the player and the adversary. In this paper, we present quantum algorithms for the problem and show for the first time that potential quantum advantages are possible for problems of online convex optimization. Specifically, our contributions are as follows. (i) When the player is allowed to query zeroth-order oracles $O(1)$ times in each round as feedback, we give a quantum algorithm that achieves $O(\sqrt{T})$ regret without additional dependence on the dimension $n$, which outperforms the known optimal classical algorithm, which only achieves $O(\sqrt{nT})$ regret. Note that the regret of our quantum algorithm matches the lower bound of classical first-order methods. (ii) We show that for strongly convex loss functions, the quantum algorithm can achieve $O(\log T)$ regret with $O(1)$ queries as well, which means that the quantum algorithm can achieve the same regret bound as the classical algorithms in the full-information setting.
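The classical baseline being compared against is the multi-point-feedback gradient estimator. The sketch below runs a two-point version with a projected gradient step on a toy convex loss; the step size, smoothing radius, and loss are illustrative choices, not the tuned values from the paper.

```python
import numpy as np

rng = np.random.default_rng(4)
n, T, delta, eta = 5, 2000, 1e-3, 0.05
x = np.zeros(n)
target = rng.standard_normal(n)                    # fixed "adversary" for this demo
loss = lambda z, t: np.sum((z - target) ** 2)      # toy convex loss at round t

total_regret = 0.0
for t in range(T):
    u = rng.standard_normal(n)
    u /= np.linalg.norm(u)
    # Two-point feedback gradient estimate: (n / 2 delta) [f(x+du) - f(x-du)] u.
    grad_est = n * (loss(x + delta * u, t) - loss(x - delta * u, t)) / (2 * delta) * u
    total_regret += loss(x, t) - loss(target, t)
    x = np.clip(x - eta * grad_est, -2.0, 2.0)     # projected gradient step
print("average regret:", total_regret / T)
```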