
Optimal Sampling Algorithms for Block Matrix Multiplication

Published by: Dr. Hanyu Li
Publication date: 2021
Research field: Informatics Engineering
Paper language: English





In this paper, we investigate randomized algorithms for block matrix multiplication from a random sampling perspective. Based on the A-optimal design criterion, we derive the optimal sampling probabilities and sampling block sizes. To improve the practicality of the block sizes, we provide two modified versions with lower computational cost; for the second one, we also devise a two-step algorithm. Moreover, we give probability error bounds for the proposed algorithms. Extensive numerical results show that our methods outperform the existing method in the literature.
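Below is a minimal sketch of the sampling estimator this abstract builds on: approximate AB by drawing blocks of the shared dimension at random and averaging the rescaled block products. The paper's A-optimal probabilities and block sizes are not reproduced here; the sketch falls back on the classical choice of probabilities proportional to the product of block Frobenius norms, and all function and parameter names are illustrative.

```python
import numpy as np

def sampled_block_matmul(A, B, block_size, num_samples, rng=None):
    """Approximate A @ B by randomly sampling blocks of the inner dimension.

    Blocks are sampled with probability proportional to ||A_k||_F * ||B_k||_F,
    the classical norm-based choice (the paper derives A-optimal probabilities
    and block sizes instead).
    """
    rng = np.random.default_rng(rng)
    n = A.shape[1]
    # Partition the inner dimension into contiguous blocks.
    blocks = [slice(s, min(s + block_size, n)) for s in range(0, n, block_size)]
    w = np.array([np.linalg.norm(A[:, blk]) * np.linalg.norm(B[blk, :])
                  for blk in blocks])
    p = w / w.sum()
    # Average the rescaled block products; the estimator is unbiased:
    # E[A_k B_k / p_k] = sum_k A_k B_k = A @ B.
    est = np.zeros((A.shape[0], B.shape[1]))
    for k in rng.choice(len(blocks), size=num_samples, p=p):
        blk = blocks[k]
        est += (A[:, blk] @ B[blk, :]) / (num_samples * p[k])
    return est

rng = np.random.default_rng(0)
A, B = rng.standard_normal((200, 400)), rng.standard_normal((400, 100))
approx = sampled_block_matmul(A, B, block_size=20, num_samples=10, rng=1)
print(np.linalg.norm(approx - A @ B) / np.linalg.norm(A @ B))
```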




Read also

Yanjun Zhang, Hanyu Li (2020)
The sampling Kaczmarz-Motzkin (SKM) method is a generalization of the randomized Kaczmarz and Motzkin methods. It first samples some rows of the coefficient matrix at random to build a set, then uses the maximum violation criterion within this set to determine a constraint, and finally makes progress by enforcing this single constraint. In this paper, building on the framework of the SKM method and considering greedy strategies, we present two block sampling Kaczmarz-Motzkin methods for consistent linear systems. Specifically, we also first sample a subset of rows of the coefficient matrix and then determine an index in this set using the maximum violation criterion. Unlike the SKM method, in the remainder of the block methods we devise different greedy strategies to build index sets. The new methods then make progress by enforcing the corresponding multiple constraints simultaneously. Theoretical analyses demonstrate that these block methods converge at least as quickly as the SKM method, and numerical experiments show that, for the same accuracy, our methods outperform the SKM method in terms of the number of iterations and computing time.
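As a rough sketch of this family of methods (not the paper's specific greedy index sets), the following implements one plausible variant: sample rows, locate the maximum violation among them, keep every sampled row whose residual comes within a factor theta of it, and project onto the selected constraints simultaneously. The greedy rule and all names here are assumptions.

```python
import numpy as np

def block_skm(A, b, x0, sample_size, theta=0.5, iters=200, rng=None):
    """A minimal block sampling Kaczmarz-Motzkin sketch for consistent Ax = b.

    Each iteration samples `sample_size` rows, then greedily keeps every
    sampled row whose residual is within a factor `theta` of the maximum
    violation, and projects onto those constraints jointly.
    """
    rng = np.random.default_rng(rng)
    x = x0.astype(float).copy()
    m = A.shape[0]
    for _ in range(iters):
        idx = rng.choice(m, size=sample_size, replace=False)
        r = np.abs(A[idx] @ x - b[idx])
        # Greedy block: rows nearly as violated as the worst sampled one.
        tau = idx[r >= theta * r.max()]
        A_tau, b_tau = A[tau], b[tau]
        # Project x onto the intersection of the selected hyperplanes.
        x += np.linalg.pinv(A_tau) @ (b_tau - A_tau @ x)
    return x

rng = np.random.default_rng(0)
A = rng.standard_normal((500, 50))
x_true = rng.standard_normal(50)
x = block_skm(A, A @ x_true, np.zeros(50), sample_size=50, rng=1)
print(np.linalg.norm(x - x_true))
```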
Given a function $u\in L^2=L^2(D,\mu)$, where $D\subset \mathbb{R}^d$ and $\mu$ is a measure on $D$, and a linear subspace $V_n\subset L^2$ of dimension $n$, we show that near-best approximation of $u$ in $V_n$ can be computed from a near-optimal budget of $Cn$ pointwise evaluations of $u$, with $C>1$ a universal constant. The sampling points are drawn according to some random distribution, the approximation is computed by a weighted least-squares method, and the error is assessed in expected $L^2$ norm. This result improves on the results in [6,8], which require a sampling budget that is sub-optimal by a logarithmic factor, thanks to a sparsification strategy introduced in [17,18]. As a consequence, we obtain for any compact class $\mathcal{K}\subset L^2$ that the sampling number $\rho_{Cn}^{\rm rand}(\mathcal{K})_{L^2}$ in the randomized setting is dominated by the Kolmogorov $n$-width $d_n(\mathcal{K})_{L^2}$. While our result shows the existence of a randomized sampling with such near-optimal properties, we discuss remaining issues concerning its generation by a computationally efficient algorithm.
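A minimal illustration of the weighted least-squares mechanism described here (without the paper's sparsification step): points are drawn from the Chebyshev (arcsine) density, a standard near-optimal sampling distribution for polynomial spaces, and weighted by $d\mu/d\sigma$ so that the weighted Gram matrix concentrates around the identity. The polynomial space, target function, and sample budget below are illustrative assumptions.

```python
import numpy as np

def weighted_ls_poly(u, n, m, rng=None):
    """Approximate u on [-1, 1] (uniform measure) in the space of
    polynomials of degree < n by weighted least squares from m samples.
    """
    rng = np.random.default_rng(rng)
    # Draw m points from the arcsine density 1/(pi*sqrt(1-x^2)):
    # X = cos(pi*U) with U uniform on (0, 1).
    x = np.cos(np.pi * rng.random(m))
    # Weight = (uniform density 1/2) / (arcsine density).
    w = 0.5 * np.pi * np.sqrt(1.0 - x ** 2)
    # Monomial basis suffices for a sketch of the mechanism.
    V = np.vander(x, n, increasing=True)
    sw = np.sqrt(w)
    coef, *_ = np.linalg.lstsq(sw[:, None] * V, sw * u(x), rcond=None)
    return coef

coef = weighted_ls_poly(np.exp, n=8, m=40, rng=0)
xs = np.linspace(-1, 1, 200)
err = np.max(np.abs(np.exp(xs) - np.vander(xs, 8, increasing=True) @ coef))
print(err)
```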
Hanyu Li, Yajie Yu (2021)
This work considers the problem of computing the CANDECOMP/PARAFAC (CP) decomposition of large tensors. One popular approach is to translate the problem into a sequence of overdetermined least squares subproblems with Khatri-Rao product (KRP) structure. In this work, for tensors whose fibers have different levels of importance, we combine stochastic optimization with randomized sampling and present a mini-batch stochastic gradient descent algorithm with importance sampling for these special least squares subproblems. Four different sampling strategies are provided; they avoid forming the full KRP or the corresponding probabilities and sample the desired fibers directly from the original tensor. Moreover, a more practical algorithm with adaptive step size is also given. For the proposed algorithms, we present their convergence properties and numerical performance. The results on synthetic data show that our algorithms outperform the existing algorithms in terms of accuracy or the number of iterations.
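The sketch below shows the core importance-sampling estimator on a plain dense least squares problem: rows are drawn with probability proportional to their squared norms and the mini-batch gradient is reweighted to stay unbiased. The paper's methods instead sample fibers of the original tensor without forming the full KRP; everything named here is a stand-in.

```python
import numpy as np

def importance_sgd_ls(A, b, batch, step, iters, rng=None):
    """Mini-batch SGD with importance sampling for min (1/2m)||Ax - b||^2.

    Rows are sampled with probability proportional to their squared norms,
    and each sampled gradient is rescaled by 1/(m * p_i) so the mini-batch
    gradient is an unbiased estimate of the full one.
    """
    rng = np.random.default_rng(rng)
    m, n = A.shape
    p = (A ** 2).sum(axis=1)
    p /= p.sum()
    x = np.zeros(n)
    for _ in range(iters):
        idx = rng.choice(m, size=batch, p=p)
        r = A[idx] @ x - b[idx]             # residuals of the mini-batch
        scale = r / (m * batch * p[idx])    # importance reweighting
        x -= step * (A[idx].T @ scale)
    return x

rng = np.random.default_rng(0)
A = rng.standard_normal((1000, 20)) * rng.uniform(0.1, 2.0, size=(1000, 1))
x_true = rng.standard_normal(20)
x = importance_sgd_ls(A, A @ x_true, batch=32, step=0.01, iters=3000, rng=1)
print(np.linalg.norm(x - x_true))
```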
Reduced model spaces, such as reduced basis and polynomial chaos, are linear spaces $V_n$ of finite dimension $n$ which are designed for the efficient approximation of families of parametrized PDEs in a Hilbert space $V$. The manifold $\mathcal{M}$ that gathers the solutions of the PDE for all admissible parameter values is globally approximated by the space $V_n$ with some controlled accuracy $\epsilon_n$, which is typically much smaller than when using standard approximation spaces of the same dimension such as finite elements. Reduced model spaces have also been proposed in [13] as a vehicle to design a simple linear recovery algorithm of the state $u\in\mathcal{M}$ corresponding to a particular solution when the values of the parameters are unknown but a set of data is given by $m$ linear measurements of the state. The measurements are of the form $\ell_j(u)$, $j=1,\dots,m$, where the $\ell_j$ are linear functionals on $V$. The analysis of this approach in [2] shows that the recovery error is bounded by $\mu_n\epsilon_n$, where $\mu_n=\mu(V_n,W)$ is the inverse of an inf-sup constant that describes the angle between $V_n$ and the space $W$ spanned by the Riesz representers of $(\ell_1,\dots,\ell_m)$. A reduced model space which is efficient for approximation might thus be ineffective for recovery if $\mu_n$ is large or infinite. In this paper, we discuss the existence and construction of an optimal reduced model space for this recovery method, and we extend our search to affine spaces. Our basic observation is that this problem is equivalent to the search for an optimal affine algorithm for the recovery of $\mathcal{M}$ in the worst-case error sense. This allows us to perform our search by a convex optimization procedure. Numerical tests illustrate that the reduced model spaces constructed by our approach perform better than classical reduced basis spaces.
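As a hedged sketch of the recovery setup (in a discretized Hilbert space $V=\mathbb{R}^N$, not the paper's optimized affine spaces): given orthonormal bases of $V_n$ and of the measurement space $W$, one can reconstruct in $V_n$ by least squares on the cross-Gram matrix, whose smallest singular value gives the stability constant $\mu_n$ from the text. All dimensions and names below are illustrative.

```python
import numpy as np

def recover_in_Vn(Vn, W, y):
    """Recover a state from m linear measurements y_j = <w_j, u>.

    Vn : (N, n) orthonormal basis of the reduced space V_n.
    W  : (N, m) orthonormal basis of the measurement (Riesz) space W.
    Returns the least-squares reconstruction in V_n and the stability
    constant mu_n = 1/sigma_min(W^T Vn), the inverse inf-sup constant.
    """
    G = W.T @ Vn                       # cross-Gram matrix between W and V_n
    c, *_ = np.linalg.lstsq(G, y, rcond=None)
    mu_n = 1.0 / np.linalg.svd(G, compute_uv=False).min()
    return Vn @ c, mu_n

rng = np.random.default_rng(0)
N, n, m = 200, 5, 12
Vn, _ = np.linalg.qr(rng.standard_normal((N, n)))
W, _ = np.linalg.qr(rng.standard_normal((N, m)))
# A state close to V_n, observed only through its m measurements W^T u.
u = Vn @ rng.standard_normal(n) + 1e-3 * rng.standard_normal(N)
u_hat, mu_n = recover_in_Vn(Vn, W, W.T @ u)
print(np.linalg.norm(u - u_hat), mu_n)
```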
The eigenvectors of the particle number operator in second quantization are characterized by the block sparsity of their matrix product state representations. This is shown to generalize to other classes of operators. Imposing block sparsity yields a scheme for conserving the particle number that is commonly used in applications in physics. Operations on such block structures, their rank truncation, and implications for numerical algorithms are discussed. Explicit and rank-reduced matrix product operator representations of one- and two-particle operators are constructed that operate only on the non-zero blocks of matrix product states.
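The block structure referred to here can be illustrated, in heavily simplified form, in the full Hilbert space rather than in the MPS cores themselves: grouping occupation basis states by total particle number makes any number-conserving operator block diagonal. The toy script below (spinless hopping on four sites; all names are assumptions) checks this; the paper carries the same structure into the individual matrix product state and operator blocks.

```python
import numpy as np
from itertools import product

# Occupation basis of L modes (0 or 1 particle per site).
L = 4
basis = list(product((0, 1), repeat=L))
index = {s: i for i, s in enumerate(basis)}

# Nearest-neighbour hopping moves one particle, so it commutes
# with the total particle number operator.
H = np.zeros((len(basis), len(basis)))
for i, s in enumerate(basis):
    for j in range(L - 1):
        if s[j] == 1 and s[j + 1] == 0:
            t = list(s); t[j], t[j + 1] = 0, 1
            H[index[tuple(t)], i] = H[i, index[tuple(t)]] = 1.0

# Reordering the basis by total particle number exposes the block
# structure: one diagonal block per particle-number sector.
order = sorted(range(len(basis)), key=lambda i: sum(basis[i]))
Hs = H[np.ix_(order, order)]
sector = np.array([sum(basis[i]) for i in order])
off_block = Hs[sector[:, None] != sector[None, :]]
print(np.abs(off_block).max())  # 0.0: no coupling between sectors
```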