
Hierarchical Orthogonal Matrix Generation and Matrix-Vector Multiplications in Rigid Body Simulations

Published by: Fuhui Fang
Publication date: 2017
Language: English





In this paper, we apply the hierarchical modeling technique and study some numerical linear algebra problems arising from the Brownian dynamics simulations of biomolecular systems where molecules are modeled as ensembles of rigid bodies. Given a rigid body $p$ consisting of $n$ beads, the $6 \times 3n$ transformation matrix $Z$ that maps the force on each bead to $p$'s translational and rotational forces (a $6 \times 1$ vector), and $V$ the row space of $Z$, we show how to explicitly construct the $(3n-6) \times 3n$ matrix $\tilde{Q}$ consisting of $(3n-6)$ orthonormal basis vectors of $V^{\perp}$ (the orthogonal complement of $V$) using only $\mathcal{O}(n \log n)$ operations and storage. For applications where only the matrix-vector multiplications $\tilde{Q}\mathbf{v}$ and $\tilde{Q}^T \mathbf{v}$ are needed, we introduce asymptotically optimal $\mathcal{O}(n)$ hierarchical algorithms without explicitly forming $\tilde{Q}$. Preliminary numerical results are presented to demonstrate the performance and accuracy of the numerical algorithms.
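The sketch below is a minimal dense-algebra illustration of the objects the abstract defines, for orientation only: it builds $Z$ for a rigid body of $n$ beads and extracts $\tilde{Q}$ with a dense SVD, which costs $\mathcal{O}(n^3)$ rather than the paper's $\mathcal{O}(n \log n)$ hierarchical construction. The bead coordinates and the `skew` helper are illustrative placeholders.

```python
# Dense-algebra sketch (O(n^3) SVD), NOT the paper's O(n log n) hierarchical
# construction: build the 6 x 3n matrix Z and an orthonormal basis Q~ of
# V^perp, the orthogonal complement of Z's row space.
import numpy as np

def skew(r):
    """3x3 cross-product matrix: skew(r) @ f == np.cross(r, f)."""
    return np.array([[0.0, -r[2], r[1]],
                     [r[2], 0.0, -r[0]],
                     [-r[1], r[0], 0.0]])

n = 50                                    # number of beads (placeholder)
pos = np.random.rand(n, 3)                # bead coordinates (placeholder)

# Rows 0-2 of Z sum the bead forces (translation); rows 3-5 accumulate the
# torques r_i x f_i (rotation), so Z maps 3n bead forces to a 6-vector.
Z = np.zeros((6, 3 * n))
for i, r in enumerate(pos):
    Z[0:3, 3*i:3*i+3] = np.eye(3)
    Z[3:6, 3*i:3*i+3] = skew(r)

# Rows of Vt beyond rank(Z) = 6 span V^perp; stack them as the rows of Q~.
_, _, Vt = np.linalg.svd(Z)
Q_tilde = Vt[6:, :]                       # (3n-6) x 3n, orthonormal rows

assert np.allclose(Q_tilde @ Z.T, 0.0, atol=1e-10)          # Q~ Z^T = 0
assert np.allclose(Q_tilde @ Q_tilde.T, np.eye(3*n - 6))    # orthonormal
```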




Read also

Hierarchical matrices are space- and time-efficient representations of dense matrices that exploit the low-rank structure of matrix blocks at different levels of granularity. The hierarchically low-rank block partitioning produces representations that can be stored and operated on in near-linear complexity instead of the usual polynomial complexity of dense matrices. In this paper, we present high-performance implementations of matrix-vector multiplication and compression operations for the $\mathcal{H}^2$ variant of hierarchical matrices on GPUs. This variant exploits, in addition to the hierarchical block partitioning, hierarchical bases for the block representations, and results in a scheme that requires only $O(n)$ storage and $O(n)$ complexity for the mat-vec and compression kernels. These two operations are at the core of algebraic operations for hierarchical matrices: the mat-vec is a ubiquitous operation in numerical algorithms, while compression/recompression is a key building block for other algebraic operations, which require periodic recompression during execution. The difficulties in developing efficient GPU algorithms come primarily from the irregular tree data structures that underlie the hierarchical representations, and the key to performance is to recast the computations on flattened trees in ways that allow batched linear algebra operations to be performed. This requires marshaling the irregularly laid-out data so that it can be used by the batched routines; marshaling operations involve only pointer arithmetic with no data movement and as a result have minimal overhead. Our numerical results on covariance matrices from 2D and 3D problems from spatial statistics show the high efficiency our routines achieve: over 550 GB/s for the bandwidth-limited mat-vec and over 850 GFLOP/s of sustained performance for the compression on the P100 Pascal GPU.
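As a point of reference for the factored-block idea (far simpler than the batched, tree-structured $\mathcal{H}^2$ GPU kernels above), here is a single-level CPU sketch in which the off-diagonal blocks are stored as low-rank factors $UV^T$, so each block product costs $O(km)$ instead of $O(m^2)$; all names and sizes are illustrative.

```python
# Single-level, CPU-only sketch of the idea behind hierarchical mat-vecs:
# off-diagonal blocks are kept in factored form U V^T and applied without
# ever forming the dense block.
import numpy as np

rng = np.random.default_rng(0)
m, k = 512, 8                            # block size and off-diagonal rank

# Diagonal blocks stay dense; off-diagonal blocks are factored as U @ V.T.
A11, A22 = rng.standard_normal((m, m)), rng.standard_normal((m, m))
U12, V12 = rng.standard_normal((m, k)), rng.standard_normal((m, k))
U21, V21 = rng.standard_normal((m, k)), rng.standard_normal((m, k))

def lowrank_block_matvec(x):
    """y = A x for the 2x2 block matrix [[A11, U12 V12^T], [U21 V21^T, A22]]."""
    x1, x2 = x[:m], x[m:]
    y1 = A11 @ x1 + U12 @ (V12.T @ x2)   # factored product, O(k m) per block
    y2 = U21 @ (V21.T @ x1) + A22 @ x2
    return np.concatenate([y1, y2])

x = rng.standard_normal(2 * m)
A = np.block([[A11, U12 @ V12.T], [U21 @ V21.T, A22]])   # dense reference
assert np.allclose(lowrank_block_matvec(x), A @ x)
```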
We introduce a data distribution scheme for $\mathcal{H}$-matrices and a distributed-memory algorithm for $\mathcal{H}$-matrix-vector multiplication. Our data distribution scheme avoids an expensive $\Omega(P^2)$ scheduling procedure used in previous work, where $P$ is the number of processes, while preserving good data balance. Based on this distribution, our algorithm spreads all computations evenly among the $P$ processes and adopts a novel tree-communication algorithm to reduce the latency cost. The overall complexity of our algorithm is $O\big(\frac{N \log N}{P} + \alpha \log P + \beta \log^2 P\big)$ for $\mathcal{H}$-matrices under the weak admissibility condition, where $N$ is the matrix size, $\alpha$ denotes the latency, and $\beta$ denotes the inverse bandwidth. Numerically, our algorithm is applied to both two- and three-dimensional problems of various sizes on various numbers of processes. On thousands of processes, good parallel efficiency is still observed.
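The $\alpha \log P$ latency term comes from communicating along a tree rather than having every process exchange messages pairwise. The toy, single-process simulation below only illustrates that pattern, and is not the paper's tree-communication algorithm: a pairwise (binary-tree) reduction of $P$ partial results finishes in $\lceil \log_2 P \rceil$ rounds.

```python
# Toy simulation of the O(log P) tree-reduction pattern behind the
# alpha*log(P) latency term; a real distributed run would use MPI.
import math

def tree_reduce(values):
    """Pairwise (binary-tree) reduction of P partial sums in ceil(log2 P) rounds."""
    vals = list(values)
    rounds, step, P = 0, 1, len(values)
    while step < P:
        for src in range(step, P, 2 * step):   # rank src sends to rank src-step
            vals[src - step] += vals[src]
        step *= 2
        rounds += 1
    return vals[0], rounds

total, rounds = tree_reduce(range(16))          # P = 16 partial results
assert total == sum(range(16)) and rounds == math.ceil(math.log2(16))
print(f"reduced in {rounds} rounds")            # 4 rounds, not P-1 = 15
```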
Hyperspectral images (HSI) have some advantages over natural images for various applications due to the extra spectral information. During acquisition, an HSI is often contaminated by severe noise, including Gaussian noise, impulse noise, deadlines, and stripes, and this quality degradation badly affects downstream applications. In this paper, we present an HSI restoration method named smooth and robust low-rank tensor recovery. Specifically, we propose a structural tensor decomposition in accordance with the linear spectral mixture model of HSI: it decomposes a tensor into sums of outer matrix-vector products, where the vectors are orthogonal due to the independence of endmember spectra. Based on this decomposition, the global low-rank tensor structure can be well exploited for HSI denoising. In addition, 3D anisotropic total variation is used for spatial-spectral piecewise smoothness of the HSI. Meanwhile, the sparse noise, including impulse noise, deadlines, and stripes, is detected by $\ell_1$-norm regularization, and the Frobenius norm handles the heavy Gaussian noise present in some real-world scenarios. The alternating direction method of multipliers is adopted to solve the proposed optimization model, which simultaneously exploits the global low-rank property and the spatial-spectral smoothness of the HSI. Numerical experiments on both simulated and real data illustrate the superiority of the proposed method in comparison with existing ones.
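A hedged sketch of the structural decomposition the abstract describes: a hyperspectral cube $X$ ($h \times w \times \text{bands}$) written as a sum of outer matrix-vector products, $X = \sum_r A_r \circ s_r$, where each abundance map $A_r$ pairs with an endmember spectrum $s_r$ and the spectra are orthonormal here by construction. Shapes and names are illustrative, not the paper's code.

```python
# Build a toy HSI cube as a sum of (abundance map) x (spectrum) outer
# products with orthonormal spectra, then recover one term by projection.
import numpy as np

rng = np.random.default_rng(1)
h, w, bands, R = 32, 32, 64, 4           # spatial size, spectral bands, rank

A = rng.random((R, h, w))                # abundance maps (one matrix per term)
S, _ = np.linalg.qr(rng.standard_normal((bands, R)))  # orthonormal spectra

# X[i, j, b] = sum_r A[r, i, j] * S[b, r]
X = np.einsum('rij,br->ijb', A, S)

# Orthogonality of the spectra lets each term be recovered by projection.
A0_rec = np.einsum('ijb,b->ij', X, S[:, 0])
assert np.allclose(A0_rec, A[0])
```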
124 - Antoine Ledent, Rodrigo Alves, 2020
We propose orthogonal inductive matrix completion (OMIC), an interpretable approach to matrix completion based on a sum of multiple orthonormal side-information terms, together with nuclear-norm regularization. The approach allows us to inject prior knowledge about the singular vectors of the ground-truth matrix. We optimize the approach with a provably converging algorithm that optimizes all components of the model simultaneously. We study the generalization capabilities of our method in both the distribution-free setting and the case where the sampling distribution admits uniform marginals, yielding learning guarantees that improve with the quality of the injected knowledge in both cases. As particular cases of our framework, we present models which can incorporate user and item biases or community information in a joint and additive fashion. We analyse the performance of OMIC on several synthetic and real datasets. On synthetic datasets with a sliding scale of user-bias relevance, we show that OMIC adapts to different regimes better than other methods; on real-life datasets containing user/item recommendations and relevant side information, we find that OMIC surpasses the state of the art, with the added benefit of greater interpretability.
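The sketch below illustrates the model class the abstract describes rather than the authors' algorithm: a single orthonormal side-information term $M \approx X B Y^T$, fitted on observed entries by proximal gradient with singular-value soft-thresholding for the nuclear norm. All names, sizes, and the optimizer are our assumptions.

```python
# Minimal inductive-matrix-completion sketch with orthonormal side
# information and nuclear-norm regularization on the small core B.
import numpy as np

rng = np.random.default_rng(2)
n_users, n_items, d = 60, 40, 5

X, _ = np.linalg.qr(rng.standard_normal((n_users, d)))   # user side info
Y, _ = np.linalg.qr(rng.standard_normal((n_items, d)))   # item side info
B_true = rng.standard_normal((d, d))
M = X @ B_true @ Y.T
mask = rng.random(M.shape) < 0.3                         # observed entries

def svt(B, tau):
    """Prox of tau*||.||_*: soft-threshold the singular values."""
    U, s, Vt = np.linalg.svd(B, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

B, lr, lam = np.zeros((d, d)), 1.0, 1e-3
for _ in range(500):                                     # proximal gradient
    R = mask * (X @ B @ Y.T - M)                         # residual on observed
    B = svt(B - lr * (X.T @ R @ Y), lr * lam)

print("observed-entry RMSE:",
      np.sqrt(np.mean((mask * (X @ B @ Y.T - M))**2)))
```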
98 - M.M. Tung, L. Soler, E. Defez, 2007
We discuss the direct use of cubic-matrix splines to obtain continuous approximations to the unique solution of matrix differential equations of the type $Y'(x) = f(x, Y(x))$. For numerical illustration, an estimation of the approximation error, an algorithm for its implementation, and an example are given.
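As a hedged illustration of what a continuous cubic-spline approximation to a matrix ODE looks like (not the authors' direct construction, which derives the spline from $f(x, Y)$ itself), the sketch below samples the exact solution $Y(x) = e^{Ax}$ of $Y'(x) = AY(x)$ on a coarse grid and fits componentwise cubic splines; the grid and the matrix $A$ are placeholders.

```python
# Fit componentwise cubic splines through samples of a matrix ODE solution
# and check the continuous approximation at an off-grid point.
import numpy as np
from scipy.linalg import expm
from scipy.interpolate import CubicSpline

A = np.array([[0.0, 1.0], [-1.0, 0.0]])          # Y(x) = expm(A x) rotates
xs = np.linspace(0.0, 2.0, 9)                    # coarse knot grid
Ys = np.array([expm(A * x) for x in xs])         # (9, 2, 2) solution samples

spline = CubicSpline(xs, Ys, axis=0)             # componentwise cubic spline

x_test = 1.234                                   # off-grid evaluation point
err = np.max(np.abs(spline(x_test) - expm(A * x_test)))
print(f"max componentwise error at x={x_test}: {err:.2e}")
```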