
A Robust Hierarchical Solver for Ill-conditioned Systems with Applications to Ice Sheet Modeling

Posted by Chao Chen
Publication date: 2018
Research language: English





A hierarchical solver is proposed for solving sparse ill-conditioned linear systems in parallel. The solver is based on a modification of the LoRaSp method, but employs a deferred-compression technique, which provably reduces the approximation error and significantly improves efficiency. Moreover, the deferred-compression technique introduces minimal overhead and does not affect parallelism. As a result, the new solver achieves linear computational complexity under mild assumptions and excellent parallel scalability. To demonstrate the performance of the new solver, we focus on applying it to sparse linear systems arising from ice sheet modeling. The strong anisotropic phenomena associated with the thin structure of ice sheets create serious challenges for existing solvers. To address the anisotropy, we additionally develop a customized partitioning scheme for the solver, which captures the strong-coupling direction accurately. In general, the partitioning can be computed algebraically with existing software packages, and thus the new solver generalizes to other sparse linear systems. Our results show that ice sheet problems with about 300 million degrees of freedom can be solved in just a few minutes using a thousand processors.
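As a hedged illustration (not the paper's deferred-compression algorithm), the sketch below shows the generic building block that hierarchical solvers in the LoRaSp family rely on: a dense fill-in block is replaced by a truncated SVD whose rank is chosen from a relative tolerance, which is what controls the approximation error discussed above. The function name, tolerance, and toy block are illustrative assumptions.

```python
# Minimal sketch of low-rank compression of a fill-in block via truncated SVD.
# Generic illustration only; not the solver described in the abstract above.
import numpy as np

def compress_block(B, tol=1e-6):
    """Return U, V with B ~= U @ V.T and ||B - U V^T||_2 <= tol * ||B||_2."""
    U, s, Vt = np.linalg.svd(B, full_matrices=False)
    rank = int(np.sum(s > tol * s[0])) if s.size else 0  # keep singular values above the tolerance
    return U[:, :rank] * s[:rank], Vt[:rank, :].T

# Toy usage: a smooth interaction block, which is numerically low-rank.
x = np.linspace(0.0, 1.0, 200)
B = 1.0 / (2.0 + np.abs(x[:, None] - x[None, :]))
U, V = compress_block(B, tol=1e-8)
err = np.linalg.norm(B - U @ V.T, 2) / np.linalg.norm(B, 2)
print(f"rank = {U.shape[1]}, relative 2-norm error = {err:.1e}")
```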




Read also

We present a parallel hierarchical solver for general sparse linear systems on distributed-memory machines. For large-scale problems, this fully algebraic algorithm is faster and more memory-efficient than sparse direct solvers because it exploits the low-rank structure of fill-in blocks. Depending on the accuracy of low-rank approximations, the hierarchical solver can be used either as a direct solver or as a preconditioner. The parallel algorithm is based on data decomposition and requires only local communication for updating boundary data on every processor. Moreover, the computation-to-communication ratio of the parallel algorithm is approximately the volume-to-surface-area ratio of the subdomain owned by every processor. We present various numerical results to demonstrate the versatility and scalability of the parallel algorithm.
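The computation-to-communication claim can be made concrete with a back-of-the-envelope calculation. The sketch below assumes cubic subdomains of n^3 unknowns per processor (an assumption for illustration, not a detail from the abstract): the owned volume grows like n^3 while the exchanged boundary data grows like 6n^2, so the ratio grows linearly with the subdomain size.

```python
# Volume-to-surface-area heuristic for a cubic subdomain (illustrative only).
def comp_to_comm_ratio(n):
    volume = n ** 3          # unknowns owned by a processor (computation)
    surface = 6 * n ** 2     # boundary values exchanged with neighbors (communication)
    return volume / surface  # = n / 6, i.e. grows with subdomain size

for n in (16, 32, 64, 128):
    print(n, comp_to_comm_ratio(n))
```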
Classical iterative algorithms for linear system solving and regression are brittle to the condition number of the data matrix. Even a semi-random adversary, constrained to only give additional consistent information, can arbitrarily hinder the resulting computational guarantees of existing solvers. We show how to overcome this barrier by developing a framework which takes state-of-the-art solvers and robustifies them to achieve comparable guarantees against a semi-random adversary. Given a matrix which contains an (unknown) well-conditioned submatrix, our methods obtain computational and statistical guarantees as if the entire matrix was well-conditioned. We complement our theoretical results with preliminary experimental evidence, showing that our methods are effective in practice.
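The brittleness being described can be seen in a small experiment. The construction below is an assumed toy example, not the paper's adversary model or method: appending extra rows that are perfectly consistent with the true solution can still make the data matrix far more ill-conditioned, which is exactly what degrades condition-number-dependent iterative solvers.

```python
# Toy demonstration that consistent extra data can ruin conditioning (assumed setup).
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((200, 20))           # well-conditioned data matrix
x_true = rng.standard_normal(20)
b = A @ x_true
extra = 1e3 * np.tile(A[0], (500, 1))        # many rescaled copies of one row
A_aug = np.vstack([A, extra])
b_aug = np.concatenate([b, extra @ x_true])  # still exactly consistent with x_true
print(f"cond(A) = {np.linalg.cond(A):.1e}, cond(A_aug) = {np.linalg.cond(A_aug):.1e}")
```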
This paper is concerned with solving ill-posed tensor linear equations. These kinds of equations may appear from finite difference discretization of high-dimensional convection-diffusion problems or when partial differential equations in many dimensions are discretized by collocation spectral methods. Here, we propose the Tensor Golub-Kahan bidiagonalization (TGKB) algorithm in conjunction with the well-known Tikhonov regularization method to solve the mentioned problems. Theoretical results are presented to discuss the conditioning of the Stein tensor equation and to reveal how the TGKB process can be exploited for general tensor equations. In the last section, some classical test problems are examined to numerically illustrate the feasibility of the proposed algorithms; applications to color image restoration are also considered.
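The tensor algorithm itself is not reproduced here; as a hedged sketch of the underlying idea, the matrix analogue below runs a few Golub-Kahan bidiagonalization steps and applies Tikhonov regularization to the small projected problem. All names, the step count k, the regularization parameter lam, and the toy blurring operator are illustrative assumptions.

```python
# Matrix-level sketch of hybrid Golub-Kahan bidiagonalization + Tikhonov regularization.
import numpy as np

def gkb_tikhonov(A, b, k=10, lam=1e-2):
    """k Golub-Kahan steps, then solve min ||B y - beta e1||^2 + lam^2 ||y||^2."""
    m, n = A.shape
    U = np.zeros((m, k + 1)); V = np.zeros((n, k))
    alpha = np.zeros(k); beta = np.zeros(k + 1)
    beta[0] = np.linalg.norm(b); U[:, 0] = b / beta[0]
    v = A.T @ U[:, 0]
    alpha[0] = np.linalg.norm(v); V[:, 0] = v / alpha[0]
    for i in range(k):
        u = A @ V[:, i] - alpha[i] * U[:, i]
        beta[i + 1] = np.linalg.norm(u); U[:, i + 1] = u / beta[i + 1]
        if i + 1 < k:
            v = A.T @ U[:, i + 1] - beta[i + 1] * V[:, i]
            alpha[i + 1] = np.linalg.norm(v); V[:, i + 1] = v / alpha[i + 1]
    # lower-bidiagonal projected matrix B of size (k+1) x k
    B = np.zeros((k + 1, k))
    for i in range(k):
        B[i, i] = alpha[i]; B[i + 1, i] = beta[i + 1]
    rhs = np.zeros(k + 1); rhs[0] = beta[0]
    # Tikhonov via the stacked least-squares system [B; lam*I] y = [rhs; 0]
    M = np.vstack([B, lam * np.eye(k)])
    r = np.concatenate([rhs, np.zeros(k)])
    y, *_ = np.linalg.lstsq(M, r, rcond=None)
    return V @ y

# Toy ill-posed problem: a discretized Gaussian blurring operator with noisy data.
n = 200
t = np.linspace(0.0, 1.0, n)
A = np.exp(-((t[:, None] - t[None, :]) ** 2) / (2 * 0.01 ** 2))
x_true = np.sin(np.pi * t)
b = A @ x_true + 1e-3 * np.random.default_rng(0).standard_normal(n)
x_reg = gkb_tikhonov(A, b, k=20, lam=1e-2)
print(np.linalg.norm(x_reg - x_true) / np.linalg.norm(x_true))  # reconstruction error
```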
Automatic Speech Scoring (ASS) is the computer-assisted evaluation of a candidate's speaking proficiency in a language. ASS systems face many challenges like open grammar, variable pronunciations, and unstructured or semi-structured content. Recent deep learning approaches have shown some promise in this domain. However, most of these approaches focus on extracting features from a single audio, making them suffer from the lack of speaker-specific context required to model such a complex task. We propose a novel deep learning technique for non-native ASS, called speaker-conditioned hierarchical modeling. In our technique, we take advantage of the fact that oral proficiency tests rate multiple responses for a candidate. We extract context vectors from these responses and feed them as additional speaker-specific context to our network to score a particular response. We compare our technique with strong baselines and find that such modeling improves the model's average performance by 6.92% (maximum = 12.86%, minimum = 4.51%). We further show both quantitative and qualitative insights into the importance of this additional context in solving the problem of ASS.
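A minimal sketch of the general idea of speaker-conditioned features, assuming mean pooling over a candidate's other responses; the paper's actual network and context extraction are not reproduced here, and all names and dimensions are illustrative.

```python
# Sketch: condition the features for one response on the candidate's other responses.
import numpy as np

def speaker_conditioned_features(response_embs, i):
    """Feature vector for response i: its own embedding concatenated with a
    speaker-context vector pooled from the candidate's other responses."""
    others = np.delete(response_embs, i, axis=0)
    context = others.mean(axis=0)
    return np.concatenate([response_embs[i], context])

# Toy usage: 4 responses by one candidate, 8-dimensional embeddings.
embs = np.random.default_rng(0).standard_normal((4, 8))
feat = speaker_conditioned_features(embs, 2)   # shape (16,)
print(feat.shape)
```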
We present a method for performing sampling from a Boltzmann distribution of an ill-conditioned quadratic action. This method is based on heatbath thermalization along a set of conjugate directions, generated via a conjugate-gradient procedure. The resulting scheme outperforms local updates for matrices with very high condition number, since it avoids the slowing down of modes with lower eigenvalues, and has some advantages over the global heatbath approach: it is more stable and allows for more freedom in devising case-specific optimizations.
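A dense-matrix sketch of the idea, not the paper's scheme: directions produced by a conjugate-gradient recursion are A-conjugate, so an independent heatbath draw along each direction (with variance 1/(p^T A p)) yields a sample from the Gaussian with covariance A^{-1}. The matrix construction and sizes below are illustrative assumptions.

```python
# Sampling x ~ N(0, A^{-1}) via heatbath updates along CG-generated conjugate
# directions (illustrative dense sketch; assumes A is symmetric positive definite).
import numpy as np

def conjugate_direction_sample(A, rng):
    n = A.shape[0]
    x = np.zeros(n)
    r = rng.standard_normal(n)        # arbitrary starting residual
    p = r.copy()
    for _ in range(n):                # n exact CG steps span the whole space
        Ap = A @ p
        pAp = p @ Ap
        x += rng.standard_normal() * p / np.sqrt(pAp)   # heatbath draw along p
        alpha = (r @ r) / pAp
        r_new = r - alpha * Ap
        p = r_new + ((r_new @ r_new) / (r @ r)) * p
        r = r_new
    return x

# Toy check: the empirical covariance of many samples approaches A^{-1}.
rng = np.random.default_rng(0)
n = 40
Q, _ = np.linalg.qr(rng.standard_normal((n, n)))
A = Q @ np.diag(np.logspace(0, 4, n)) @ Q.T     # SPD with condition number 1e4
samples = np.array([conjugate_direction_sample(A, rng) for _ in range(5000)])
cov_err = np.linalg.norm(np.cov(samples.T) - np.linalg.inv(A)) / np.linalg.norm(np.linalg.inv(A))
print(f"relative covariance error = {cov_err:.2f}")
```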