
Approximated structured pseudospectra

Published by: Silvia Noschese
Publication date: 2016
Paper language: English





Pseudospectra and structured pseudospectra are important tools for the analysis of matrices. Their computation, however, can be very demanding for all but small matrices. A new approach is proposed for computing approximations of pseudospectra and structured pseudospectra, based on determining the spectra of many suitably chosen rank-one or projected rank-one perturbations of the given matrix. The choice of rank-one or projected rank-one perturbations is inspired by Wilkinson's analysis of eigenvalue sensitivity. Numerical examples illustrate that the proposed approach gives much better insight into the pseudospectra and structured pseudospectra than random or structured random rank-one perturbations, at a lower computational cost. The latter approach is presently in common use for the determination of structured pseudospectra.
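To make the idea concrete, the sketch below approximates the unstructured $\epsilon$-pseudospectrum of a matrix by collecting the eigenvalues of rank-one perturbations $A + E$ with $\|E\|_2 = \epsilon$. The Wilkinson-style perturbations built from left and right eigenvectors follow the sensitivity analysis mentioned in the abstract; everything else (function names, the random-perturbation baseline, parameter choices) is an illustrative assumption, not the authors' exact algorithm.

```python
import numpy as np
from scipy.linalg import eig, eigvals

def rank_one_pseudospectrum_cloud(A, eps, n_random=100, seed=0):
    """Collect eigenvalues of rank-one perturbations A + E with
    ||E||_2 = eps, as a point-cloud approximation of the
    eps-pseudospectrum (a sketch of the general idea, not the
    paper's exact algorithm)."""
    rng = np.random.default_rng(seed)
    n = A.shape[0]
    w, vl, vr = eig(A, left=True, right=True)
    points = []
    # Wilkinson-style perturbations: E = eps * y_j x_j^H, where x_j
    # and y_j are the unit right/left eigenvectors of lambda_j.
    # To first order, this E moves lambda_j as far as any
    # perturbation of spectral norm eps can.
    for j in range(n):
        x = vr[:, j] / np.linalg.norm(vr[:, j])
        y = vl[:, j] / np.linalg.norm(vl[:, j])
        points.append(eigvals(A + eps * np.outer(y, x.conj())))
    # Random rank-one perturbations of the same norm, for comparison.
    for _ in range(n_random):
        u = rng.standard_normal(n) + 1j * rng.standard_normal(n)
        v = rng.standard_normal(n) + 1j * rng.standard_normal(n)
        E = np.outer(u, v.conj())
        points.append(eigvals(A + (eps / np.linalg.norm(E, 2)) * E))
    return np.concatenate(points)
```

Plotting the returned point cloud for a few values of $\epsilon$ outlines how far the eigenvalues can travel under norm-bounded perturbations; for structured pseudospectra, $E$ would additionally be projected onto the admissible perturbation structure.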




Read also

This manuscript contains the proofs for "A Primal-Dual Message-Passing Algorithm for Approximated Large Scale Structured Prediction."
We study two techniques for correcting the geometrical error associated with domain approximation by a polygon. The first was introduced some time ago [bramble1972projection] and leads to a nonsymmetric formulation for Poisson's equation. We introduce a new technique that yields a symmetric formulation and has similar performance. We compare both methods on a simple test problem.
We present a class of fast subspace tracking algorithms based on orthogonal iterations for structured matrices/pencils that can be represented as small rank perturbations of unitary matrices. The algorithms rely upon an updated data-sparse factorization -- named LFR factorization -- using orthogonal Hessenberg matrices. These new subspace trackers reach a complexity of only $O(nk^2)$ operations per time update, where $n$ and $k$ are the size of the matrix and of the small rank perturbation, respectively.
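The LFR factorization itself is specific to this paper, but the orthogonal-iteration backbone it accelerates is standard. A minimal dense sketch of one tracking step, costing $O(n^2k)$ rather than the paper's $O(nk^2)$ since it exploits no structure, might look as follows; the matrix construction in the usage lines is an assumed stand-in for a unitary-plus-low-rank input.

```python
import numpy as np

def orthogonal_iteration_step(A, Q):
    """One orthogonal-iteration step for tracking the dominant
    k-dimensional invariant subspace of A: multiply, then
    re-orthonormalize.  Dense cost O(n^2 k); the paper's LFR
    factorization reduces the comparable update to O(n k^2)
    for unitary-plus-low-rank matrices."""
    Z = A @ Q                      # power step
    Q_next, _ = np.linalg.qr(Z)    # re-orthonormalize the basis
    return Q_next

# Usage sketch on a unitary-plus-rank-k matrix (assumed test input).
rng = np.random.default_rng(0)
n, k = 50, 3
U_mat, _ = np.linalg.qr(rng.standard_normal((n, n)))   # unitary part
A = U_mat + rng.standard_normal((n, k)) @ rng.standard_normal((k, n))
Q = np.linalg.qr(rng.standard_normal((n, k)))[0]       # initial basis
for _ in range(100):
    Q = orthogonal_iteration_step(A, Q)
```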
The goal of the alignment problem is to align a (given) point cloud $P = \{p_1,\dots,p_n\}$ to another (observed) point cloud $Q = \{q_1,\dots,q_n\}$. That is, to compute a rotation matrix $R \in \mathbb{R}^{3 \times 3}$ and a translation vector $t \in \mathbb{R}^{3}$ that minimize the sum of paired distances $\sum_{i=1}^n D(Rp_i - t, q_i)$ for some distance function $D$. A harder version is the registration problem, where the correspondence is unknown and the minimum is also over all possible correspondence functions from $P$ to $Q$. Heuristics such as the Iterative Closest Point (ICP) algorithm and its variants were suggested for these problems, but none yield a provable non-trivial approximation for the global optimum. We prove that there always exists a witness set of $3$ pairs in $P \times Q$ that, via a novel alignment algorithm, defines a constant-factor approximation (in the worst case) to this global optimum. We then provide algorithms that recover this witness set and yield the first provable constant-factor approximation for the: (i) alignment problem in $O(n)$ expected time, and (ii) registration problem in polynomial time. Such small witness sets exist for many variants, including points in $d$-dimensional space, outlier-resistant cost functions, and different correspondence types. Extensive experimental results on real and synthetic datasets show that our approximation constants are, in practice, close to $1$, and up to $10\times$ smaller than those of state-of-the-art algorithms.
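For reference, when the correspondences are known and $D$ is the squared Euclidean distance, the alignment problem has a classical closed-form solution (the Kabsch/orthogonal-Procrustes method). The sketch below implements that baseline, not the paper's witness-set algorithm; it is included only to fix the roles of $R$, $t$, and the cost $\sum_i D(Rp_i - t, q_i)$.

```python
import numpy as np

def kabsch_alignment(P, Q):
    """Rotation R and translation t minimizing sum ||R p_i - t - q_i||^2.

    Classical Kabsch/orthogonal-Procrustes solution for the alignment
    problem with known correspondences and squared Euclidean cost;
    P and Q are (n, 3) arrays of paired points."""
    p_bar, q_bar = P.mean(axis=0), Q.mean(axis=0)
    H = (P - p_bar).T @ (Q - q_bar)          # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))   # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = R @ p_bar - q_bar                    # so that R p_i - t matches q_i
    return R, t
```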
The task of predicting missing entries of a matrix, from a subset of known entries, is known as matrix completion. In today's data-driven world, data completion is essential, whether it is the main goal or a pre-processing step. Structured matrix completion includes any setting in which data is not missing uniformly at random. In recent work, a modification to the standard nuclear norm minimization (NNM) for matrix completion has been developed to take into account sparsity-based structure in the missing entries. This notion of structure is motivated in many settings, including recommender systems, where the probability that an entry is observed depends on the value of the entry. We propose adjusting an Iteratively Reweighted Least Squares (IRLS) algorithm for low-rank matrix completion to take into account sparsity-based structure in the missing entries. We also present an iterative gradient-projection-based implementation of the algorithm that can handle large-scale matrices. Finally, we present a robust array of numerical experiments on matrices of varying sizes, ranks, and levels of structure. We show that our proposed method is comparable with the adjusted NNM on small-sized matrices, and often outperforms the IRLS algorithm in structured settings on matrices up to size $1000 \times 1000$.
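As context for the algorithm being adjusted, a bare-bones gradient-projection IRLS iteration for unstructured low-rank matrix completion, in the spirit of sIRLS, can be sketched as below. The weight update, fixed smoothing parameter, and step-size rule are illustrative assumptions; the paper's sparsity-based reweighting of the missing entries is not reproduced here.

```python
import numpy as np

def sirls_completion(M_obs, mask, gamma=1e-2, n_iter=500):
    """Gradient-projection IRLS for low-rank matrix completion
    (a plain, unstructured sketch; the paper's sparsity-based
    reweighting of missing entries is NOT included).

    M_obs: array holding the observed values (anything elsewhere)
    mask : boolean array, True where an entry is observed
    """
    X = np.where(mask, M_obs, 0.0).astype(float)
    for _ in range(n_iter):
        # W = (X X^T + gamma I)^{-1/2} is the IRLS weight; the
        # gradient of the smoothed nuclear-norm surrogate
        # trace((X X^T + gamma I)^{1/2}) is W X.  Practical sIRLS
        # decreases gamma over the iterations; fixed here for brevity.
        lam, U = np.linalg.eigh(X @ X.T + gamma * np.eye(X.shape[0]))
        W = (U / np.sqrt(lam)) @ U.T
        step = 0.9 * np.sqrt(lam.min())  # keeps I - step*W a contraction
        X = X - step * (W @ X)           # gradient step toward low rank
        X[mask] = M_obs[mask]            # project onto the observed data
    return X
```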