
A Quasi-Random Approach to Matrix Spectral Analysis

Posted by: Lior Eldar
Published: 2015
Research field: Informatics Engineering
Paper language: English





Inspired by the quantum computing algorithms for linear-algebra problems [HHL, Ta-Shma], we study how a classical simulation of this type of phase-estimation algorithm performs when applied to the eigenproblem of Hermitian matrices. The result is a completely new, efficient, and stable parallel algorithm to compute an approximate spectral decomposition of any Hermitian matrix. The algorithm can be implemented by Boolean circuits in $O(\log^2 n)$ parallel time with a total cost of $O(n^{\omega+1})$ Boolean operations. This Boolean complexity matches the best known rigorous $O(\log^2 n)$ parallel-time algorithms, but unlike those algorithms ours is (logarithmically) stable, so further improvements may lead to practical implementations. All previous efficient and rigorous approaches to the eigenproblem use randomization to avoid bad conditioning, as we do too. Our algorithm makes further use of randomization in a completely new way, taking random powers of a unitary matrix to randomize the phases of its eigenvalues. Proving that a tiny Gaussian perturbation and a random polynomial power suffice to ensure almost pairwise independence of the phases $\pmod{2\pi}$ is the main technical contribution of this work. This randomization enables us, given a Hermitian matrix with well-separated eigenvalues, to sample a random eigenvalue and produce an approximate eigenvector in $O(\log^2 n)$ parallel time and $O(n^\omega)$ Boolean complexity. We conjecture that further improvements of our method can provide a stable solution to the full approximate spectral decomposition problem with complexity similar (up to a logarithmic factor) to that of sampling a single eigenvector.
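The pipeline the abstract describes (perturb, exponentiate, estimate phases) can be imitated classically at toy scale. The sketch below is a minimal NumPy/SciPy illustration, not the paper's $O(\log^2 n)$-depth circuit construction: the function name sample_eigenpair, the parameters t and sigma, the rescaling constant s, and the FFT-based peak picking are all illustrative assumptions, and the paper's random-polynomial-power step is noted but omitted.

import numpy as np
from scipy.linalg import expm

def sample_eigenpair(H, t=1024, sigma=1e-8, seed=0):
    """Sample one eigenvalue and an approximate eigenvector of Hermitian H
    via classically simulated phase estimation (toy O(n^2 t) version)."""
    rng = np.random.default_rng(seed)
    n = H.shape[0]
    # Tiny Gaussian Hermitian perturbation (cf. the abstract): with high
    # probability this separates eigenvalues and decorrelates the phases.
    G = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
    Ht = H + sigma * (G + G.conj().T) / 2
    # Encode eigenvalues as eigenphases of U = exp(2*pi*i*Ht/s); the scale s
    # keeps every phase inside one period. (The paper additionally takes a
    # random polynomial power of U to randomize the phases; omitted here.)
    s = 4 * np.linalg.norm(Ht, 2)
    U = expm(2j * np.pi * Ht / s)
    # Autocorrelations <v, U^j v> of a random vector are a sum of pure tones,
    # one per eigenphase, weighted by |<v, psi>|^2; an FFT peak samples one.
    v = rng.standard_normal(n) + 1j * rng.standard_normal(n)
    v /= np.linalg.norm(v)
    traj = np.empty((t, n), dtype=complex)
    x = v
    for j in range(t):
        traj[j] = x
        x = U @ x
    corr = traj @ v.conj()
    k = np.argmax(np.abs(np.fft.fft(corr)))
    theta = 2 * np.pi * k / t            # estimated eigenphase, resolution 2*pi/t
    lam = s * theta / (2 * np.pi)
    if lam > s / 2:                      # unwrap phases of negative eigenvalues
        lam -= s
    # Fourier-filter the trajectory at theta: this approximately projects v
    # onto the eigenspace whose phase matches the sampled peak.
    vec = np.exp(-1j * theta * np.arange(t)) @ traj
    return lam, vec / np.linalg.norm(vec)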




Read also

Dehua Cheng, Yu Cheng, Yan Liu (2015)
We consider a fundamental algorithmic question in spectral graph theory: compute a spectral sparsifier of the random-walk matrix polynomial $L_\alpha(G)=D-\sum_{r=1}^d \alpha_r D(D^{-1}A)^r$, where $A$ is the adjacency matrix of a weighted, undirected graph, $D$ is the diagonal matrix of weighted degrees, and $\alpha=(\alpha_1,\dots,\alpha_d)$ are nonnegative coefficients with $\sum_{r=1}^d \alpha_r=1$. Recall that $D^{-1}A$ is the transition matrix of random walks on the graph. The sparsification of $L_\alpha(G)$ appears to be algorithmically challenging, as the matrix power $(D^{-1}A)^r$ is defined by all paths of length $r$, whose precise calculation would be prohibitively expensive. In this paper, we develop the first nearly linear time algorithm for this sparsification problem: for any $G$ with $n$ vertices and $m$ edges, $d$ coefficients $\alpha$, and $\epsilon > 0$, our algorithm runs in time $O(d^2 m \log^2 n/\epsilon^{2})$ to construct a Laplacian matrix $\tilde{L}=D-\tilde{A}$ with $O(n\log n/\epsilon^{2})$ non-zeros such that $\tilde{L}\approx_{\epsilon}L_\alpha(G)$. Matrix polynomials arise in the mathematical analysis of matrix functions as well as in numerical solutions of matrix equations. Our work is particularly motivated by algorithmic problems for speeding up the classic Newton's method in applications such as computing the inverse square root of the precision matrix of a Gaussian random field, as well as computing the $q$th-root transition (for $q\geq 1$) in a time-reversible Markov model. The key algorithmic step in both applications is the construction of a spectral sparsifier of the constant-degree random-walk matrix polynomial introduced by Newton's method. Our algorithm can also be used to build efficient data structures for effective resistances in multi-step time-reversible Markov models, and we anticipate that it could be useful for other tasks in network analysis.
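To make the sparsified object concrete, here is a dense reference construction of $L_\alpha(G)$, an illustrative sketch only: the paper's nearly linear time sparsifier never forms the matrix powers explicitly, and the helper name l_alpha is hypothetical.

import numpy as np

def l_alpha(A, alpha):
    """Dense O(d*n^3) construction of L_alpha(G) = D - sum_r alpha_r D (D^{-1}A)^r."""
    deg = A.sum(axis=1)
    D = np.diag(deg)
    W = A / deg[:, None]          # D^{-1} A: the random-walk transition matrix
    L = D.astype(float).copy()
    P = np.eye(A.shape[0])
    for a_r in alpha:             # alpha = (alpha_1, ..., alpha_d), summing to 1
        P = P @ W                 # now P = (D^{-1} A)^r
        L -= a_r * (D @ P)
    return L

# Example: a 4-cycle with alpha = (1/2, 1/2) mixes 1- and 2-step walks.
A = np.array([[0,1,0,1],[1,0,1,0],[0,1,0,1],[1,0,1,0]], dtype=float)
print(l_alpha(A, [0.5, 0.5]))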
Correlation and similarity measures are widely used across the sciences and social sciences. Often the variables are not numbers but qualitative descriptors called categorical data. We define and study the similarity matrix, as a measure of similarity, for the case of categorical data. This is of interest due to the deluge of categorical data, such as movie ratings, top-10 rankings, and data from social media, in the public domain that requires analysis. We show that the statistical properties of the spectra of similarity matrices constructed from categorical data follow those from random matrix theory. We demonstrate this approach by applying it to data from Indian general elections and sea-level pressures in the North Atlantic ocean.
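A quick way to test a spectrum against random matrix theory without unfolding is the spacing-ratio diagnostic. The sketch below builds one possible similarity matrix from synthetic categorical data; the agreement-fraction definition of similarity and all sizes are assumptions, since the abstract does not fix them.

import numpy as np

rng = np.random.default_rng(1)
# Hypothetical categorical data: n items answering m questions from k classes.
n, m, k = 200, 500, 5
data = rng.integers(0, k, size=(n, m))

# Assumed similarity: fraction of attributes on which two items agree.
S = (data[:, None, :] == data[None, :, :]).mean(axis=2)

# Spacing-ratio test: GOE-like spectra have mean ratio ~0.5359,
# uncorrelated (Poisson) eigenvalues give ~0.3863.
eigs = np.sort(np.linalg.eigvalsh(S))
s = np.diff(eigs)
r = np.minimum(s[:-1], s[1:]) / np.maximum(s[:-1], s[1:])
print("mean spacing ratio:", r.mean())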
Milan Krbalek, Petr Seba (2008)
Using methods originally developed for random matrix theory, we derive an exact mathematical formula for the number variance (introduced in [4]) describing the rigidity of particle ensembles with power-law repulsion. The resulting relation is consequently compared with the relevant statistics of single-vehicle data measured on the Dutch freeway A9. The detected value of the inverse temperature, which can be identified as a coefficient of the mental strain of car drivers, is then discussed in detail with respect to traffic density and flow.
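The number variance can be estimated empirically by counting points (e.g., vehicle positions) in randomly placed windows of a point set rescaled to unit density. The estimator below is a plain Monte Carlo sketch with assumed parameters, not the paper's closed-form expression.

import numpy as np

def number_variance(points, L, windows=2000, seed=0):
    """Estimate Sigma^2(L): variance of the count of points in a random
    interval of length L, after unfolding to unit mean density."""
    rng = np.random.default_rng(seed)
    pts = np.sort(points)
    pts = (pts - pts[0]) / np.mean(np.diff(pts))   # unfold to unit density
    starts = rng.uniform(0, pts[-1] - L, size=windows)
    counts = np.searchsorted(pts, starts + L) - np.searchsorted(pts, starts)
    return counts.var()

# Poisson points give Sigma^2(L) ~ L; rigid (repulsive) ensembles grow slower.
poisson = np.cumsum(np.random.default_rng(2).exponential(size=10000))
print(number_variance(poisson, L=5.0))   # ~5 for Poisson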
This paper presents a novel approach to characterizing the dynamics of the limit spectrum of large random matrices, based on a notion we call spectral dominance. In particular, we show that the limit spectral measure can be determined as the derivative of the unique viscosity solution of a partial integro-differential equation. This also allows general and short proofs for the convergence problem. We treat the cases of Dyson Brownian motions and Wishart processes, and present a general class of models for which this characterization holds.
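For intuition about the spectral dynamics being characterized, one can simulate Dyson Brownian motion directly. The Euler scheme below uses the standard beta = 2 SDE; the discretization parameters are illustrative, and the scheme (not taken from the paper) can misbehave if eigenvalues nearly collide.

import numpy as np

def dyson_bm(n=50, steps=2000, dt=1e-4, seed=3):
    """Euler scheme for Dyson Brownian motion (beta = 2):
    d lambda_i = sqrt(1/n) dB_i + (1/n) sum_{j != i} dt / (lambda_i - lambda_j)."""
    rng = np.random.default_rng(seed)
    lam = np.sort(rng.standard_normal(n))          # initial spectrum
    for _ in range(steps):
        diff = lam[:, None] - lam[None, :]
        np.fill_diagonal(diff, np.inf)             # drop the i = j term
        drift = (1.0 / diff).sum(axis=1) / n
        lam += drift * dt + np.sqrt(dt / n) * rng.standard_normal(n)
        lam.sort()                                 # keep the spectrum ordered
    return lam

print(dyson_bm())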
We analyze complex networks within the random matrix theory framework. In particular, we show that the $\Delta_3$ statistic, which captures long-range correlations among eigenvalues, provides a qualitative measure of randomness in networks. As networks deviate from a regular structure, $\Delta_3$ follows the random-matrix prediction of linear behavior, on a semi-logarithmic scale with slope $1/\pi^2$, at longer scales.
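The $\Delta_3$ (spectral rigidity) statistic measures the least-squares deviation of the eigenvalue staircase from a straight line over windows of length $L$. Below is a grid-based numerical estimator under a simple global-unfolding assumption; it is a generic RMT diagnostic, not the paper's procedure.

import numpy as np

def delta3(eigs, L, grid=200, windows=200, seed=4):
    """Numerical Delta_3 estimate: average least-squares deviation of the
    counting staircase N(x) from a fitted line over windows of length L."""
    rng = np.random.default_rng(seed)
    e = np.sort(eigs)
    e = (e - e[0]) / np.mean(np.diff(e))          # unfold to unit density
    out = []
    for s in rng.uniform(0, e[-1] - L, size=windows):
        x = np.linspace(s, s + L, grid)
        N = np.searchsorted(e, x)                 # staircase counting function
        a, b = np.polyfit(x, N, 1)
        out.append(np.mean((N - (a * x + b)) ** 2))
    return np.mean(out)

# GOE prediction: Delta_3(L) grows like ln(L)/pi^2; Poisson gives L/15.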