Many applications in computational science require computing the elements of a function of a large matrix. A commonly used approach is based on the evaluation of the eigenvalue decomposition, a task that, in general, involves a computing time that scales with the cube of the size of the matrix. We present here a method that can be used to evaluate the elements of a function of a positive-definite matrix with a scaling that is linear for sparse matrices and quadratic in the general case. This methodology is based on the properties of the dynamics of a multidimensional harmonic potential coupled to colored-noise generalized Langevin equation (GLE) thermostats. This $f$-thermostat (FTH) approach allows us to calculate directly elements of functions of a positive-definite matrix by carefully tailoring the properties of the stochastic dynamics. We demonstrate the scaling and the accuracy of this approach for both dense and sparse problems and compare the results with other established methodologies.
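As a point of reference for the eigendecomposition route mentioned above, here is a minimal NumPy sketch of the cubic-scaling baseline for extracting a single element of $f(A)$; the test matrix, the choice of $f$ (a matrix square root), and the function name matrix_function_element are illustrative assumptions, not details taken from the paper.

    import numpy as np

    def matrix_function_element(A, f, i, j):
        """Element (i, j) of f(A) via eigendecomposition: the O(N^3) baseline.
        A is assumed symmetric positive definite; f acts on the eigenvalues."""
        eigvals, eigvecs = np.linalg.eigh(A)   # A = V diag(lambda) V^T
        return eigvecs[i, :] @ (f(eigvals) * eigvecs[j, :])

    # Example: the (0, 0) element of the square root of a random SPD matrix
    rng = np.random.default_rng(0)
    B = rng.standard_normal((5, 5))
    A = B @ B.T + 5 * np.eye(5)                # positive definite by construction
    print(matrix_function_element(A, np.sqrt, 0, 0))

The FTH approach summarized above avoids this dense diagonalization, which is what yields the linear scaling reported for sparse matrices.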
We show that a recently introduced stochastic thermostat [J. Chem. Phys. 126 (2007) 014101] can be considered as a global version of the Langevin thermostat. We compare the global scheme and the local one (Langevin) from a formal point of view and th
The Riemannian metric on the manifold of positive definite matrices is defined by a kernel function $\phi$ in the form $K_D^\phi(H,K)=\sum_{i,j}\phi(\lambda_i,\lambda_j)^{-1}\,\mathrm{Tr}\,P_iHP_jK$ when $\sum_i\lambda_iP_i$ is the spectral decomposition of the foot point
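To make the formula concrete, the following is a small NumPy sketch that evaluates $K_D^\phi(H,K)$ from the spectral decomposition of $D$; the kernel $\phi(x,y)=(x+y)/2$ and the random symmetric test matrices are illustrative assumptions, not choices made in the abstract.

    import numpy as np

    def kernel_metric(D, H, K, phi):
        """K_D^phi(H, K) = sum_{i,j} phi(l_i, l_j)^{-1} Tr(P_i H P_j K),
        where D = sum_i l_i P_i is its spectral decomposition (real symmetric case)."""
        lam, V = np.linalg.eigh(D)
        # Rank-one projectors onto the eigenvectors; splitting degenerate
        # eigenspaces this way leaves the double sum unchanged.
        P = [np.outer(V[:, i], V[:, i]) for i in range(len(lam))]
        return sum(np.trace(P[i] @ H @ P[j] @ K) / phi(li, lj)
                   for i, li in enumerate(lam) for j, lj in enumerate(lam))

    phi = lambda x, y: 0.5 * (x + y)           # hypothetical kernel choice
    rng = np.random.default_rng(1)
    B = rng.standard_normal((4, 4))
    D = B @ B.T + 4 * np.eye(4)                # positive definite foot point
    H = rng.standard_normal((4, 4)); H = H + H.T   # symmetric tangent directions
    K = rng.standard_normal((4, 4)); K = K + K.T
    print(kernel_metric(D, H, K, phi))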
With a view to applications in stochastic analysis and geometry, we introduce a new correspondence for positive definite (p.d.) kernels $K$ and their associated reproducing kernel Hilbert spaces. With this we establish two kinds of factorizations: (i)
Algorithms for simulating complex physical systems or solving difficult optimization problems often resort to an annealing process. Rather than simulating the system at the temperature of interest, an annealing algorithm starts at a temperature that
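To illustrate the generic structure of such an annealing process, here is a short Python sketch; the geometric cooling schedule, the Metropolis acceptance rule, and the toy double-well energy are standard textbook choices assumed for illustration, not details from this particular work.

    import math
    import random

    def anneal(energy, neighbor, x0, t_start=10.0, t_end=1e-3, cooling=0.95):
        """Start hot, accept uphill moves with probability exp(-dE/T), cool geometrically."""
        x, e = x0, energy(x0)
        t = t_start
        while t > t_end:
            cand = neighbor(x)
            de = energy(cand) - e
            if de < 0 or random.random() < math.exp(-de / t):
                x, e = cand, e + de
            t *= cooling
        return x, e

    # Toy usage: minimize a one-dimensional double-well energy
    energy = lambda x: (x * x - 1.0) ** 2 + 0.3 * x
    neighbor = lambda x: x + random.uniform(-0.5, 0.5)
    print(anneal(energy, neighbor, x0=3.0))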
Representations in the form of Symmetric Positive Definite (SPD) matrices have been popularized in a variety of visual learning applications due to their demonstrated ability to capture rich second-order statistics of visual data. There exist several