
Decompositions and coalescing eigenvalues of symmetric definite pencils depending on parameters

Published by: Alessandro Pugliese
Publication date: 2021
Research field: Informatics Engineering
Paper language: English





In this work, we consider symmetric positive definite pencils depending on two parameters. That is, we are concerned with the generalized eigenvalue problem $A(x)-\lambda B(x)$, where $A$ and $B$ are symmetric matrix valued functions in ${\mathbb R}^{n\times n}$, smoothly depending on parameters $x\in \Omega\subset {\mathbb R}^2$; further, $B$ is also positive definite. In general, the eigenvalues of this multiparameter problem will not be smooth, the lack of smoothness resulting from eigenvalues being equal at some parameter values (conical intersections). We first give general theoretical results on the smoothness of eigenvalues and eigenvectors for the present generalized eigenvalue problem, and hence for the corresponding projections, and then perform a numerical study of the statistical properties of coalescing eigenvalues for pencils where $A$ and $B$ are either full or banded, for several bandwidths. Our numerical study will be performed with respect to a random matrix ensemble which respects the underlying engineering problems motivating our study.
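The following is a minimal sketch, not the authors' code, of how such a two-parameter symmetric definite pencil can be explored numerically: the pencil $A(x)-\lambda B(x)$ is sampled on a grid and its generalized eigenvalues are computed with a standard symmetric-definite solver. The specific parameter dependence, the choice of keeping $B$ constant, and all names below are illustrative assumptions.

```python
import numpy as np
from scipy.linalg import eigh

# Sketch only: sample the eigenvalues of a 2-parameter symmetric definite
# pencil A(x) - lambda*B(x) on a grid and look for near-coalescing eigenvalues.
n = 4
rng = np.random.default_rng(0)
A0, A1, A2 = (rng.standard_normal((n, n)) for _ in range(3))
A0, A1, A2 = ((M + M.T) / 2 for M in (A0, A1, A2))   # symmetric building blocks
C = rng.standard_normal((n, n))
B0 = C @ C.T + n * np.eye(n)                          # SPD; assumption: B held constant here

def pencil(x):
    """Hypothetical smooth parameter dependence A(x); B fixed for simplicity."""
    return A0 + x[0] * A1 + x[1] * A2, B0

xs = np.linspace(-1.0, 1.0, 50)
eigs = np.empty((len(xs), len(xs), n))
for i, x1 in enumerate(xs):
    for j, x2 in enumerate(xs):
        A, B = pencil((x1, x2))
        eigs[i, j] = eigh(A, B, eigvals_only=True)    # generalized symmetric-definite solve

# Smallest gap between consecutive eigenvalues at each grid point:
# near-zero values flag parameters close to a conical intersection.
gaps = np.diff(np.sort(eigs, axis=-1), axis=-1).min(axis=-1)
print("smallest eigenvalue gap on the grid:", gaps.min())
```

Small gaps between neighbouring eigenvalues on the grid indicate parameter values close to a coalescing point, which is the kind of behaviour the statistical study above is concerned with.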




Read also

Jiawang Nie, Ke Ye, Lihong Zhi (2020)
This paper discusses the problem of symmetric tensor decomposition on a given variety $X$: decomposing a symmetric tensor into the sum of tensor powers of vectors contained in $X$. In this paper, we first study geometric and algebraic properties of such decomposable tensors, which are crucial to the practical computations of such decompositions. For a given tensor, we also develop a criterion for the existence of a symmetric decomposition on $X$. Secondly and most importantly, we propose a method for computing symmetric tensor decompositions on an arbitrary $X$. As a specific application, Vandermonde decompositions for nonsymmetric tensors can be computed by the proposed algorithm.
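As a hedged illustration of the object being decomposed (not the proposed algorithm), the snippet below builds an order-3 symmetric tensor as a sum of weighted tensor powers of vectors; the vectors, weights, and dimensions are arbitrary placeholders.

```python
import numpy as np

def tensor_power(v):
    """Order-3 symmetric tensor power v (x) v (x) v."""
    return np.einsum('i,j,k->ijk', v, v, v)

rng = np.random.default_rng(1)
vs = rng.standard_normal((2, 5))            # two hypothetical vectors in R^5
coeffs = np.array([1.5, -0.5])
T = sum(c * tensor_power(v) for c, v in zip(coeffs, vs))

# T is symmetric under any permutation of its three indices.
print(np.allclose(T, T.transpose(1, 0, 2)), np.allclose(T, T.transpose(2, 1, 0)))
```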
We present a class of fast subspace tracking algorithms based on orthogonal iterations for structured matrices/pencils that can be represented as small rank perturbations of unitary matrices. The algorithms rely upon an updated data sparse factorization -- named LFR factorization -- using orthogonal Hessenberg matrices. These new subspace trackers reach a complexity of only $O(nk^2)$ operations per time update, where $n$ and $k$ are the size of the matrix and of the small rank perturbation, respectively.
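The structured LFR updates are not reproduced here, but the following sketch shows the plain orthogonal-iteration idea such trackers build on, applied to a small-rank perturbation of a unitary matrix; the sizes and the dense $O(n^2k)$ update below are illustrative stand-ins for the $O(nk^2)$ structured version.

```python
import numpy as np

def orthogonal_iteration_step(M, Q):
    """One orthogonal-iteration update of an orthonormal basis Q for the
    dominant subspace of M (dense cost, unlike the structured algorithm)."""
    Z = M @ Q
    Q_new, _ = np.linalg.qr(Z)
    return Q_new

rng = np.random.default_rng(2)
n, k = 20, 3
U = rng.standard_normal((n, k))
M = np.eye(n) + U @ U.T                      # rank-k perturbation of a unitary matrix (the identity)
Q = np.linalg.qr(rng.standard_normal((n, k)))[0]
for _ in range(50):
    Q = orthogonal_iteration_step(M, Q)

# Close to 1 when Q has converged to the dominant subspace span(U).
print("fraction of U captured:", np.linalg.norm(Q.T @ U) / np.linalg.norm(U))
```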
We present the Cholesky-factored symmetric positive definite neural network (SPD-NN) for modeling constitutive relations in dynamical equations. Instead of directly predicting the stress, the SPD-NN trains a neural network to predict the Cholesky factor of a tangent stiffness matrix, based on which the stress is calculated in the incremental form. As a result of the special structure, SPD-NN weakly imposes convexity on the strain energy function, satisfies time consistency for path-dependent materials, and therefore improves numerical stability, especially when the SPD-NN is used in finite element simulations. Depending on the types of available data, we propose two training methods, namely direct training for strain and stress pairs and indirect training for loads and displacement pairs. We demonstrate the effectiveness of SPD-NN on hyperelastic, elasto-plastic, and multiscale fiber-reinforced plate problems from solid mechanics. The generality and robustness of the SPD-NN make it a promising tool for a wide range of constitutive modeling applications.
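A minimal sketch of the Cholesky-factored parameterization idea (assumed structure, not the SPD-NN implementation): an unconstrained parameter vector, such as a network output, is mapped to a lower-triangular factor $L$ with positive diagonal, so that $LL^T$ is symmetric positive definite by construction.

```python
import numpy as np

def params_to_spd(theta, n):
    """Map an unconstrained vector theta (hypothetical network output) to an SPD matrix."""
    L = np.zeros((n, n))
    L[np.tril_indices(n)] = theta
    L[np.diag_indices(n)] = np.exp(np.diag(L))   # positive diagonal via exp
    return L @ L.T                               # SPD whenever the diagonal of L is positive

n = 3
theta = np.random.default_rng(3).standard_normal(n * (n + 1) // 2)
H = params_to_spd(theta, n)
print("eigenvalues of the predicted stiffness:", np.linalg.eigvalsh(H))  # all positive
```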
Positive semi-definite matrices commonly occur as normal matrices of least squares problems in statistics or as kernel matrices in machine learning and approximation theory. They are typically large and dense. Thus algorithms to solve systems with such a matrix can be very costly. A core idea to reduce computational complexity is to approximate the matrix by one with a low rank. The optimal and well understood choice is based on the eigenvalue decomposition of the matrix. Unfortunately, this is computationally very expensive. Cheaper methods are based on Gaussian elimination but they require pivoting. We will show how invariant matrix theory provides explicit error formulas for an averaged error based on volume sampling. The formula leads to ratios of elementary symmetric polynomials on the eigenvalues. We discuss some new and old bounds and include several examples where an expected error norm can be computed exactly.
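For concreteness, the snippet below illustrates only the eigenvalue-decomposition baseline mentioned above, truncating the spectrum of a PSD matrix to obtain its best rank-$r$ approximation; the volume-sampling error formulas themselves are not reproduced, and the matrix is a synthetic placeholder.

```python
import numpy as np

rng = np.random.default_rng(4)
X = rng.standard_normal((50, 8))
K = X @ X.T                                   # synthetic PSD matrix (rank 8)

w, V = np.linalg.eigh(K)                      # eigenvalues in ascending order
r = 4
K_r = (V[:, -r:] * w[-r:]) @ V[:, -r:].T      # keep the r largest eigenpairs

# Relative error of the optimal rank-r approximation in the Frobenius norm.
print("relative Frobenius error:", np.linalg.norm(K - K_r) / np.linalg.norm(K))
```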
Xin Xing, Hua Huang, Edmond Chow (2020)
In an iterative approach for solving linear systems with ill-conditioned, symmetric positive definite (SPD) kernel matrices, both fast matrix-vector products and fast preconditioning operations are required. Fast (linear-scaling) matrix-vector products are available by expressing the kernel matrix in an $\mathcal{H}^2$ representation or an equivalent fast multipole method representation. Preconditioning such matrices, however, requires a structured matrix approximation that is more regular than the $\mathcal{H}^2$ representation, such as the hierarchically semiseparable (HSS) matrix representation, which provides fast solve operations. Previously, an algorithm was presented to construct an HSS approximation to an SPD kernel matrix that is guaranteed to be SPD. However, this algorithm has quadratic cost and was only designed for recursive binary partitionings of the points defining the kernel matrix. This paper presents a general algorithm for constructing an SPD HSS approximation. Importantly, the algorithm uses the $\mathcal{H}^2$ representation of the SPD matrix to reduce its computational complexity from quadratic to quasilinear. Numerical experiments illustrate how this SPD HSS approximation performs as a preconditioner for solving linear systems arising from a range of kernel functions.
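The $\mathcal{H}^2$ and SPD HSS constructions are not reproduced here; as a rough stand-in for the overall workflow, the sketch below solves an ill-conditioned SPD Gaussian-kernel system with preconditioned conjugate gradients, using a simple block-Jacobi preconditioner in place of the HSS solve. The kernel, length scale, and block sizes are illustrative assumptions.

```python
import numpy as np
from scipy.sparse.linalg import cg, LinearOperator

rng = np.random.default_rng(5)
pts = rng.uniform(size=(300, 2))
d = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=-1)
K = np.exp(-(d / 0.2) ** 2) + 1e-8 * np.eye(len(pts))   # SPD, ill-conditioned Gaussian kernel matrix
b = rng.standard_normal(len(pts))

# Block-Jacobi preconditioner: invert diagonal blocks of K (placeholder for the HSS solve).
blocks = np.array_split(np.arange(len(pts)), 10)
inv_blocks = [np.linalg.inv(K[np.ix_(i, i)]) for i in blocks]

def apply_prec(x):
    y = np.empty_like(x)
    for i, Binv in zip(blocks, inv_blocks):
        y[i] = Binv @ x[i]
    return y

M = LinearOperator(K.shape, matvec=apply_prec)
x, info = cg(K, b, M=M, atol=1e-8, maxiter=2000)
print("CG converged:", info == 0, " residual norm:", np.linalg.norm(K @ x - b))
```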