
Computing the Action of Trigonometric and Hyperbolic Matrix Functions

Posted by Peter Kandolf
Publication date: 2016
Paper language: English





We derive a new algorithm for computing the action $f(A)V$ of the cosine, sine, hyperbolic cosine, and hyperbolic sine of a matrix $A$ on a matrix $V$, without first computing $f(A)$. The algorithm can compute $\cos(A)V$ and $\sin(A)V$ simultaneously, and likewise for $\cosh(A)V$ and $\sinh(A)V$, and it uses only real arithmetic when $A$ is real. The algorithm exploits an existing algorithm \texttt{expmv} of Al-Mohy and Higham for $\mathrm{e}^A V$ and its underlying backward error analysis. Our experiments show that the new algorithm performs in a forward stable manner and is generally significantly faster than alternatives based on multiple invocations of \texttt{expmv} through formulas such as $\cos(A)V = (\mathrm{e}^{\mathrm{i}A}V + \mathrm{e}^{-\mathrm{i}A}V)/2$.
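As an illustration of the baseline the abstract mentions, the formula $\mathrm{e}^{\mathrm{i}A}V = \cos(A)V + \mathrm{i}\sin(A)V$ (for real $A$ and $V$) lets one obtain both actions from a single complex call to \texttt{expmv}; SciPy's `expm_multiply` implements the Al-Mohy–Higham algorithm. This is a minimal sketch of that complex-arithmetic baseline, not the paper's new real-arithmetic algorithm; the matrix sizes and scaling are illustrative assumptions.

```python
import numpy as np
from scipy.sparse.linalg import expm_multiply  # Al-Mohy-Higham expmv

rng = np.random.default_rng(42)
n = 50
A = rng.standard_normal((n, n)) / np.sqrt(n)  # illustrative matrix, moderate norm
V = rng.standard_normal((n, 3))

# For real A and V, e^{iA}V = cos(A)V + i*sin(A)V, so one complex
# expmv call yields both actions at once. This is the kind of baseline
# the paper compares against; its new algorithm avoids complex arithmetic.
E = expm_multiply(1j * A, V.astype(complex))
cosAV = E.real
sinAV = E.imag
```

The result can be checked against the dense reference functions `scipy.linalg.cosm` and `scipy.linalg.sinm`, which form $f(A)$ explicitly and are therefore only practical for small matrices.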


Read also

It is shown that generalized trigonometric functions and generalized hyperbolic functions can be transformed from each other. As an application of this transformation, a number of properties for one immediately lead to the corresponding properties for the other. In this way, Mitrinović-Adamović-type inequalities, multiple-angle formulas, and double-angle formulas for both can be produced.
V.N. Temlyakov, 2015
The paper gives a constructive method, based on greedy algorithms, that provides for the classes of functions with small mixed smoothness the best possible in the sense of order approximation error for the $m$-term approximation with respect to the trigonometric system.
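The simplest greedy instance of $m$-term approximation with respect to the trigonometric system keeps the $m$ Fourier coefficients of largest magnitude. A minimal sketch under that interpretation (the function name and sample setup are illustrative, not from the paper, which treats classes of mixed smoothness):

```python
import numpy as np

def m_term_trig_approx(f_samples, m):
    """Greedy m-term approximation in the trigonometric system:
    keep the m Fourier coefficients of largest magnitude."""
    n = len(f_samples)
    c = np.fft.fft(f_samples) / n          # Fourier coefficients
    keep = np.argsort(np.abs(c))[::-1][:m]  # indices of the m largest
    c_m = np.zeros_like(c)
    c_m[keep] = c[keep]
    return np.fft.ifft(c_m * n).real        # synthesize the m-term sum
```

For a real trigonometric polynomial with $k$ harmonics, choosing $m \ge 2k$ (each real harmonic contributes a conjugate pair of coefficients) reproduces the function exactly.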
We investigate the problem of approximating the matrix function $f(A)$ by $r(A)$, with $f$ a Markov function, $r$ a rational interpolant of $f$, and $A$ a symmetric Toeplitz matrix. In a first step, we obtain a new upper bound for the relative interpolation error $1-r/f$ on the spectral interval of $A$. By minimizing this upper bound over all interpolation points, we obtain a new, simple and sharp a priori bound for the relative interpolation error. We then consider three different approaches for representing and computing the rational interpolant $r$. Theoretical and numerical evidence is given that any of these methods achieves high precision for a scalar argument, even in the presence of finite precision arithmetic. We finally investigate the problem of efficiently evaluating $r(A)$, where it turns out that the relative error for a matrix argument is only small if we use a partial fraction decomposition for $r$ following Antoulas and Mayo. An important role is played by a new stopping criterion which ensures that the degree of $r$ leading to a small error is found automatically, even in the presence of finite precision arithmetic.
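Once $r$ is in partial fraction form, $r(z) = c_0 + \sum_j w_j/(z - z_j)$, evaluating $r(A)b$ reduces to one shifted linear solve per pole. A minimal sketch of this evaluation step, with illustrative poles and weights (the Antoulas–Mayo construction of the decomposition itself is beyond this sketch):

```python
import numpy as np

def eval_pfd(poles, weights, c0, A, b):
    """Evaluate r(A) b for r(z) = c0 + sum_j w_j / (z - z_j)
    via one shifted linear solve per pole (no matrix function formed)."""
    y = c0 * b
    I = np.eye(A.shape[0])
    for z_j, w_j in zip(poles, weights):
        # (A - z_j I) x = b  gives  x = (A - z_j I)^{-1} b
        y = y + w_j * np.linalg.solve(A - z_j * I, b)
    return y
```

For large structured $A$ (such as the symmetric Toeplitz matrices in the abstract), the dense solve would be replaced by a structure-exploiting or iterative solver.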
We consider the matrix representation of the Eisenstein numbers and in this context we discuss the theory of the pseudo-hyperbolic functions. We develop a geometrical interpretation and show the usefulness of the method in physical problems related to the anomalous scattering of light by crystals.
The Evans function has been used extensively to study spectral stability of travelling-wave solutions in spatially extended partial differential equations. To compute Evans functions numerically, several shooting methods have been developed. In this paper, an alternative scheme for the numerical computation of Evans functions is presented that relies on an appropriate boundary-value problem formulation. Convergence of the algorithm is proved, and several examples, including the computation of eigenvalues for a multi-dimensional problem, are given. The main advantage of the scheme proposed here compared with earlier methods is that the scheme is linear and scalable to large problems.