
Linear Algebra in the vector space of intervals

Posted by: Nicolas Goze
Publication date: 2010
Research field: Informatics Engineering
Language: English
Author: Nicolas Goze





In a previous paper, we gave an algebraic model of the set of intervals. Here, we apply this model in a linear-algebra setting. We define a notion of diagonalization for square matrices whose coefficients are intervals. In contrast with the real case, a matrix of order $n$ may have more than $n$ eigenvalues (the set of intervals is not factorial). We introduce a notion of central eigenvalues, which permits us to state a criterion for diagonalization. As an application, we define a notion of exponential mapping.
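
As a rough illustration only (the algebraic model of intervals comes from the authors' earlier paper and is not reproduced here), the Python sketch below stores each interval by its midpoint and radius and reads the "central" eigenvalues as the eigenvalues of the midpoint matrix; this reading, and all names in the code, are assumptions made for the example, not the paper's construction.

    # Illustrative only: intervals stored as (midpoint, radius) pairs, and the
    # "central" spectrum read as the spectrum of the midpoint matrix.  This is
    # an assumption for the example, not the paper's algebraic model.
    import numpy as np
    from scipy.linalg import expm

    def interval(lo, hi):
        """Represent the interval [lo, hi] by its midpoint and radius."""
        return ((lo + hi) / 2.0, (hi - lo) / 2.0)

    def midpoint_matrix(A):
        """Real matrix of midpoints of a matrix of intervals."""
        return np.array([[m for (m, _) in row] for row in A])

    def radius_matrix(A):
        """Matrix of radii (the 'width' part) of a matrix of intervals."""
        return np.array([[r for (_, r) in row] for row in A])

    # A 2x2 matrix whose coefficients are intervals.
    A = [[interval(1.0, 3.0), interval(0.0, 2.0)],
         [interval(0.0, 2.0), interval(1.0, 3.0)]]

    M, R = midpoint_matrix(A), radius_matrix(A)
    print("central eigenvalues (of the midpoint matrix):", np.linalg.eigvals(M))  # 3.0, 1.0
    print("radii:\n", R)

    # Midpoint part of a candidate exponential: exp of the midpoint matrix
    # (illustration only, not the exponential mapping defined in the paper).
    print("exp of midpoint matrix:\n", expm(M))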




Read also

77 - Jun Lu 2021
This survey is meant to provide an introduction to the fundamental theorem of linear algebra and the theory behind it. Our goal is to give a rigorous introduction to readers with prior exposure to linear algebra. Specifically, we provide details and proofs of some results from (Strang, 1993). We then describe the fundamental theorem of linear algebra from different points of view and examine the properties of and relationships between these views. The fundamental theorem of linear algebra is essential in many fields, such as electrical engineering, computer science, machine learning, and deep learning. This survey is primarily a summary of the purpose and significance of the important theories behind it. The sole aim of this survey is to give a self-contained introduction to the concepts and mathematical tools behind the fundamental theorem of linear algebra, with rigorous analysis, in order to seamlessly introduce its properties concerning the four fundamental subspaces in subsequent sections. However, we clearly realize that we cannot cover all the useful and interesting results, given the limited scope of this discussion, e.g., a separate analysis of the (orthogonal) projection matrices. We refer the reader to the literature on linear algebra for a more detailed introduction to related topics. Some excellent examples include (Rose, 1982; Strang, 2009; Trefethen and Bau III, 1997; Strang, 2019, 2021).
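
As a concrete companion to the four-subspace picture described above, the small numpy sketch below computes the dimensions of the column space, row space, null space, and left null space of one specific matrix from its SVD and checks the dimension and orthogonality relations the theorem asserts; the matrix and tolerance are illustrative choices, not from the survey.

    # Fundamental theorem of linear algebra for a concrete matrix: dimensions
    # of the four fundamental subspaces obtained from the SVD.
    import numpy as np

    A = np.array([[1.0, 2.0, 3.0],
                  [2.0, 4.0, 6.0],      # dependent row, so rank(A) = 2
                  [1.0, 0.0, 1.0]])
    m, n = A.shape

    U, s, Vt = np.linalg.svd(A)
    r = int(np.sum(s > 1e-10 * s[0]))   # numerical rank

    # Column space C(A): first r left singular vectors; left null space N(A^T):
    # the rest.  Row space C(A^T) and null space N(A) come from V likewise.
    col_space      = U[:, :r]
    left_nullspace = U[:, r:]
    row_space      = Vt[:r, :].T
    nullspace      = Vt[r:, :].T

    print("rank r =", r)
    print("dim C(A)   =", col_space.shape[1], "  dim N(A^T) =", left_nullspace.shape[1])
    print("dim C(A^T) =", row_space.shape[1], "  dim N(A)   =", nullspace.shape[1])

    # Dimension counts and the orthogonality relations of the theorem:
    assert col_space.shape[1] + left_nullspace.shape[1] == m
    assert row_space.shape[1] + nullspace.shape[1] == n
    assert np.allclose(A @ nullspace, 0)         # N(A) is mapped to zero by A
    assert np.allclose(A.T @ left_nullspace, 0)  # N(A^T) is mapped to zero by A^T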
168 - Tomoaki Okayama 2013
The Sinc quadrature and the Sinc indefinite integration are approximation formulas for definite integration and indefinite integration, respectively, which can be applied on any interval by using an appropriate variable transformation. Their convergence rates have been analyzed for typical cases including finite, semi-infinite, and infinite intervals. In addition, for verified automatic integration, more explicit error bounds that are computable have been recently given on a finite interval. In this paper, such explicit error bounds are given in the remaining cases of semi-infinite and infinite intervals.
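
The paper's explicit error bounds are not reproduced in this excerpt; the sketch below only illustrates the general mechanism of Sinc quadrature on a semi-infinite interval, using the standard double-exponential transformation x = exp(t - exp(-t)) and an ad-hoc choice of step size and truncation rather than bounds from the paper.

    # Sketch of Sinc/DE quadrature on a semi-infinite interval: map [0, inf)
    # to the real line with x = exp(t - exp(-t)) and apply the truncated
    # trapezoidal rule.  Step size h and truncation N are chosen ad hoc.
    import numpy as np

    def de_quad_semi_infinite(f, h=0.1, N=60):
        """Approximate the integral of f over [0, inf) for exponentially decaying f."""
        k = np.arange(-N, N + 1)
        t = k * h
        x = np.exp(t - np.exp(-t))          # DE transformation
        dxdt = x * (1.0 + np.exp(-t))       # its derivative
        return h * np.sum(f(x) * dxdt)      # trapezoidal (Sinc) rule on the real line

    # Test: the integral of exp(-x) over [0, inf) equals 1.
    approx = de_quad_semi_infinite(lambda x: np.exp(-x))
    print(approx, abs(approx - 1.0))        # error near machine precision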
57 - K.S. Ryutin 2019
In this paper we continue the studies on the integer sparse recovery problem that was introduced in cite{FKS} and studied in cite{K}, cite{KS}. We provide an algorithm for the recovery of an unknown sparse integer vector for the measurement matrix described in cite{KS} and estimate the number of arithmetical operations.
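
The measurement matrix of cite{KS} and the authors' algorithm are not described in this excerpt, so the sketch below is only a brute-force illustration of the problem statement (recovering a sparse integer vector x from y = Ax); the sparsity level, entry bound, and random matrix are hypothetical, and the method is not the paper's.

    # Brute-force illustration of the integer sparse recovery *problem* only:
    # recover an s-sparse vector x with integer entries from y = A x.  This is
    # NOT the algorithm of the paper, which exploits a specific measurement
    # matrix; the support size and entry bound below are hypothetical.
    import itertools
    import numpy as np

    def recover_sparse_integer(A, y, s=2, bound=3, tol=1e-8):
        """Search over supports of size s and integer entries in [-bound, bound]."""
        m, n = A.shape
        values = range(-bound, bound + 1)
        for support in itertools.combinations(range(n), s):
            for entries in itertools.product(values, repeat=s):
                x = np.zeros(n)
                x[list(support)] = entries
                if np.linalg.norm(A @ x - y) < tol:
                    return x
        return None

    rng = np.random.default_rng(0)
    A = rng.standard_normal((4, 8))          # generic measurement matrix (illustrative)
    x_true = np.zeros(8); x_true[[1, 5]] = [2, -3]
    x_hat = recover_sparse_integer(A, A @ x_true)
    print(x_hat)                             # recovers x_true for a generic A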
In this paper we derive stability estimates in $L^{2}$- and $L^{\infty}$-based Sobolev spaces for the $L^{2}$ projection and a family of quasi-interpolants in the space of smooth, 1-periodic, polynomial splines defined on a uniform mesh in $[0,1]$. As a result of the assumed periodicity and the uniform mesh, cyclic matrix techniques and suitable decay estimates for the elements of the inverse of a Gram matrix associated with the standard basis of the spline space are used to establish the stability results.
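
As an illustration of the cyclic-matrix ingredient mentioned above, the sketch below builds the circulant Gram matrix of periodic piecewise-linear "hat" splines on a uniform mesh (entries 2h/3 on the diagonal and h/6 on the cyclic off-diagonals) and prints a row of its inverse, whose entries decay away from the diagonal; the choice of linear rather than smooth splines is made here only for brevity and is not the setting of the paper.

    # Circulant Gram matrix of periodic, piecewise-linear "hat" splines on a
    # uniform mesh of [0, 1], and the off-diagonal decay of its inverse.
    import numpy as np
    from scipy.linalg import circulant

    N = 32                 # number of mesh points / basis functions
    h = 1.0 / N            # uniform mesh width

    # First column of the circulant Gram matrix: the inner product of two hat
    # functions is 2h/3 when i = j, h/6 when |i - j| = 1 (cyclically), else 0.
    col = np.zeros(N)
    col[0], col[1], col[-1] = 2 * h / 3, h / 6, h / 6
    G = circulant(col)

    Ginv = np.linalg.inv(G)
    print("|G^{-1}[0, j]| for j = 0..6:")
    print(np.abs(Ginv[0, :7]))   # geometric decay away from the diagonal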
Unless special conditions apply, the attempt to solve ill-conditioned systems of linear equations with standard numerical methods leads to uncontrollably high numerical error. Often, such systems arise from the discretization of operator equations with a large number of discrete variables. In this paper we show that the accuracy can be improved significantly if the equation is transformed before discretization, a process we call full operator preconditioning (FOP). It bears many similarities with traditional preconditioning for iterative methods but, crucially, the transformations are applied at the operator level. We show that while condition-number improvements from traditional preconditioning generally do not improve the accuracy of the solution, FOP can. A number of topics in numerical analysis can be interpreted as implicitly employing FOP; we highlight (i) Chebyshev interpolation in polynomial approximation, and (ii) Olver and Townsend's spectral method, both of which produce solutions of dramatically improved accuracy over a naive problem formulation. In addition, we propose an FOP preconditioner based on integration for the solution of fourth-order differential equations with the finite-element method, show that the resulting linear system is well-conditioned regardless of the discretization size, and demonstrate its error-reduction capabilities on several examples. This work shows that FOP can improve accuracy beyond the standard limit for both direct and iterative methods.
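
In the spirit of example (i), the toy comparison below poses the same polynomial interpolation problem in two ways, a monomial-basis Vandermonde system at equispaced points versus a Chebyshev-basis system at Chebyshev points, and compares the condition numbers of the two linear systems. It is a generic illustration of how the formulation chosen before discretization affects conditioning, not the paper's FOP preconditioner.

    # Toy illustration: the same interpolation problem posed in two ways.  The
    # monomial-basis Vandermonde system at equispaced points is severely
    # ill-conditioned; the Chebyshev-basis system at Chebyshev points is not.
    import numpy as np

    n = 30                                           # polynomial degree
    x_equi = np.linspace(-1.0, 1.0, n + 1)           # equispaced nodes
    x_cheb = np.cos(np.pi * np.arange(n + 1) / n)    # Chebyshev (Lobatto) points

    V_mono = np.vander(x_equi, increasing=True)              # monomial basis
    V_cheb = np.polynomial.chebyshev.chebvander(x_cheb, n)   # Chebyshev basis

    print("cond, monomial/equispaced :", np.linalg.cond(V_mono))   # huge
    print("cond, Chebyshev/Chebyshev :", np.linalg.cond(V_cheb))   # modest

    # The well-conditioned formulation yields an accurate interpolant.
    f = lambda x: 1.0 / (1.0 + 16.0 * x**2)
    c_cheb = np.linalg.solve(V_cheb, f(x_cheb))
    xx = np.linspace(-1.0, 1.0, 1001)
    err = np.max(np.abs(np.polynomial.chebyshev.chebval(xx, c_cheb) - f(xx)))
    print("max error of Chebyshev interpolant:", err)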