
Solving the Model of the Risk of Microcephaly Induced by the Zika Virus (ZIKV) by a Modified Moving Least Squares Method

Publication date: 2016
Language: English





The aim of this work is the application of meshfree methods to solving systems of stiff ordinary differential equations (ODEs). These methods are based on the moving least squares (MLS) approximation, the generalized moving least squares (GMLS) approximation, and the modified moving least squares (MMLS) method. GMLS considerably reduces the cost of the numerical method: the differential operator acts directly on the polynomial basis rather than on the complicated MLS shape functions. The MMLS approximation, in turn, avoids a singular moment matrix, which allows basis functions of order greater than two with the same support-domain size as linear basis functions. We also estimate the error propagation in the numerical solution of systems of stiff ODEs. Several examples show that the GMLS and MMLS methods are more accurate than the classic MLS method. Finally, the proposed methods are validated by solving the ZIKV microcephaly-risk model, which is a system of ODEs.
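To make the GMLS shortcut concrete, the following is a minimal 1-D sketch in Python; the function names, the Gaussian weight, and the tiny regularization standing in for the MMLS fix are our own illustrative choices, not the paper's code.

import numpy as np

def gmls_value_and_derivative(x_eval, nodes, f_vals, degree=2, support=0.5):
    # Fit a weighted least-squares polynomial in coordinates shifted to
    # x_eval. The GMLS shortcut: differentiate the monomial basis
    # p(x) = [1, x, ..., x^degree] analytically, so the derivative at
    # x_eval is simply the coefficient c[1]; no MLS shape-function
    # derivatives are ever formed.
    d = nodes - x_eval
    w = np.exp(-(d / support) ** 2) * (np.abs(d) < support)  # compact weight
    P = np.vander(d, degree + 1, increasing=True)            # shifted monomials
    A = P.T @ (w[:, None] * P)                               # moment matrix
    if degree >= 2:
        # MMLS-style fix (our simplification): regularize the higher-degree
        # moments so A stays invertible without enlarging the support domain
        A[2:, 2:] += 1e-8 * np.eye(degree - 1)
    c = np.linalg.solve(A, P.T @ (w * f_vals))
    return c[0], c[1]  # approximate f(x_eval), f'(x_eval)

# hypothetical usage on smooth data
xs = np.linspace(0.0, 1.0, 41)
val, der = gmls_value_and_derivative(0.5, xs, np.sin(2 * np.pi * xs))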



Related research

We consider the right-preconditioned generalized minimal residual (AB-GMRES) method, an efficient method for solving underdetermined least squares problems. Morikuni (Ph.D. thesis, 2013) showed that for some inconsistent and ill-conditioned problems the iterates of the AB-GMRES method may diverge, mainly because the Hessenberg matrix in the GMRES method becomes so ill-conditioned that the backward substitution of the resulting triangular system is numerically unstable. We propose a stabilized GMRES based on solving the normal equations corresponding to the above triangular system using the standard Cholesky decomposition. This has the effect of shifting upwards the tiny singular values of the Hessenberg matrix which lead to an inaccurate solution. Thus, the process becomes numerically stable and the system becomes consistent, yielding better convergence and a more accurate solution. Numerical experiments show that the proposed method is robust and efficient for solving inconsistent and ill-conditioned underdetermined least squares problems. The method can be considered a way of making GMRES stable for highly ill-conditioned inconsistent problems.
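A minimal sketch of the stabilization step as we read it from the abstract (the function and variable names are ours): replace back substitution on the ill-conditioned triangular system by a Cholesky solve of its normal equations.

import numpy as np
from scipy.linalg import cho_factor, cho_solve, solve_triangular

def stabilized_hessenberg_solve(R, g):
    # Solve the (possibly very ill-conditioned) upper-triangular system
    # R y = g arising inside GMRES. Instead of plain back substitution,
    # form the normal equations R^T R y = R^T g and use Cholesky; rounding
    # in forming R^T R shifts the tiny singular values of R upward, which
    # is the stabilizing effect the paper describes.
    try:
        return cho_solve(cho_factor(R.T @ R), R.T @ g)
    except np.linalg.LinAlgError:
        # fall back to ordinary back substitution if Cholesky breaks down
        return solve_triangular(R, g)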
Recently, collocation-based radial basis function (RBF) partition of unity methods (PUM) for solving partial differential equations have been formulated and investigated numerically and theoretically. When combined with stable evaluation methods such as the RBF-QR method, high-order convergence rates can be achieved and sustained under refinement. However, some numerical issues remain: the method is sensitive to the node layout, and condition numbers increase with the refinement level. Here, we propose a modified formulation based on least squares approximation. We show that the sensitivity to node layout is removed and that conditioning can be controlled through oversampling. We derive theoretical error estimates for both the collocation and the least squares RBF-PUM. Numerical experiments are performed for the Poisson equation in two and three space dimensions for regular and irregular geometries. The convergence experiments confirm the theoretical estimates, and the least squares formulation is shown to be 5-10 times faster than the collocation formulation for the same accuracy.
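To illustrate the oversampling idea, here is a toy oversampled RBF least-squares collocation for a 1-D Poisson problem (Gaussian RBFs, no partition of unity and no RBF-QR stabilization; all parameter values are illustrative assumptions, not the authors' setup).

import numpy as np

def rbf_ls_poisson_1d(f, n_centers=20, oversample=4, eps=3.0):
    # Oversampled least-squares collocation for -u'' = f on [0, 1] with
    # u(0) = u(1) = 0: more sample points than RBF centers, so the
    # rectangular system is solved in the least squares sense.
    xc = np.linspace(0.0, 1.0, n_centers)               # RBF centers
    xs = np.linspace(0.0, 1.0, oversample * n_centers)  # oversampled points

    def phi(x, c):
        return np.exp(-(eps * (x[:, None] - c[None, :])) ** 2)

    def phi_xx(x, c):
        r = x[:, None] - c[None, :]
        return (4 * eps**4 * r**2 - 2 * eps**2) * np.exp(-(eps * r) ** 2)

    A = -phi_xx(xs, xc)                                 # PDE rows: -u'' = f
    b = f(xs)
    B = phi(np.array([0.0, 1.0]), xc)                   # boundary rows
    A = np.vstack([A, 100.0 * B])                       # weighted BC enforcement
    b = np.concatenate([b, [0.0, 0.0]])
    coef, *_ = np.linalg.lstsq(A, b, rcond=None)        # rectangular LS solve
    return lambda x: phi(np.atleast_1d(np.asarray(x, float)), xc) @ coef

# hypothetical usage: -u'' = pi^2 sin(pi x), exact solution u = sin(pi x)
u = rbf_ls_poisson_1d(lambda x: np.pi**2 * np.sin(np.pi * x))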
C.K. Li, Y.T. Poon (2008)
Let $S(A)$ denote the orbit of a complex or real matrix $A$ under a certain equivalence relation such as unitary similarity, unitary equivalence, or unitary congruence. Efficient gradient-flow algorithms are constructed to determine the best approximation of a given matrix $A_0$ by a sum of matrices in $S(A_1), \ldots, S(A_N)$, in the sense of finding the Euclidean least-squares distance $$\min \{\|X_1 + \cdots + X_N - A_0\| : X_j \in S(A_j),\ j = 1, \ldots, N\}.$$ Connections of the results to different pure and applied areas are discussed.
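For the single-orbit, unitary-similarity case, one plausible discretization of such a gradient flow is Riemannian gradient descent on the unitary group; the gradient formula below follows our own derivation and is not necessarily the authors' algorithm.

import numpy as np
from scipy.linalg import expm

def unitary_orbit_distance(A, A0, steps=2000, tau=1e-3, seed=0):
    # Gradient descent for min_U ||U A U^H - A0||_F over unitary U.
    # With M = U A U^H - A0, the Euclidean gradient is
    # G = 2 (M U A^H + M^H U A); projecting U^H G to its skew-Hermitian
    # part and retracting via the matrix exponential keeps U unitary.
    rng = np.random.default_rng(seed)
    n = A.shape[0]
    Z = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
    U, _ = np.linalg.qr(Z)                              # random unitary start
    for _ in range(steps):
        M = U @ A @ U.conj().T - A0
        G = 2 * (M @ U @ A.conj().T + M.conj().T @ U @ A)
        Omega = (U.conj().T @ G - G.conj().T @ U) / 2   # skew-Hermitian part
        U = U @ expm(-tau * Omega)                      # stays exactly unitary
    return np.linalg.norm(U @ A @ U.conj().T - A0), U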
Yanjun Zhang, Hanyu Li (2020)
We present a novel greedy Gauss-Seidel method for solving large linear least squares problems. This method improves the greedy randomized coordinate descent (GRCD) method proposed recently by Bai and Wu [Bai ZZ, Wu WT. On greedy randomized coordinate descent methods for solving large linear least-squares problems. Numer Linear Algebra Appl. 2019;26(4):1-15], which in turn improves the popular randomized Gauss-Seidel method. Convergence analysis of the new method is provided. Numerical experiments show that, for the same accuracy, our method outperforms the GRCD method in terms of computing time.
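A compact sketch of the greedy coordinate-descent idea behind these methods; the selection rule below is a simplified greedy criterion of our own, not the exact GRCD rule of Bai and Wu or the authors' refinement of it.

import numpy as np

def greedy_gauss_seidel(A, b, iters=5000, tol=1e-10):
    # Greedy coordinate descent for min_x ||Ax - b||_2: at each step pick
    # the column most correlated with the residual (scaled by its norm)
    # and minimize exactly along that coordinate.
    m, n = A.shape
    x = np.zeros(n)
    r = b.astype(float).copy()              # residual b - Ax for x = 0
    col_norms = np.sum(A * A, axis=0)       # ||A_j||^2, precomputed once
    for _ in range(iters):
        g = A.T @ r                         # column/residual correlations
        if np.linalg.norm(g) < tol:
            break
        j = np.argmax(g**2 / col_norms)     # greedy coordinate choice
        delta = g[j] / col_norms[j]         # exact 1-D minimization step
        x[j] += delta
        r -= delta * A[:, j]                # O(m) residual update
    return x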
Barak Sober, David Levin (2016)
In order to avoid the curse of dimensionality frequently encountered in big data analysis, there has been vast development in the field of linear and nonlinear dimension reduction techniques in recent years. These techniques (sometimes referred to as manifold learning) assume that the scattered input data lies on a lower-dimensional manifold, so the high dimensionality problem can be overcome by learning the lower-dimensional behavior. However, in real-life applications, data is often very noisy. In this work, we propose a method to approximate $\mathcal{M}$, a $d$-dimensional $C^{m+1}$ smooth submanifold of $\mathbb{R}^n$ ($d \ll n$), based upon noisy scattered data points (i.e., a data cloud). We assume that the data points are located near the lower-dimensional manifold and suggest a nonlinear moving least-squares projection onto an approximating $d$-dimensional manifold. Under some mild assumptions, the resulting approximant is shown to be infinitely smooth and of high approximation order (i.e., $O(h^{m+1})$, where $h$ is the fill distance and $m$ is the degree of the local polynomial approximation). The method presented here assumes no analytic knowledge of the approximated manifold, and the approximation algorithm is linear in the large dimension $n$. Furthermore, the approximating manifold can serve as a framework for performing operations directly on the high-dimensional data in a computationally efficient manner. This way, the preparatory step of dimension reduction, which induces distortions in the data, can be avoided altogether.
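A heavily simplified illustration of such a moving least-squares projection for a 1-D manifold in $\mathbb{R}^2$: a single non-iterative pass with a weighted-PCA coordinate frame. All names and parameter choices are ours; the actual method is iterative and works in general dimensions.

import numpy as np

def mls_manifold_project(p, cloud, h=0.3, degree=2):
    # Project point p onto a curve sampled by the noisy 2-D point cloud:
    # build a local frame by weighted PCA, fit a weighted polynomial of
    # the normal offset over the tangent coordinate, evaluate at p.
    d = cloud - p
    w = np.exp(-np.sum(d**2, axis=1) / h**2)      # locality weights
    mu = (w[:, None] * cloud).sum(0) / w.sum()    # weighted local mean
    X = cloud - mu
    C = (w[:, None] * X).T @ X                    # weighted covariance
    vals, vecs = np.linalg.eigh(C)
    t, nrm = vecs[:, -1], vecs[:, 0]              # tangent, normal directions
    s = X @ t                                     # local 1-D coordinate
    y = X @ nrm                                   # offset along the normal
    V = np.vander(s, degree + 1, increasing=True)
    c = np.linalg.solve(V.T @ (w[:, None] * V), V.T @ (w * y))
    s0 = (p - mu) @ t
    y0 = np.polyval(c[::-1], s0)                  # local polynomial at p
    return mu + s0 * t + y0 * nrm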
