
An efficient algorithm for global interval solution of nonlinear algebraic equations and its GPGPU implementation

Posted by: Liangyu Chen
Publication date: 2018
Research field: Informatics Engineering
Paper language: English





Solving nonlinear algebraic equations is a classic mathematical problem that arises throughout scientific research and engineering applications. There are many numeric, symbolic, and numeric-symbolic methods for finding (real) solutions. Unfortunately, these methods are constrained by several factors, e.g., high complexity, slow serial computation, and the notorious intermediate expression swell. In particular, when the number of variables exceeds six, efficiency degrades drastically. In this paper, motivated by the properties of the physical world, we focus on nonlinear algebraic equations whose variables lie within fixed constraints, and seek physically meaningful real solutions. Exploiting the parallelism of GPGPU, we present an efficient algorithm that searches the solution space globally and solves the nonlinear algebraic equations with real interval solutions. Furthermore, we implement the Hansen-Sengupta method on the GPGPU. Experiments show that our method can solve many nonlinear algebraic equations, and that the results are accurate and obtained more efficiently than with traditional serial methods.
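
The paper's GPGPU code is not reproduced here; as a rough illustration of the general branch-and-prune idea behind global interval solving, the following minimal serial Python sketch subdivides a search box, contracts each sub-box with a one-dimensional interval Newton step (a simpler relative of the Hansen-Sengupta operator named in the abstract), and keeps only boxes that may still contain a root. The example equation f(x) = x^2 - 2 and the box [0, 4] are illustrative assumptions; in the GPGPU version described above, the independent boxes on the work list would be assigned to parallel threads.

```python
# Minimal serial sketch of interval branch-and-prune (not the paper's GPGPU code).
# Intervals are plain (lo, hi) pairs; the example problem is f(x) = x^2 - 2.

def f(x_lo, x_hi):
    """Interval extension of f(x) = x^2 - 2 (illustrative example)."""
    candidates = [x_lo * x_lo, x_lo * x_hi, x_hi * x_hi]
    return min(candidates) - 2.0, max(candidates) - 2.0

def df(x_lo, x_hi):
    """Interval extension of f'(x) = 2x."""
    return 2.0 * x_lo, 2.0 * x_hi

def newton_contract(x_lo, x_hi):
    """One interval-Newton contraction; returns a narrowed box or None if empty."""
    m = 0.5 * (x_lo + x_hi)
    fm = m * m - 2.0                       # f at the midpoint
    d_lo, d_hi = df(x_lo, x_hi)
    if d_lo <= 0.0 <= d_hi:                # derivative interval contains 0: skip
        return x_lo, x_hi
    q_lo, q_hi = sorted((fm / d_lo, fm / d_hi))
    n_lo, n_hi = m - q_hi, m - q_lo        # Newton image N(x) = m - f(m)/f'(x)
    lo, hi = max(x_lo, n_lo), min(x_hi, n_hi)
    return (lo, hi) if lo <= hi else None

def solve(x_lo, x_hi, tol=1e-8):
    """Branch-and-prune: every box on the work list is independent, which is the
    structure a GPGPU implementation can exploit by processing boxes in parallel."""
    work, roots = [(x_lo, x_hi)], []
    while work:
        lo, hi = work.pop()
        f_lo, f_hi = f(lo, hi)
        if f_lo > 0.0 or f_hi < 0.0:       # 0 not in f(box): no root here
            continue
        box = newton_contract(lo, hi)
        if box is None:
            continue
        lo, hi = box
        if hi - lo < tol:
            roots.append((lo, hi))         # tight enclosure of a real solution
        else:
            m = 0.5 * (lo + hi)
            work += [(lo, m), (m, hi)]     # bisect and keep searching globally
    return roots

print(solve(0.0, 4.0))                     # encloses sqrt(2) ~ 1.41421356
```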


Read also

Shoupu Wan (2020)
Manacher's algorithm has been shown to be optimal for the longest palindromic substring problem. Many of the existing implementations of this algorithm, however, unanimously require in-memory construction of an augmented string that is twice as long as the original string. Although it has found widespread use, we found that this preprocessing is neither economical nor necessary. We present a more efficient implementation of Manacher's algorithm based on index mapping that makes the string augmentation process obsolete.
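
As a rough illustration of the index-mapping idea described in this abstract (not the authors' implementation), the sketch below runs Manacher's algorithm over a *virtual* augmented string: equality of two virtual positions is decided by parity and a mapped lookup into the original string, so the doubled '#a#b#...' string is never materialized.

```python
def longest_palindrome(s: str) -> str:
    """Manacher's algorithm over a virtual augmented string: virtual index i maps
    to s[(i - 1) // 2] when i is odd and to the separator '#' when i is even."""
    n = len(s)
    if n == 0:
        return ""
    m = 2 * n + 1                  # length of the virtual augmented string
    p = [0] * m                    # p[i]: palindrome radius at virtual index i
    center = right = 0             # rightmost palindrome found so far

    def tchar_eq(i: int, j: int) -> bool:
        # i and j always have the same parity here; even indices are separators,
        # odd indices map back to characters of the original string.
        if i % 2 == 0:
            return True
        return s[(i - 1) // 2] == s[(j - 1) // 2]

    for i in range(m):
        if i < right:
            p[i] = min(right - i, p[2 * center - i])   # mirror trick
        while (i - p[i] - 1 >= 0 and i + p[i] + 1 < m
               and tchar_eq(i - p[i] - 1, i + p[i] + 1)):
            p[i] += 1
        if i + p[i] > right:
            center, right = i, i + p[i]

    best = max(range(m), key=lambda i: p[i])
    start = (best - p[best]) // 2  # map the virtual center back into s
    return s[start:start + p[best]]

print(longest_palindrome("babad"))   # "bab" or "aba"
print(longest_palindrome("cbbd"))    # "bb"
```
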
This paper presents an asynchronous incremental aggregated gradient algorithm and its implementation in a parameter server framework for solving regularized optimization problems. The algorithm can handle both general convex (possibly non-smooth) regularizers and general convex constraints. When the empirical data loss is strongly convex, we establish a linear convergence rate, give explicit expressions for step-size choices that guarantee convergence to the optimum, and bound the associated convergence factors. The expressions have an explicit dependence on the degree of asynchrony and recover classical results under synchronous operation. Simulations and implementations on commercial compute clouds validate our findings.
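
The following is a serial, synchronous Python sketch of the proximal incremental aggregated gradient idea on a hypothetical L1-regularized least-squares problem; the paper's contribution is the asynchronous parameter-server setting and its analysis, which is not reproduced here, and the step-size choice and problem data are illustrative assumptions.

```python
import numpy as np

def soft_threshold(v, t):
    """Proximal operator of t * ||.||_1 (soft-thresholding)."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def piag(A, b, lam=0.1, step=None, epochs=50, seed=0):
    """Serial sketch of a proximal incremental aggregated gradient method for
        min_x (1/(2n)) * ||A x - b||^2 + lam * ||x||_1.
    Each data point keeps its last computed gradient; one gradient is refreshed
    per iteration and the running aggregate of (possibly stale) gradients drives
    the update. In the asynchronous parameter-server version, workers refresh
    these gradients with bounded delay; here the refresh order is just random."""
    rng = np.random.default_rng(seed)
    n, d = A.shape
    if step is None:
        step = 1.0 / np.linalg.norm(A, 2) ** 2   # conservative step size
    x = np.zeros(d)
    grads = np.zeros((n, d))                     # stored per-sample gradients
    for i in range(n):
        grads[i] = (A[i] @ x - b[i]) * A[i]      # initialize the gradient table
    agg = grads.sum(axis=0)
    for _ in range(epochs * n):
        i = rng.integers(n)
        fresh = (A[i] @ x - b[i]) * A[i]         # refresh one stale gradient
        agg += fresh - grads[i]
        grads[i] = fresh
        x = soft_threshold(x - step * agg / n, step * lam)
    return x

# Small synthetic check (illustrative only).
rng = np.random.default_rng(1)
A = rng.standard_normal((200, 20))
x_true = np.zeros(20); x_true[:3] = [2.0, -1.0, 0.5]
b = A @ x_true + 0.01 * rng.standard_normal(200)
print(np.round(piag(A, b), 2)[:5])
```
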
While there has been extensive previous work on efficient quantum algorithms for linear differential equations, analogous progress for nonlinear differential equations has been severely limited due to the linearity of quantum mechanics. Despite this obstacle, we develop a quantum algorithm for initial value problems described by dissipative quadratic $n$-dimensional ordinary differential equations. Assuming $R < 1$, where $R$ is a parameter characterizing the ratio of the nonlinearity to the linear dissipation, this algorithm has complexity $T^2 \mathrm{poly}(\log T, \log n, \log 1/\epsilon)/\epsilon$, where $T$ is the evolution time and $\epsilon$ is the allowed error in the output quantum state. This is an exponential improvement over the best previous quantum algorithms, whose complexity is exponential in $T$. We achieve this improvement using the method of Carleman linearization, for which we give a novel convergence theorem. This method maps a system of nonlinear differential equations to an infinite-dimensional system of linear differential equations, which we discretize, truncate, and solve using the forward Euler method and the quantum linear system algorithm. We also provide a lower bound on the worst-case complexity of quantum algorithms for general quadratic differential equations, showing that the problem is intractable for $R \ge \sqrt{2}$. Finally, we discuss potential applications of this approach to problems arising in biology as well as in fluid and plasma dynamics.
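
As a classical illustration of the Carleman linearization step described in this abstract (not the quantum algorithm itself), the sketch below maps a hypothetical scalar dissipative quadratic ODE to a truncated linear system and integrates it with forward Euler, comparing against the closed-form solution. The truncation order N and all parameter values are illustrative assumptions.

```python
import numpy as np

def carleman_matrix(a, b, N):
    """Truncated Carleman matrix for the scalar quadratic ODE x' = -a*x + b*x^2.
    The Carleman variables y_k = x^k satisfy the linear system
        y_k' = -k*a*y_k + k*b*y_{k+1},
    which is truncated at order N by dropping y_{N+1}."""
    A = np.zeros((N, N))
    for k in range(1, N + 1):
        A[k - 1, k - 1] = -k * a
        if k < N:
            A[k - 1, k] = k * b
    return A

def carleman_euler(a, b, x0, T, steps, N):
    """Forward-Euler integration of the truncated linear Carleman system."""
    A = carleman_matrix(a, b, N)
    y = np.array([x0 ** k for k in range(1, N + 1)], dtype=float)
    h = T / steps
    for _ in range(steps):
        y = y + h * (A @ y)
    return y[0]                          # y_1 approximates x(T)

# Dissipative regime: nonlinearity weak relative to linear decay (R < 1 analogue).
a, b, x0, T = 2.0, 0.5, 0.8, 2.0
exact = 1.0 / ((1.0 / x0 - b / a) * np.exp(a * T) + b / a)   # closed-form solution
for N in (2, 4, 8):
    approx = carleman_euler(a, b, x0, T, steps=2000, N=N)
    print(f"truncation order N={N}: error = {abs(approx - exact):.2e}")
```
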
In this paper, we give an algebraic construction of the solution to the following mean field equation $$\Delta \psi + e^{\psi} = 4\pi \sum_{i=1}^{2g+2} \delta_{P_i},$$ on a genus $g \geq 2$ hyperelliptic curve $(X, ds^2)$, where $ds^2$ is a canonical metric on $X$ and $\{P_1, \cdots, P_{2g+2}\}$ is the set of Weierstrass points on $X$. Furthermore, we study the rescaled equation $$\Delta \psi + \gamma e^{\psi} = 4\pi \gamma \sum_{i=1}^{2g+2} \delta_{P_i}$$ and its adiabatic limit at $\gamma = 0$.
Biclustering is a data mining technique which searches for local patterns in numeric tabular data, with main application in bioinformatics. This technique has shown promise in multiple areas, including development of biomarkers for cancer, disease subtype identification, or gene-drug interactions among others. In this paper we introduce EBIC.JL - an implementation of one of the most accurate biclustering algorithms in Julia, a modern highly parallelizable programming language for data science. We show that the new version maintains comparable accuracy to its predecessor EBIC while converging faster for the majority of the problems. We hope that this open source software in a high-level programming language will foster research in this promising field of bioinformatics and expedite development of new biclustering methods for big data.