
Solving differential Riccati equations: A nonlinear space-time method using tensor trains

Added by Martin Stoll
Publication date: 2019
Language: English





Differential algebraic Riccati equations are at the heart of many applications in control theory. They are time-dependent, matrix-valued, and in particular nonlinear equations that require special methods for their solution. Low-rank methods have been used extensively, computing a low-rank solution at every step of a time discretization. We instead propose an all-at-once space-time approach, which leads to a large nonlinear space-time problem that we solve with a Newton-Kleinman iteration. Approximating the space-time problem in low-rank form requires fewer applications of the discretized differential operator and yields a low-rank approximation to the overall solution.
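As a rough illustration of the Newton-Kleinman idea invoked above, the sketch below applies it to a small dense *algebraic* Riccati equation $A^T X + X A - X B B^T X + C^T C = 0$ with NumPy/SciPy. The paper's actual contribution, applying the iteration to a large nonlinear space-time system in low-rank tensor-train form, is not reproduced here; the problem data and the helper name `newton_kleinman` are illustrative assumptions.

```python
# Minimal dense Newton-Kleinman sketch (not the paper's low-rank solver):
# solve A'X + XA - XBB'X + C'C = 0 by repeated Lyapunov solves with the
# current feedback gain K = B'X.
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

def newton_kleinman(A, B, C, X0, tol=1e-10, max_iter=50):
    """X0 must stabilize A - B B' X0 (eigenvalues in the left half-plane)."""
    X = X0
    for _ in range(max_iter):
        K = B.T @ X                        # feedback gain of the current iterate
        A_cl = A - B @ K                   # closed-loop operator
        # Newton step: solve A_cl' X_new + X_new A_cl = -(C'C + K'K)
        X_new = solve_continuous_lyapunov(A_cl.T, -(C.T @ C + K.T @ K))
        if np.linalg.norm(X_new - X) <= tol * np.linalg.norm(X_new):
            return X_new
        X = X_new
    return X

rng = np.random.default_rng(0)
n, m, p = 6, 2, 2
A = -np.eye(n) + 0.1 * rng.standard_normal((n, n))   # stable, so X0 = 0 is admissible
B = rng.standard_normal((n, m))
C = rng.standard_normal((p, n))
X = newton_kleinman(A, B, C, np.zeros((n, n)))
residual = A.T @ X + X @ A - X @ B @ B.T @ X + C.T @ C   # ~ machine precision
```

Each Newton step costs one Lyapunov solve; it is exactly this linear-solve-per-step structure that the paper exploits in low-rank form.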



Read More

This paper presents a novel semi-analytical collocation method for solving multi-term variable-order time fractional partial differential equations (VOTFPDEs). The proposed method employs a Fourier series expansion for spatial discretization, which transforms the original multi-term VOTFPDEs into a sequence of multi-term variable-order time fractional ordinary differential equations (VOTFODEs). These VOTFODEs can then be solved using the recently developed backward substitution method. Several numerical examples verify the accuracy and efficiency of the proposed approach for the solution of multi-term VOTFPDEs.
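The core reduction, expanding in a spatial Fourier basis so that only ordinary differential equations in time remain, can be shown on a classical (non-fractional) model problem. Everything below, including the heat-equation test case, is an assumption standing in for the paper's fractional setting.

```python
# Toy version of the spatial step only (assumed setup: u_t = u_xx on
# (0, pi) with homogeneous Dirichlet data): a sine series turns the PDE
# into one decoupled ODE per mode, u_k'(t) = -k^2 u_k(t). The paper
# replaces u_k' by a multi-term variable-order fractional derivative and
# solves the resulting VOTFODEs by backward substitution, not shown here.
import numpy as np

def sine_coefficients(u0, K, N=2048):
    """First K sine-series coefficients of u0 on (0, pi), trapezoid quadrature."""
    x = np.linspace(0.0, np.pi, N + 1)
    w = np.full(N + 1, np.pi / N)
    w[0] = w[-1] = 0.5 * np.pi / N
    return np.array([2.0 / np.pi * np.sum(w * u0(x) * np.sin(k * x))
                     for k in range(1, K + 1)])

def heat_solution(u0, K, x, t):
    """u(x, t) = sum_k c_k exp(-k^2 t) sin(k x); each mode solves its own ODE."""
    c = sine_coefficients(u0, K)
    k = np.arange(1, K + 1)
    return (c * np.exp(-k**2 * t)) @ np.sin(np.outer(k, x))

x = np.linspace(0.0, np.pi, 101)
u = heat_solution(lambda s: s * (np.pi - s), K=32, x=x, t=0.1)
```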
Quanhui Zhu, Jiang Yang (2021)
At present, deep learning based methods are being employed to resolve the computational challenges of high-dimensional partial differential equations (PDEs). But computing high order derivatives of neural networks is costly, and high order derivatives lack robustness for training purposes. We propose a novel approach to solving PDEs with high order derivatives by simultaneously approximating the function value and its derivatives. We introduce intermediate variables to rewrite the PDEs into a system of low order differential equations, as is done in the local discontinuous Galerkin method. The intermediate variables and the solutions to the PDEs are simultaneously approximated by a multi-output deep neural network. By taking the residual of the system as a loss function, we can optimize the network parameters to approximate the solution. The whole process relies only on low order derivatives. Numerous numerical examples demonstrate that our local deep learning method is efficient, robust, flexible, and particularly well-suited for high-dimensional PDEs with high order derivatives.
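A one-dimensional toy version of this rewriting, assuming PyTorch and a Poisson model problem rather than the paper's high-dimensional examples: introducing $p = u'$ leaves only first derivatives in the residual loss, and one network with two outputs approximates $(u, p)$ jointly.

```python
# Sketch (assumptions: 1-D problem -u'' = f on (0, 1) with u(0) = u(1) = 0
# and exact solution u = sin(pi x)): introduce p = u' so the residual loss
# needs only first derivatives, and let a single two-output network
# approximate (u, p) simultaneously.
import torch

torch.manual_seed(0)
net = torch.nn.Sequential(
    torch.nn.Linear(1, 32), torch.nn.Tanh(),
    torch.nn.Linear(32, 32), torch.nn.Tanh(),
    torch.nn.Linear(32, 2),                   # outputs [u, p], with p meant to equal u'
)
f = lambda x: torch.pi**2 * torch.sin(torch.pi * x)
opt = torch.optim.Adam(net.parameters(), lr=1e-3)

for step in range(2000):
    x = torch.rand(256, 1, requires_grad=True)        # interior collocation points
    u, p = net(x).split(1, dim=1)
    du = torch.autograd.grad(u.sum(), x, create_graph=True)[0]
    dp = torch.autograd.grad(p.sum(), x, create_graph=True)[0]
    ub = net(torch.tensor([[0.0], [1.0]]))[:, :1]     # boundary values of u
    # residuals of the first-order system u' = p, -p' = f, plus boundary term
    loss = ((du - p)**2).mean() + ((dp + f(x))**2).mean() + (ub**2).mean()
    opt.zero_grad(); loss.backward(); opt.step()
```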
The aim of the present paper is to introduce a new numerical method for solving nonlinear Volterra integro-differential equations involving delay. We apply the trapezium rule to the integral involved in the equation, and the Daftardar-Gejji and Jafari (DGJ) method is then employed to solve the resulting implicit equation. An existence-uniqueness theorem is derived for solutions of such equations, and the error and convergence analysis of the proposed method is presented. We illustrate the efficacy of the newly proposed method with constructed examples.
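The two ingredients can be combined on a small test problem. The non-delay equation $y(t) = t + \int_0^t y(s)^2\,ds$ (exact solution $y = \tan t$) and the step size below are assumptions chosen for illustration; the delay argument handled in the paper would only change which already-computed history values enter the quadrature sum.

```python
# Sketch: trapezium rule makes each time step an implicit algebraic
# equation for y_n, which is solved by the Daftardar-Gejji and Jafari
# (DGJ) series for u = c + N(u): u0 = c, u_{m+1} = N(S_m) - N(S_{m-1}).
import numpy as np

def dgj_solve(c, N, n_terms=12):
    """Sum the DGJ series for the implicit equation u = c + N(u)."""
    S_prev, S = None, c
    for _ in range(n_terms):
        term = N(S) - (N(S_prev) if S_prev is not None else 0.0)
        S_prev, S = S, S + term
    return S

h, T = 0.01, 1.0
t = np.arange(0.0, T + h, h)
y = np.zeros_like(t)
for n in range(1, len(t)):
    # trapezium rule for int_0^{t_n} y^2 ds; only the last node y_n is unknown
    hist = h * (0.5 * y[0]**2 + np.sum(y[1:n]**2))
    y[n] = dgj_solve(t[n] + hist, lambda u: 0.5 * h * u**2)

err = np.max(np.abs(y - np.tan(t)))   # O(h^2), consistent with the trapezium rule
```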
Karim Halaseh, Tommi Muller (2020)
In this paper we study the problem of recovering a tensor network decomposition of a given tensor $\mathcal{T}$ in which the tensors at the vertices of the network are orthogonally decomposable. Specifically, we consider tensor networks in the form of tensor trains (also known as matrix product states). When the tensor train has length 2, and the orthogonally decomposable tensors at the two vertices of the network are symmetric, we show how to recover the decomposition by considering random linear combinations of slices. Furthermore, if the tensors at the vertices are symmetric but not orthogonally decomposable, we show that a whitening procedure can transform the problem into an orthogonal one, thereby yielding a solution for the decomposition of the tensor. When the tensor network has length 3 or more and the tensors at the vertices are symmetric and orthogonally decomposable, we provide an algorithm for recovering them subject to some rank conditions. Finally, in the case of tensor trains of length 2 in which the tensors at the vertices are orthogonally decomposable but not necessarily symmetric, we show that the decomposition problem reduces to a novel matrix decomposition: that of an orthogonal matrix multiplied by diagonal matrices on either side. We provide two solutions for the full-rank tensor case using Sinkhorn's theorem and the Procrustes algorithm, respectively, and show that the Procrustes-based solution generalizes to any rank. We conclude with a multitude of open problems in linear and multilinear algebra that arose in our study.
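The slice trick for the symmetric orthogonally decomposable case fits in a few lines. The sketch below, with assumed random test data, only checks this basic building block, not the paper's full tensor-train recovery algorithm.

```python
# For a symmetric orthogonally decomposable tensor
# T = sum_i lam_i v_i (x) v_i (x) v_i with orthonormal v_i, a random
# combination of slices M = sum_k w_k T[:, :, k] equals
# V diag(lam_i <w, v_i>) V^T, so an eigendecomposition of M recovers the
# v_i (almost surely, since random weights give distinct eigenvalues).
import numpy as np

rng = np.random.default_rng(1)
n, r = 5, 3
V, _ = np.linalg.qr(rng.standard_normal((n, r)))     # orthonormal columns v_i
lam = rng.uniform(1.0, 2.0, size=r)
T = np.einsum('i,ai,bi,ci->abc', lam, V, V, V)       # symmetric odeco tensor

w = rng.standard_normal(n)
M = np.einsum('abc,c->ab', T, w)                     # random slice combination
eigval, U = np.linalg.eigh(M)
idx = np.argsort(-np.abs(eigval))[:r]                # keep the r nonzero eigenpairs
V_rec = U[:, idx]
overlap = np.abs(V.T @ V_rec)   # a permutation matrix: v_i recovered up to sign/order
```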
In this work, an $r$-linearly converging adaptive solver is constructed for parabolic evolution equations in a simultaneous space-time variational formulation. Exploiting the product structure of the space-time cylinder, the family of trial spaces that we consider is given as the spans of wavelets-in-time and (locally refined) finite element spaces-in-space. Numerical results illustrate our theoretical findings.
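A sketch of the product structure being exploited, with Haar wavelets and hat functions standing in (as an assumption) for the adaptively refined bases of the paper: every trial function is a product $\psi(t)\,\phi(x)$, so space-time objects inherit a Kronecker/tensor structure from the two one-dimensional families.

```python
# Each space-time trial function is psi_{j,k}(t) * phi_i(x); as a result,
# space-time Gram matrices factor as Kronecker products of temporal and
# spatial ones. Haar wavelets and hat functions are illustrative choices.
import numpy as np

def haar(j, k, t):
    """Orthonormal Haar wavelet psi_{j,k} on [0, 1]."""
    s = 2.0**j * t - k
    return 2.0**(j / 2) * (np.where((0 <= s) & (s < 0.5), 1.0, 0.0)
                           - np.where((0.5 <= s) & (s < 1), 1.0, 0.0))

def hat(i, x, h):
    """Piecewise-linear hat function centered at x_i = i * h."""
    return np.maximum(0.0, 1.0 - np.abs(x / h - i))

t = np.linspace(0.0, 1.0, 9)[:, None]    # time grid
x = np.linspace(0.0, 1.0, 5)[None, :]    # space grid with mesh width h = 0.25
B = haar(1, 0, t) * hat(2, x, 0.25)      # one space-time basis function, shape (9, 5)
```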
