
Reduced Basis Greedy Selection Using Random Training Sets

Added by Wolfgang Dahmen
Publication date: 2018
Language: English





Reduced bases have been introduced for the approximation of parametrized PDEs in applications where many online queries are required. Their numerical efficiency for such problems has been theoretically confirmed in \cite{BCDDPW,DPW}, where it is shown that the reduced basis space $V_n$ of dimension $n$, constructed by a certain greedy strategy, has an approximation error comparable to that of the optimal space associated to the Kolmogorov $n$-width of the solution manifold $\mathcal{M}$. The greedy construction of the reduced basis space is performed in an offline stage which requires, at each step, a maximization of the current error over the parameter space. For the purpose of numerical computation, this maximization is performed over a finite {\em training set} obtained through a discretization of the parameter domain. To guarantee a final approximation error $\varepsilon$ for the space generated by the greedy algorithm requires in principle that the snapshots associated to this training set constitute an approximation net for the solution manifold with accuracy of order $\varepsilon$. Hence, the size of the training set is the $\varepsilon$-covering number of $\mathcal{M}$, and this covering number typically behaves like $\exp(C\varepsilon^{-1/s})$ for some $C>0$ when the solution manifold has $n$-width decay $O(n^{-s})$. Thus, the sheer size of the training set prohibits implementation of the algorithm when $\varepsilon$ is small. The main result of this paper shows that, if one is willing to accept results which hold with high probability rather than with certainty, then for a large class of relevant problems one may replace the fine discretization by a random training set of size polynomial in $\varepsilon^{-1}$. Our proof of this fact is established by using inverse inequalities for polynomials in high dimensions.
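The following minimal Python sketch illustrates the offline greedy loop over a random training set. The solution map `solve` is a toy smooth parameter-to-snapshot map standing in for a parametrized PDE solver, and `projection_error` computes exact projection errors where a practical implementation would use a cheap a posteriori error estimator; all names and sizes are illustrative, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "solution manifold": snapshots u(mu) in R^60 depending smoothly on a
# 4-dimensional parameter (a stand-in for a parametrized PDE solver).
A = rng.standard_normal((60, 4))
def solve(mu):
    return np.tanh(A @ mu)

def projection_error(mu, basis):
    # Exact projection error onto span(basis); in practice a cheap
    # a posteriori error estimator would be used instead.
    u = solve(mu)
    for v in basis:
        u = u - np.dot(u, v) * v
    return np.linalg.norm(u)

def greedy_reduced_basis(training_set, tol, n_max=25):
    # Weak greedy selection: at each step pick the training parameter with
    # the largest current error and add its orthonormalized snapshot.
    basis = []
    for _ in range(n_max):
        errs = np.array([projection_error(mu, basis) for mu in training_set])
        k = int(errs.argmax())
        if errs[k] <= tol:
            break
        u = solve(training_set[k])
        for v in basis:                      # Gram-Schmidt against current basis
            u = u - np.dot(u, v) * v
        basis.append(u / np.linalg.norm(u))
    return basis

# Random training set of size polynomial in 1/eps, replacing an eps-net of
# the parameter domain (whose size would grow exponentially in 1/eps).
eps = 0.05
training_set = rng.uniform(-1.0, 1.0, size=(int(eps ** -2), 4))
basis = greedy_reduced_basis(training_set, tol=eps)
print(len(basis), "basis functions selected")
```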




Read More

Linear kinetic transport equations play a critical role in optical tomography, radiative transfer and neutron transport. The fundamental difficulty hampering their efficient and accurate numerical resolution lies in the high dimensionality of the physical and velocity/angular variables and the fact that the problem is multiscale in nature. Leveraging the existence of a hidden low-rank structure hinted at by the diffusive limit, in this work, we design and test the angular-space reduced order model for the linear radiative transfer equation, the first such effort based on the celebrated reduced basis method (RBM). Our method is built upon a high-fidelity solver employing the discrete ordinates method in the angular space, an asymptotic preserving upwind discontinuous Galerkin method for the physical space, and an efficient synthetic accelerated source iteration for the resulting linear system. Addressing the challenge of the parameter values (or angular directions) being coupled through an integration operator, the first novel ingredient of our method is an iterative procedure where the macroscopic density is constructed from the RBM snapshots, treated explicitly and allowing a transport sweep, and then updated afterwards. A greedy algorithm can then proceed to adaptively select the representative samples in the angular space and form a surrogate solution space. The second novelty is a least-squares density reconstruction strategy, at each of the relevant physical locations, enabling the robust and accurate integration over an arbitrarily unstructured set of angular samples toward the macroscopic density. Numerical experiments indicate that our method is highly effective for computational cost reduction in a variety of regimes.
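In a drastically simplified 1D angular setting, the least-squares density reconstruction step can be pictured as fitting a low-degree Legendre expansion to the kinetic density at an unstructured set of angular samples and integrating the fit exactly. The sketch below uses toy data at a single physical location and is not the paper's discretization.

```python
import numpy as np
from numpy.polynomial import legendre

rng = np.random.default_rng(1)

# Unstructured angular samples v_j in [-1, 1] (e.g. greedily selected
# ordinates) and the kinetic density f(x, v_j) at one physical location.
v = np.sort(rng.uniform(-1.0, 1.0, 7))
f_at_x = np.exp(-v ** 2) * (1.0 + 0.3 * v)        # toy angular profile

# Least-squares fit of a low-degree Legendre expansion to the samples.
coef = legendre.legfit(v, f_at_x, deg=4)

# Over [-1, 1] every Legendre polynomial of degree >= 1 integrates to zero,
# so the macroscopic density (integral of the fit) is just 2 * c_0.
density = 2.0 * coef[0]
print("reconstructed macroscopic density:", density)
```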
Recently, neural networks have been widely applied for solving partial differential equations. However, the resulting optimization problem brings many challenges for current training algorithms. This manifests itself in the fact that the convergence order that has been proven theoretically cannot be obtained numerically. In this paper, we develop a novel greedy training algorithm for solving PDEs which builds the neural network architecture adaptively. It is the first training algorithm that observes the convergence order of neural networks numerically. This innovative algorithm is tested on several benchmark examples in both 1D and 2D to confirm its efficiency and robustness.
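The adaptive, neuron-by-neuron construction can be caricatured by an orthogonal greedy fit of a shallow ReLU network to a target function. The sketch below fits a plain function rather than a PDE residual, and the candidate pool and all parameters are illustrative choices, not the paper's algorithm.

```python
import numpy as np

rng = np.random.default_rng(2)

# Target to approximate on [-1, 1] (a stand-in for a PDE-residual objective).
x = np.linspace(-1.0, 1.0, 200)
f = np.sin(np.pi * x)

def relu(t):
    return np.maximum(t, 0.0)

def greedy_shallow_network(x, f, n_neurons=20, n_candidates=500):
    # Orthogonal greedy: at each step, draw a pool of candidate neurons,
    # keep the one most correlated with the current residual, then refit
    # all output-layer weights by least squares.
    features = [np.ones_like(x)]                      # bias term
    residual = f.copy()
    for _ in range(n_neurons):
        w = rng.uniform(-2.0, 2.0, n_candidates)      # candidate inner weights
        b = rng.uniform(-2.0, 2.0, n_candidates)      # candidate biases
        cand = relu(w[:, None] * x[None, :] + b[:, None])
        norms = np.linalg.norm(cand, axis=1) + 1e-12
        scores = np.abs(cand @ residual) / norms      # correlation with residual
        features.append(cand[scores.argmax()])
        Phi = np.stack(features, axis=1)
        coef, *_ = np.linalg.lstsq(Phi, f, rcond=None)
        residual = f - Phi @ coef
    return Phi, coef

Phi, coef = greedy_shallow_network(x, f)
print("RMS error:", np.linalg.norm(f - Phi @ coef) / np.sqrt(len(x)))
```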
Elise Grosjean (2021)
The context of this paper is the simulation of parameter-dependent partial differential equations (PDEs). When the aim is to solve such PDEs for a large number of parameter values, Reduced Basis Methods (RBM) are often used to reduce the computational costs of a classical high fidelity code based on the Finite Element Method (FEM), Finite Volume Method (FVM) or spectral methods. The efficient implementation of most of these RBM requires modifying the high fidelity code, which cannot be done, for example, in an industrial context if the high fidelity code is only accessible as a black-box solver. The Non Intrusive Reduced Basis method (NIRB) has been introduced in the context of finite elements as a good alternative to reduce the implementation costs of these parameter-dependent problems. The method is also efficient in contexts other than FEM, such as with finite volume schemes, which are more often used in an industrial environment. In this case, some adaptations need to be made, as the degrees of freedom in FV methods have different meanings. So far, error estimates have only been studied with FEM solvers. In this paper, we present a generalisation of the NIRB method to Finite Volume schemes and show that the estimates established for FEM solvers also hold in the FVM setting. We first prove our results for the hybrid-Mimetic Finite Difference method (hMFD), which is part of the Hybrid Mixed Mimetic (HMM) family of methods. Then, we explain how these results apply more generally to other FV schemes. Some of them are specified, such as the Two Point Flux Approximation (TPFA).
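A minimal sketch of the non-intrusive two-grid idea follows, with an analytic parametrized function standing in for both the black-box coarse and fine solvers, and with the rectification post-processing used in practice omitted; grids, basis size, and parameter values are illustrative.

```python
import numpy as np

# Toy parametrized "solution": u(x; mu) = sin(mu * x) / (1 + x), standing in
# for a black-box high-fidelity solver (FEM or FV).
x_fine = np.linspace(0.0, 1.0, 401)    # fine mesh
x_coarse = x_fine[::8]                 # coarse mesh (cheap online solves)

def solve(x, mu):
    return np.sin(mu * x) / (1.0 + x)

# Offline: fine-grid snapshots at training parameters, orthonormal basis via SVD.
mus_train = np.linspace(1.0, 10.0, 20)
S = np.stack([solve(x_fine, mu) for mu in mus_train], axis=1)
U, s, _ = np.linalg.svd(S, full_matrices=False)
V = U[:, :5]                           # reduced basis on the fine mesh

# Online (non-intrusive): coarse solve for a new parameter, interpolation to
# the fine mesh, then L2 projection onto the reduced basis; no access to the
# solver's internals is needed.
mu_new = 6.3
u_coarse = solve(x_coarse, mu_new)
u_interp = np.interp(x_fine, x_coarse, u_coarse)
u_nirb = V @ (V.T @ u_interp)

u_ref = solve(x_fine, mu_new)
print("NIRB relative error:", np.linalg.norm(u_nirb - u_ref) / np.linalg.norm(u_ref))
```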
The need for multiple interactive, real-time simulations using different parameter values has driven the design of fast numerical algorithms with certifiable accuracies. The reduced basis method (RBM) presents itself as such an option. RBM features a mathematically rigorous error estimator which drives the construction of a low-dimensional subspace. A surrogate solution is then sought in this low-dimensional space approximating the parameter-induced high fidelity solution manifold. However, when the system is nonlinear or its parameter dependence is nonaffine, this efficiency gain degrades tremendously, an inherent drawback of the application of the empirical interpolation method (EIM). In this paper, we augment and extend the EIM approach as a direct solver, as opposed to an assistant, for solving nonlinear partial differential equations on the reduced level. The resulting method, called the Reduced Over-Collocation method (ROC), is stable and capable of avoiding the efficiency degradation. Two critical ingredients of the scheme are collocation at about twice as many locations as the number of basis elements for the reduced approximation space, and an efficient error indicator for the strategic building of the reduced solution space. The latter, the main contribution of this paper, results from an adaptive hyper-reduction of the residuals for the reduced solution. Together, these two ingredients render the proposed R2-ROC scheme both offline- and online-efficient. A distinctive feature is that the efficiency degradation appearing in traditional RBM approaches that utilize EIM for nonlinear and nonaffine problems is circumvented, both in the offline and online stages. Numerical tests on different families of time-dependent and steady-state nonlinear problems demonstrate the high efficiency and accuracy of our R2-ROC and its superior stability performance.
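The over-collocation idea, collocating the nonlinear residual at roughly twice as many locations as reduced basis functions and solving a small least-squares problem, can be sketched on a toy algebraic system as below. The collocation rows here come from pivoted QR plus random extras, which is a placeholder for the paper's hyper-reduced error indicator; all sizes are illustrative.

```python
import numpy as np
from scipy.optimize import least_squares
from scipy.linalg import qr

rng = np.random.default_rng(3)

# Toy parametrized nonlinear system on a grid: u + mu * u**3 = g(x).
N = 200
x = np.linspace(0.0, 1.0, N)
g = np.sin(np.pi * x)

def residual(u, mu):
    return u + mu * u ** 3 - g

def full_solve(mu):
    return least_squares(residual, np.zeros(N), args=(mu,)).x

# Offline: snapshots at a few parameter values, compressed into a basis V.
snapshots = np.stack([full_solve(mu) for mu in np.linspace(0.1, 5.0, 8)], axis=1)
U, s, _ = np.linalg.svd(snapshots, full_matrices=False)
n = 4
V = U[:, :n]                                   # reduced basis, N x n

# Collocation set of size ~2n: n rows from pivoted QR of V (EIM-like),
# plus n extra rows (random here, as a stand-in for an error indicator).
_, _, piv = qr(V.T, pivoting=True)
rows = np.concatenate([piv[:n], rng.choice(N, n, replace=False)])

# Online: solve the over-collocated reduced problem for a new parameter.
def reduced_residual(c, mu):
    return residual(V @ c, mu)[rows]

mu_new = 2.3
c = least_squares(reduced_residual, np.zeros(n), args=(mu_new,)).x
u_ref = full_solve(mu_new)
print("relative error:", np.linalg.norm(V @ c - u_ref) / np.linalg.norm(u_ref))
```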
Finite-time coherent sets inhibit mixing over finite times. The most expensive part of the transfer operator approach to detecting coherent sets is the construction of the operator itself. We present a numerical method based on radial basis function collocation and apply it to a recent transfer operator construction that has been designed specifically for purely advective dynamics. The construction is based on a dynamic Laplacian operator and minimises the boundary size of the coherent sets relative to their volume. The main advantage of our new approach is a substantial reduction in the number of Lagrangian trajectories that need to be computed, leading to large speedups in the transfer operator analysis when this computation is costly.
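As a rough illustration of the generic radial basis function collocation ingredient only (not the paper's dynamic-Laplacian construction), the sketch below interpolates scattered samples with Gaussian RBFs and evaluates the Laplacian of the interpolant at the nodes; the node count and shape parameter are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(4)

# Scattered nodes (think: Lagrangian trajectory positions) in [-1, 1]^2.
pts = rng.uniform(-1.0, 1.0, (120, 2))
eps = 4.0                                            # Gaussian shape parameter

def gauss(r2):
    return np.exp(-eps ** 2 * r2)

def gauss_laplacian(r2):
    # 2D Laplacian of exp(-eps^2 r^2), expressed as a function of r^2.
    return (4.0 * eps ** 4 * r2 - 4.0 * eps ** 2) * np.exp(-eps ** 2 * r2)

r2 = ((pts[:, None, :] - pts[None, :, :]) ** 2).sum(-1)
A = gauss(r2)                                        # collocation matrix
L = gauss_laplacian(r2)                              # Laplacian of the basis

f = np.sin(np.pi * pts[:, 0]) * np.cos(np.pi * pts[:, 1])
coeff = np.linalg.solve(A, f)                        # RBF interpolation of f
lap_f = L @ coeff                                    # Laplacian of the interpolant
exact = -2.0 * np.pi ** 2 * f
print("relative max error:", np.abs(lap_f - exact).max() / np.abs(exact).max())
```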