
Nonlinear reduced models for state and parameter estimation

Posted by Olga Mula
Publication date: 2020
Research field: Informatics Engineering
Paper language: English





State estimation aims at approximately reconstructing the solution $u$ to a parametrized partial differential equation from $m$ linear measurements, when the parameter vector $y$ is unknown. Fast numerical recovery methods have been proposed based on reduced models, which are linear spaces of moderate dimension $n$ tailored to approximate the solution manifold $\mathcal{M}$ where the solution sits. These methods can be viewed as deterministic counterparts to Bayesian estimation approaches, and are proved to be optimal when the prior is expressed by approximability of the solution with respect to the reduced model. However, they are inherently limited by their linear nature, which bounds their best possible performance from below by the Kolmogorov width $d_m(\mathcal{M})$ of the solution manifold. In this paper we propose to break this barrier by using simple nonlinear reduced models that consist of a finite union of linear spaces $V_k$, each having dimension at most $m$ and leading to a different estimator $u_k^*$. A model selection mechanism based on minimizing the PDE residual over the parameter space is used to select the final estimator $u^*$ from this collection. Our analysis shows that $u^*$ meets optimal recovery benchmarks that are inherent to the solution manifold and not tied to its Kolmogorov width. The residual minimization procedure is computationally simple in the relevant case of affine parameter dependence in the PDE. In addition, it yields an estimator $y^*$ for the unknown parameter vector. In this setting, we also discuss an alternating minimization (coordinate descent) algorithm for joint state and parameter estimation, which can potentially improve the quality of both estimators.
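To make the selection mechanism concrete, here is a minimal sketch in Python/NumPy (not the authors' code). Each candidate space `Vk` produces an estimate by a simple least-squares fit of its coefficients to the measurements `w`, and the final pair $(u^*, y^*)$ minimizes the discrete residual of an affine-in-parameter operator $A(y) = A_0 + \sum_j y_j A_j$ over a parameter grid. The measurement matrix `W` (columns are discrete Riesz representers), the bases `Vk`, the grid `y_grid`, and the right-hand side `f` are all hypothetical placeholders.

```python
import numpy as np

def candidate_estimate(Vk, W, w):
    """Least-squares state estimate constrained to span(Vk): find coefficients
    c minimizing ||W.T @ (Vk @ c) - w||_2 and return u_k* = Vk @ c.
    (A simplified stand-in for the paper's optimal-recovery estimator.)"""
    c, *_ = np.linalg.lstsq(W.T @ Vk, w, rcond=None)
    return Vk @ c

def residual(u, y, A_terms, f):
    """Discrete PDE residual ||A(y) u - f||_2 for an affine parameter
    dependence A(y) = A0 + sum_j y[j] * Aj."""
    A0, As = A_terms
    Ay = A0 + sum(yj * Aj for yj, Aj in zip(y, As))
    return np.linalg.norm(Ay @ u - f)

def select_estimates(spaces, W, w, y_grid, A_terms, f):
    """Model selection: among the candidates u_k*, return the state/parameter
    pair (u*, y*) with the smallest residual over a grid of parameter values."""
    best_r, u_star, y_star = np.inf, None, None
    for Vk in spaces:
        u_k = candidate_estimate(Vk, W, w)
        for y in y_grid:
            r = residual(u_k, y, A_terms, f)
            if r < best_r:
                best_r, u_star, y_star = r, u_k, y
    return u_star, y_star
```

An alternating scheme, as mentioned in the abstract, would then iterate between this residual minimization in $y$ with the state fixed and a state update with $y$ fixed.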




Read also

The exploration of complex physical or technological processes usually requires exploiting available information from different sources: (i) physical laws, often represented as a family of parameter-dependent partial differential equations, and (ii) data provided by measurement devices or sensors. The number of sensors is typically limited and data acquisition may be expensive, in some cases even harmful. This article reviews some recent developments for this small-data scenario, where inversion is strongly aggravated by the typically large parametric dimensionality. The proposed concepts may be viewed as exploring alternatives to Bayesian inversion in favor of more deterministic accuracy quantification related to the required computational complexity. We discuss optimality criteria which delineate intrinsic information limits, and highlight the role of reduced models for developing efficient computational strategies. In particular, the need to adapt the reduced models -- not to a specific (possibly noisy) data set but rather to the sensor system -- is a central theme. This, in turn, is facilitated by exploiting geometric perspectives based on proper stable variational formulations of the continuous model.
Reduced model spaces, such as reduced basis and polynomial chaos, are linear spaces $V_n$ of finite dimension $n$ which are designed for the efficient approximation of families of parametrized PDEs in a Hilbert space $V$. The manifold $\mathcal{M}$ that gathers the solutions of the PDE for all admissible parameter values is globally approximated by the space $V_n$ with some controlled accuracy $\epsilon_n$, which is typically much smaller than when using standard approximation spaces of the same dimension such as finite elements. Reduced model spaces have also been proposed in [13] as a vehicle to design a simple linear recovery algorithm of the state $u\in\mathcal{M}$ corresponding to a particular solution when the values of the parameters are unknown but a set of data is given by $m$ linear measurements of the state. The measurements are of the form $\ell_j(u)$, $j=1,\dots,m$, where the $\ell_j$ are linear functionals on $V$. The analysis of this approach in [2] shows that the recovery error is bounded by $\mu_n\epsilon_n$, where $\mu_n=\mu(V_n,W)$ is the inverse of an inf-sup constant that describes the angle between $V_n$ and the space $W$ spanned by the Riesz representers of $(\ell_1,\dots,\ell_m)$. A reduced model space which is efficient for approximation might thus be ineffective for recovery if $\mu_n$ is large or infinite. In this paper, we discuss the existence and construction of an optimal reduced model space for this recovery method, and we extend our search to affine spaces. Our basic observation is that this problem is equivalent to the search for an optimal affine algorithm for the recovery of $\mathcal{M}$ in the worst-case error sense. This allows us to perform our search by a convex optimization procedure. Numerical tests illustrate that the reduced model spaces constructed with our approach perform better than classical reduced basis spaces.
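The stability constant $\mu_n=\mu(V_n,W)$ admits a simple discrete surrogate: it is the reciprocal of the smallest singular value of the cross-Gramian between orthonormal bases of $V_n$ and $W$. A minimal sketch, assuming the discretized space carries the Euclidean inner product:

```python
import numpy as np

def mu(Vn, W):
    """Discrete surrogate for mu(Vn, W) = 1 / beta(Vn, W): beta is the
    smallest singular value of the cross-Gramian between orthonormal bases
    of the reduced space Vn and the measurement space W (columns of each
    matrix span the respective space; Euclidean inner product assumed)."""
    Qv, _ = np.linalg.qr(Vn)  # orthonormal basis of V_n
    Qw, _ = np.linalg.qr(W)   # orthonormal basis of W
    beta = np.linalg.svd(Qw.T @ Qv, compute_uv=False).min()
    return np.inf if beta == 0.0 else 1.0 / beta
```

A return value of `np.inf` corresponds to an element of $V_n$ that is invisible to the measurements, which is precisely the failure mode described above.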
The task of repeatedly solving parametrized partial differential equations (pPDEs) in, e.g., optimization or interactive applications makes it imperative to design highly efficient and equally accurate surrogate models. The reduced basis method (RBM) presents itself as such an option. Enabled by a mathematically rigorous error estimator, RBM constructs a low-dimensional subspace of the parameter-induced high-fidelity solution manifold from which an approximate solution is computed. It can improve efficiency by several orders of magnitude by leveraging an offline-online decomposition procedure. However, this decomposition, usually through the empirical interpolation method (EIM) when the PDE is nonlinear or its parameter dependence nonaffine, is either challenging to implement or severely degrades online efficiency. In this paper, we augment and extend the EIM approach as a direct solver, as opposed to an assistant, for solving nonlinear pPDEs on the reduced level. The resulting method, called the Reduced Over-Collocation method (ROC), is stable and capable of avoiding the efficiency degradation inherent to a traditional application of EIM. Two critical ingredients of the scheme are collocation at about twice as many locations as the dimension of the reduced solution space, and an efficient L1-norm-based error indicator for the strategic selection of the parameter values used to build the reduced solution space. Together, these two ingredients render the proposed L1-ROC scheme both offline- and online-efficient. A distinctive feature is that the efficiency degradation appearing in alternative RBM approaches that utilize EIM for nonlinear and nonaffine problems is circumvented, both in the offline and online stages. Numerical tests on different families of time-dependent and steady-state nonlinear problems demonstrate the high efficiency and accuracy of L1-ROC and its superior stability performance.
The onerous task of repeatedly resolving certain parametrized partial differential equations (pPDEs) in, e.g., the optimization context makes it imperative to design vastly more efficient numerical solvers without sacrificing any accuracy. The reduced basis method (RBM) presents itself as such an option. With a mathematically rigorous error estimator, RBM seeks a surrogate solution in a carefully built subspace of the parameter-induced high-fidelity solution manifold. It can improve efficiency by several orders of magnitude by leveraging an offline-online decomposition procedure. However, this decomposition, usually through the empirical interpolation method (EIM) when the PDE is nonlinear or its parameter dependence nonaffine, is either challenging to implement or severely degrades online efficiency. In this paper, we augment and extend the EIM approach in the context of solving pPDEs in two different ways, resulting in the Reduced Over-Collocation methods (ROC). These are stable and capable of avoiding the efficiency degradation inherent to a direct application of EIM. The methods have two ingredients. The first is a strategy to collocate at about twice as many locations as the number of bases for the surrogate space. The second is an efficient approach for the strategic selection of the parameter values used to build the reduced solution space, for which we study two choices: a recent empirical L1 approach and a new indicator based on the reduced residual. Together, these two ingredients render the schemes, L1-ROC and R2-ROC, online-efficient and immune from the efficiency degradation of EIM for nonlinear and nonaffine problems, both offline and online. Numerical tests on three different families of nonlinear problems demonstrate the high efficiency and accuracy of these new algorithms and their superior stability performance.
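At the online stage, the over-collocation idea shared by the two abstracts above reduces each parameter query to an overdetermined nonlinear system: enforce the discrete residual at roughly twice as many collocation points as reduced basis functions and solve in the least-squares sense. A minimal sketch, where `full_residual` and the collocation index set `idx` are hypothetical placeholders assumed to have been chosen offline:

```python
import numpy as np
from scipy.optimize import least_squares

def roc_online_solve(V, idx, full_residual, y, c0):
    """Reduced over-collocation online solve: find reduced coefficients c so
    that the full residual of the surrogate V @ c, restricted to the ~2n
    collocation rows idx (with n = V.shape[1]), vanishes in the least-squares
    sense. full_residual(u, y) returns the discrete PDE residual vector at
    parameter y; c0 is an initial guess for the Gauss-Newton-type iteration."""
    def reduced_residual(c):
        return full_residual(V @ c, y)[idx]
    return V @ least_squares(reduced_residual, c0).x  # surrogate solution
```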
This work presents a nonintrusive, physics-preserving method to learn reduced-order models (ROMs) of Hamiltonian systems. Traditional intrusive projection-based model reduction approaches utilize symplectic Galerkin projection to construct Hamiltonian reduced models by projecting Hamilton's equations of the full model onto a symplectic subspace. This symplectic projection requires complete knowledge of the full model operators and full access to manipulate the computer code. In contrast, the proposed Hamiltonian operator inference approach embeds the physics into the operator inference framework to develop a data-driven model reduction method that preserves the underlying symplectic structure. Our method exploits knowledge of the Hamiltonian functional to define and parametrize a Hamiltonian ROM form, which can then be learned from data projected via symplectic projectors. The proposed method is "gray-box" in that it utilizes knowledge of the Hamiltonian structure at the partial differential equation level, as well as knowledge of spatially local components in the system. However, it does not require access to computer code, only data to learn the models. Our numerical results demonstrate Hamiltonian operator inference on a linear wave equation, the cubic nonlinear Schrödinger equation, and a nonpolynomial sine-Gordon equation. Accurate long-time predictions far outside the training time interval for nonlinear examples illustrate the generalizability of our learned models.
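As a concrete illustration of the structure-preserving idea, the sketch below fits a linear Hamiltonian ROM $\dot{x} = JAx$ with quadratic Hamiltonian $H(x)=\frac{1}{2}x^\top Ax$ to (projected) snapshot data, then symmetrizes the learned operator so that the Hamiltonian form is respected. This is a simplification restricted to the linear case, not the authors' implementation:

```python
import numpy as np

def infer_linear_hamiltonian_rom(X, Xdot):
    """Fit A in xdot = J A x from snapshots X (2n x K) and time derivatives
    Xdot by least squares, where J is the canonical symplectic matrix. The
    final symmetrization enforces the quadratic-Hamiltonian structure
    (a simple post-hoc projection; the paper poses a structured problem)."""
    n = X.shape[0] // 2
    J = np.block([[np.zeros((n, n)), np.eye(n)],
                  [-np.eye(n), np.zeros((n, n))]])
    rhs = J.T @ Xdot                                 # J^{-1} = J^T, so A X ~= J^T Xdot
    M, *_ = np.linalg.lstsq(X.T, rhs.T, rcond=None)  # solves X^T A^T ~= rhs^T
    A = M.T
    return 0.5 * (A + A.T)                           # project onto symmetric matrices
```

Simulating the learned model with a symplectic integrator then keeps the learned Hamiltonian nearly conserved over long horizons, consistent with the long-time prediction claim above.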