
Numerics for Stochastic Distributed Parameter Control Systems: a Finite Transposition Method

Posted by Yanqing Wang
Publication date: 2021
Research field: Informatics Engineering
Paper language: English





In this chapter, we present some recent progress on the numerics for stochastic distributed parameter control systems, based on the finite transposition method introduced in our previous works. We first explain how to reduce the numerics of some stochastic control problems of this kind to the numerics of backward stochastic evolution equations. Then we present a method for finding finite transposition solutions to such equations. Finally, we give an illuminating example.
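For orientation, the display below sketches the transposition idea in its simplest linear form: a backward stochastic evolution equation, a forward test equation, and the duality identity that defines the solution pair. The operator A, the spaces, and the integrability assumptions are deliberately simplified here and should not be read as the paper's precise setting.

\[
\begin{aligned}
&\text{backward equation:} && dy(t) = -\bigl(A^{*}y(t) + f(t)\bigr)\,dt + Y(t)\,dW(t), && y(T) = y_{T},\\
&\text{test equation:} && dz(s) = \bigl(Az(s) + u(s)\bigr)\,ds + v(s)\,dW(s), && z(t) = \eta,\\
&\text{duality identity:} && \mathbb{E}\,\langle z(T), y_{T}\rangle - \mathbb{E}\,\langle \eta, y(t)\rangle
 = \mathbb{E}\int_{t}^{T}\bigl(\langle u, y\rangle + \langle v, Y\rangle - \langle z, f\rangle\bigr)\,ds. &&
\end{aligned}
\]

A pair (y, Y) satisfying the identity for all admissible test data (η, u, v) is a transposition solution; loosely speaking, a finite transposition solution asks for the identity only over a finite-dimensional family of test processes, which is what makes the notion computable.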




Read also

Paul Manns, Stefan Ulbrich (2021)
In PDE-constrained optimization, proper orthogonal decomposition (POD) provides a surrogate model of a (potentially expensive) PDE discretization, on which optimization iterations are executed. Because POD models usually provide good approximation quality only locally, they have to be updated during optimization. Updating the POD model is usually expensive, however, and therefore often impossible in a model-predictive control (MPC) context. Thus, reduced models of mediocre quality might be accepted. We take the view of a simplified Newton method for solving semilinear evolution equations to derive an algorithm that can serve as an offline phase to produce a POD model. Approaches that build the POD model with impulse response snapshots can be regarded as the first Newton step in this context. In particular, POD models that are based on impulse response snapshots are extended by adding a second simplified Newton step. This procedure improves the approximation quality of the POD model significantly by introducing a moderate amount of extra computational costs during optimization or the MPC loop. We illustrate our findings with an example satisfying our assumptions.
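As a small illustration of the POD building block referred to above (not of the two-step simplified-Newton procedure itself), the snippet below extracts a POD basis from a snapshot matrix via the singular value decomposition; the snapshot data, tolerance, and function name are hypothetical.

import numpy as np

def pod_basis(snapshots, energy_tol=1e-4):
    # snapshots: (n_dofs, n_snapshots) array, e.g. impulse-response states
    # of a semi-discretized evolution equation.
    # energy_tol: fraction of snapshot energy that may be discarded.
    U, s, _ = np.linalg.svd(snapshots, full_matrices=False)
    energy = np.cumsum(s**2) / np.sum(s**2)
    r = int(np.searchsorted(energy, 1.0 - energy_tol)) + 1
    return U[:, :r]                 # orthonormal POD modes as columns

# Hypothetical usage with stand-in snapshot data.
X = np.random.rand(500, 40)
V = pod_basis(X, energy_tol=1e-3)
print(V.shape)

The reduced model is then obtained by Galerkin projection of the full discretization onto the span of the returned modes.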
Xiaoyu Wu, Yin Xu, Xiangyu Wu (2018)
This paper presents a two-layer, four-level distributed control method for networked microgrid (NMG) systems, taking into account the proprietary nature of microgrid (MG) owners. The proposed control architecture consists of a MG-control layer and a NMG-control layer. In the MG layer, the primary and distributed secondary control realize accurate power sharing among distributed generators (DGs) and the frequency/voltage reference following within each MG. In the NMG layer, the tertiary control enables regulation of the power flowing through the point of common coupling (PCC) of each MG in a decentralized manner. Furthermore, the distributed quaternary control restores system frequency and critical bus voltage to their nominal values and ensures accurate power sharing among MGs. A small-signal dynamic model is developed to evaluate dynamic performance of NMG systems with the proposed control method. Time-domain simulations as well as experiments on NMG test systems are performed to validate the effectiveness of the proposed method.
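To make the layered terminology concrete, the display below shows a textbook-style primary droop law with secondary correction terms for a single DG unit; it is a generic illustration under simplified assumptions, not the specific control laws proposed in the paper.

\[
\omega_{i} = \omega^{\mathrm{ref}} - m_{P,i}\,P_{i} + \Omega_{i}, \qquad
V_{i} = V^{\mathrm{ref}} - n_{Q,i}\,Q_{i} + e_{i},
\]

where m_{P,i} and n_{Q,i} are droop gains, and the secondary terms \Omega_{i} and e_{i} are updated by a distributed protocol (e.g. a consensus rule over neighboring units) so that frequency and critical voltages return to their nominal values while proportional power sharing is preserved. The tertiary and quaternary levels described above then act at the level of whole microgrids.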
We introduce a time-implicit, finite-element based space-time discretization scheme for the backward stochastic heat equation, and for the forward-backward stochastic heat equation from stochastic optimal control, and prove strong rates of convergence. The fully discrete version of the forward-backward stochastic heat equation is then used within a gradient descent algorithm to approximately solve the linear-quadratic control problem for the stochastic heat equation driven by additive noise.
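As a structural illustration only, the following sketch applies the same forward-solve / adjoint-solve / gradient-step pattern to a deterministic 1D heat equation with an implicit Euler discretization; the stochastic terms, the finite-element space discretization, and the backward SPDE solver of the actual scheme are not reproduced, and all sizes and parameters are hypothetical toy choices.

import numpy as np

n, K, T = 50, 100, 1.0
h, dt = 1.0 / (n + 1), T / K
alpha = 1e-3                                   # control penalty
lap = (np.diag(-2.0 * np.ones(n)) +
       np.diag(np.ones(n - 1), 1) +
       np.diag(np.ones(n - 1), -1)) / h**2     # 1D Dirichlet Laplacian
M = np.linalg.inv(np.eye(n) - dt * lap)        # implicit Euler step: x -> M (x + dt*u)
x_target = np.sin(np.pi * np.linspace(h, 1.0 - h, n))

def forward(u):                                # u has shape (K, n)
    x = np.zeros(n)
    for k in range(K):
        x = M @ (x + dt * u[k])
    return x

def cost_and_grad(u):
    xT = forward(u)
    J = 0.5 * np.sum((xT - x_target) ** 2) + 0.5 * alpha * dt * np.sum(u ** 2)
    p = xT - x_target                          # terminal adjoint state
    grad = np.zeros_like(u)
    for k in reversed(range(K)):               # backward (adjoint) sweep
        p = M.T @ p
        grad[k] = dt * (p + alpha * u[k])
    return J, grad

u = np.zeros((K, n))
for _ in range(200):
    J, g = cost_and_grad(u)
    u -= 200.0 * g                             # fixed step size for this toy problem
print("final cost:", cost_and_grad(u)[0])

In the stochastic setting the adjoint sweep is replaced by the space-time discretization of the backward stochastic heat equation, which is where the convergence analysis of the paper applies.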
We propose a time-implicit, finite-element based space-time discretization of the necessary and sufficient optimality conditions for the stochastic linear-quadratic optimal control problem with the stochastic heat equation driven by linear noise of type $[X(t)+\sigma(t)]\,dW(t)$, and prove optimal convergence w.r.t. both space and time discretization parameters. In particular, we employ the stochastic Riccati equation as a proper analytical tool to handle the linear noise, and thus extend the applicability of the earlier work [16], where the error analysis was restricted to additive noise.
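For readers less familiar with this setup, the schematic below shows the kind of problem such a Riccati-based analysis addresses: a controlled state equation with linear noise, a quadratic cost, and a feedback law built from a Riccati-type equation. Operators, weights, and spaces are simplified here and need not match the paper's exact formulation.

\[
\begin{aligned}
& dX(t) = \bigl(AX(t) + Bu(t)\bigr)\,dt + \bigl(X(t) + \sigma(t)\bigr)\,dW(t), \qquad X(0) = X_{0},\\
& J(u) = \tfrac{1}{2}\,\mathbb{E}\int_{0}^{T}\bigl(\|X(t)\|^{2} + \lambda\,\|u(t)\|^{2}\bigr)\,dt
         + \tfrac{1}{2}\,\mathbb{E}\,\|X(T)\|^{2},\\
& u^{*}(t) = -\lambda^{-1}B^{*}\bigl(P(t)\,X^{*}(t) + \varphi(t)\bigr),
\end{aligned}
\]

where P solves an operator-valued Riccati equation whose quadratic structure absorbs the multiplicative part X\,dW of the noise, and \varphi solves an auxiliary backward equation accounting for the affine part \sigma\,dW.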
We present AUQ-ADMM, an adaptive uncertainty-weighted consensus ADMM method for solving large-scale convex optimization problems in a distributed manner. Our key contribution is a novel adaptive weighting scheme that empirically increases the progress made by the consensus ADMM scheme and is attractive when using a large number of subproblems. The weights are related to the uncertainty associated with the solutions of each subproblem, and are efficiently computed using low-rank approximations. We show that AUQ-ADMM provably converges and demonstrate its effectiveness on a series of machine learning applications, including elastic net regression, multinomial logistic regression, and support vector machines. We provide an implementation based on the PyTorch package.
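The skeleton below shows a generic weighted consensus ADMM loop in which the per-subproblem weights are simply passed in; the uncertainty-based, low-rank weight computation that distinguishes AUQ-ADMM is not reproduced, and all names and values are hypothetical.

import numpy as np

def weighted_consensus_admm(prox_ops, weights, dim, rho=1.0, iters=100):
    # prox_ops[i](v, t) must return argmin_x f_i(x) + (t/2) * ||x - v||^2.
    # weights[i] > 0 scales the penalty of subproblem i (in AUQ-ADMM these
    # would come from uncertainty estimates; here they are just inputs).
    N = len(prox_ops)
    x = np.zeros((N, dim)); u = np.zeros((N, dim)); z = np.zeros(dim)
    w = np.asarray(weights, dtype=float)
    for _ in range(iters):
        for i in range(N):                     # local solves (parallelizable)
            x[i] = prox_ops[i](z - u[i], rho * w[i])
        # Consensus update: weighted average of local variables plus duals.
        z = (w[:, None] * (x + u)).sum(axis=0) / w.sum()
        u += x - z                             # (scaled) dual updates
    return z

# Toy usage: f_i(x) = 0.5 * ||x - a_i||^2, whose prox has a closed form.
targets = [np.array([1.0, 0.0]), np.array([0.0, 2.0]), np.array([3.0, 3.0])]
prox_ops = [lambda v, t, a=a: (a + t * v) / (1.0 + t) for a in targets]
print(weighted_consensus_admm(prox_ops, weights=[1.0, 1.0, 2.0], dim=2))

With uniform weights this reduces to standard consensus ADMM; unequal weights change how strongly each subproblem is tied to the consensus variable during the iterations, without altering the problem being solved.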