
$H_2$ model reduction for diffusively coupled second-order networks by convex optimization

Published by: Xiaodong Cheng
Publication date: 2021
Language: English





This paper provides an $H_2$ optimal scheme for reducing diffusively coupled second-order systems evolving over undirected networks. The aim is to find a reduced-order model that not only approximates the input-output mapping of the original system but also preserves crucial structures, such as the second-order form, asymptotic stability, and diffusive couplings. To this end, an $H_2$ optimal approach based on a convex relaxation is implemented to reduce the dimension, yielding a lower-order asymptotically stable approximation of the original second-order network system. Then, a novel graph reconstruction approach is employed to convert the obtained model to a reduced system that is interpretable as an undirected diffusively coupled network. Finally, the effectiveness of the proposed method is illustrated via a large-scale networked mass-spring-damper system.
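For context on the model class (our notation, not quoted from the paper): such a diffusively coupled second-order network can be written as $M\ddot{x}(t) + D\dot{x}(t) + Kx(t) = Fu(t)$, $y(t) = Cx(t)$, where the coupling matrix $K$ contains a weighted graph Laplacian encoding the diffusive interconnections. The following minimal Python sketch (hypothetical parameter values; it only illustrates the model class and an $H_2$-norm evaluation, not the paper's convex-relaxation reduction or graph reconstruction) builds a mass-spring-damper chain of this form and computes its $H_2$ norm from a controllability Gramian.

# Minimal sketch (not the paper's method): a diffusively coupled
# mass-spring-damper chain  M x'' + D x' + K x = F u,  y = C [x; x'],
# lifted to first-order form, with its H2 norm evaluated via a Gramian.
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

n = 20                                       # number of masses (network nodes)
# path-graph Laplacian as the spring-coupling part of K
K = 2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
K[0, 0] = K[-1, -1] = 1.0
K += 0.1 * np.eye(n)                         # grounding springs -> asymptotic stability
M = np.eye(n)                                # unit masses
D = 0.5 * np.eye(n) + 0.2 * K                # damping: diagonal plus proportional part
F = np.zeros((n, 1)); F[0, 0] = 1.0          # external force on the first mass
C = np.zeros((1, 2 * n)); C[0, n - 1] = 1.0  # output: position of the last mass

# first-order realization  z' = A z + B u,  y = C z,  with z = [x; x']
Minv = np.linalg.inv(M)
A = np.block([[np.zeros((n, n)), np.eye(n)],
              [-Minv @ K,        -Minv @ D]])
B = np.vstack([np.zeros((n, 1)), Minv @ F])

# H2 norm: ||G||_{H2}^2 = trace(C P C^T), where  A P + P A^T + B B^T = 0
P = solve_continuous_lyapunov(A, -B @ B.T)
print("H2 norm of the full model:", float(np.sqrt(np.trace(C @ P @ C.T))))

The approximation error $\|G - \hat{G}\|_{H_2}$ between the full model and a candidate reduced model could be evaluated in the same way on the error system; this is the quantity the reduction scheme seeks to make small.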




Read also

In this paper we study second-order optimality conditions for non-convex set-constrained optimization problems. For a convex set-constrained optimization problem, it is well-known that second-order optimality conditions involve the support function of the second-order tangent set. In this paper we propose two approaches for establishing second-order optimality conditions for the non-convex case. In the first approach we extend the concept of the support function so that it is applicable to general non-convex set-constrained problems, whereas in the second approach we introduce the notion of the directional regular tangent cone and apply classical results of convex duality theory. Besides the second-order optimality conditions, the novelty of our approach lies in the systematic introduction and use, respectively, of direction
We consider linear optimization over a fixed compact convex feasible region that is semi-algebraic (or, more generally, tame). Generically, we prove that the optimal solution is unique and lies on a unique manifold, around which the feasible region is partly smooth, ensuring finite identification of the manifold by many optimization algorithms. Furthermore, second-order optimality conditions hold, guaranteeing smooth behavior of the optimal solution under small perturbations to the objective.
We describe an active-set method for the minimization of an objective function $\phi$ that is the sum of a smooth convex function and an $\ell_1$-regularization term. A distinctive feature of the method is the way in which active-set identification and second-order subspace minimization steps are integrated to combine the predictive power of the two approaches. At every iteration, the algorithm selects a candidate set of free and fixed variables, performs an (inexact) subspace phase, and then assesses the quality of the new active set. If it is not judged to be acceptable, then the set of free variables is restricted and a new active-set prediction is made. We establish global convergence for our approach, and compare the new method against the state-of-the-art code LIBLINEAR.
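As a generic illustration of the active-set prediction step described above (a sketch of a standard proximal-gradient heuristic for $\ell_1$-regularized problems, with hypothetical names; not necessarily the selection rule used in the paper): a soft-thresholding trial step is taken, variables driven to zero are predicted fixed, and the remainder are predicted free for the subspace phase.

import numpy as np

def soft_threshold(v, t):
    # proximal operator of t * ||.||_1
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def predict_active_set(x, grad_f_x, lam, alpha=1.0):
    # ISTA-style trial point: its zeros predict the fixed set, nonzeros the free set
    trial = soft_threshold(x - alpha * grad_f_x, alpha * lam)
    free = trial != 0.0
    return free, ~free

The (inexact) subspace minimization would then act only on the free variables, and the quality of the prediction is re-assessed at the next iterate, as the abstract describes.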
The usual approach to developing and analyzing first-order methods for smooth convex optimization assumes that the gradient of the objective function is uniformly smooth with some Lipschitz constant $L$. However, in many settings the differentiable convex function $f(\cdot)$ is not uniformly smooth -- for example in $D$-optimal design where $f(x) := -\ln\det(HXH^T)$, or even the univariate setting with $f(x) := -\ln(x) + x^2$. Herein we develop a notion of relative smoothness and relative strong convexity that is determined relative to a user-specified reference function $h(\cdot)$ (that should be computationally tractable for algorithms), and we show that many differentiable convex functions are relatively smooth with respect to a correspondingly fairly simple reference function $h(\cdot)$. We extend two standard algorithms -- the primal gradient scheme and the dual averaging scheme -- to our new setting, with associated computational guarantees. We apply our new approach to develop a new first-order method for the $D$-optimal design problem, with associated computational complexity analysis. Some of our results have a certain overlap with the recent work \cite{bbt}.
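As a pointer for readers (our notation, not quoted from the paper), these notions are commonly formalized via the Bregman divergence $D_h(y,x) := h(y) - h(x) - \langle \nabla h(x), y - x \rangle$: $f$ is $L$-smooth relative to $h$ if $f(y) \le f(x) + \langle \nabla f(x), y - x \rangle + L\, D_h(y,x)$ for all $x, y$ in the domain, and $\mu$-strongly convex relative to $h$ if the reverse inequality $f(y) \ge f(x) + \langle \nabla f(x), y - x \rangle + \mu\, D_h(y,x)$ holds. Taking $h(x) = \tfrac{1}{2}\|x\|^2$ recovers the usual Lipschitz-gradient and strong-convexity conditions.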
We study gradient-based optimization methods obtained by direct Runge-Kutta discretization of the ordinary differential equation (ODE) describing the movement of a heavy ball under a constant friction coefficient. When the function is high-order smooth and strongly convex, we show that directly simulating the ODE with known numerical integrators achieves acceleration in a nontrivial neighborhood of the optimal solution. In particular, the neighborhood can grow larger as the condition number of the function increases. Furthermore, our results also hold for nonconvex but quasi-strongly convex objectives. We provide numerical experiments that verify the theoretical rates predicted by our results.
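For reference (our notation, with a hypothetical step size and test objective; not the specific integrators, smoothness orders, or rates analyzed in the paper), the heavy-ball ODE in question is $\ddot{x}(t) + \gamma \dot{x}(t) + \nabla f(x(t)) = 0$, and a direct Runge-Kutta discretization of its first-order form can be sketched in Python as follows.

# Sketch: simulate  x'' + gamma * x' + grad f(x) = 0  with classical RK4
# on the first-order form z = (x, v), for a simple quadratic test objective.
import numpy as np

Q = np.diag([1.0, 10.0])                 # f(x) = 0.5 * x^T Q x, condition number 10

def grad_f(x):
    return Q @ x

def ode(z, gamma):
    x, v = np.split(z, 2)
    return np.concatenate([v, -gamma * v - grad_f(x)])

def rk4_step(z, h, gamma):
    k1 = ode(z, gamma)
    k2 = ode(z + 0.5 * h * k1, gamma)
    k3 = ode(z + 0.5 * h * k2, gamma)
    k4 = ode(z + h * k3, gamma)
    return z + (h / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)

z = np.concatenate([np.array([5.0, -3.0]), np.zeros(2)])  # start at x0 with zero velocity
for _ in range(200):
    z = rk4_step(z, h=0.1, gamma=1.0)
print("iterate after 200 RK4 steps:", z[:2])               # approaches the minimizer 0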
