
Lazy global feedbacks for quantized nonlinear event systems

Published by: Oliver Junge
Publication date: 2012
Language: English





We consider nonlinear event systems with quantized state information and design a globally stabilizing controller from which only the minimal required number of control value changes along the feedback trajectory from a given initial condition is transmitted to the plant. In addition, we present a non-optimal heuristic approach which may reduce the number of control value changes and requires a lower computational effort. The constructions are illustrated by two numerical examples.
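The minimal-switching idea can be sketched as a shortest-path search over a quantized transition graph, where the path cost counts control value changes rather than steps. This is a toy illustration only; the graph, control labels, and function names below are illustrative assumptions, not the paper's construction:

```python
import heapq

def min_switch_path(edges, start, target):
    """Dijkstra over (cell, last control) pairs. An edge costs 1 when
    the control value changes and 0 otherwise, so the cheapest path
    uses the fewest control switches (the first control application
    counts as one change)."""
    pq = [(0, start, None)]               # (switches so far, cell, last control)
    best = {}
    while pq:
        cost, cell, u_prev = heapq.heappop(pq)
        if cell == target:
            return cost
        if best.get((cell, u_prev), float("inf")) <= cost:
            continue
        best[(cell, u_prev)] = cost
        for (u, nxt) in edges.get(cell, []):
            step = 0 if u == u_prev else 1
            heapq.heappush(pq, (cost + step, nxt, u))
    return None

# toy quantized system: cells 0..3, control values "a"/"b"
edges = {0: [("a", 1)], 1: [("a", 2), ("b", 3)], 2: [("b", 3)]}
print(min_switch_path(edges, 0, 3))  # → 2 (apply "a", then switch to "b")
```

Searching over (cell, last control) pairs rather than cells alone is what lets the zero-cost "keep the same control" edges be exploited.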


Read also

In this paper the optimal control of alignment models composed of a large number of agents is investigated in the presence of a selective action of a controller, acting in order to enhance consensus. Two types of selective controls are presented: a homogeneous control filtered by a selective function and a distributed control active only on a selective set. As a first step toward a reduction of computational cost, we introduce a model predictive control (MPC) approximation by deriving a numerical scheme with a feedback selective constrained dynamics. Next, in order to cope with the numerical solution of a large number of interacting agents, we derive the mean-field limit of the feedback selective constrained dynamics, which is eventually solved numerically by means of a stochastic algorithm able to simulate the selective constrained dynamics efficiently. Finally, several numerical simulations are reported to show the efficiency of the proposed techniques.
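The selective-control idea can be illustrated on a minimal alignment model: all agents relax toward the mean velocity, and an extra feedback control acts only on a selected subset. The dynamics, gains, and selection rule below are illustrative assumptions, not the paper's model:

```python
import numpy as np

def step(v, dt=0.1, kappa=2.0, sel_frac=0.3):
    """One explicit Euler step of a toy velocity-alignment model with a
    selective feedback control: only the agents whose velocity deviates
    most from the mean (the selective set) receive the control."""
    n = len(v)
    # alignment term (all-to-all, constant interaction weights)
    dv = v.mean() - v
    # selective control: act only on the worst-aligned fraction of agents
    k = max(1, int(sel_frac * n))
    idx = np.argsort(-np.abs(v - v.mean()))[:k]
    u = np.zeros(n)
    u[idx] = kappa * (v.mean() - v[idx])
    return v + dt * (dv + u)

rng = np.random.default_rng(0)
v = rng.normal(size=50)
for _ in range(200):
    v = step(v)
print(np.std(v))  # velocity spread shrinks toward consensus
```

An MPC variant would replace the one-step feedback by optimizing the control over a short receding horizon; the mean-field limit mentioned in the abstract replaces the particle system by a density equation.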
Chris Thron, Ahsan Aziz (2014)
In this paper, we present a novel solution for optimal beamforming in two-way relay (TWR) systems with perfect channel state information. The solution makes use of properties of quadratic surfaces to simplify the solution space of the problem to $\mathbb{R}^4$, and enables the formulation of a differential equation that can be solved numerically to obtain the optimal beamforming matrix.
We propose an approach for the synthesis of robust and optimal feedback controllers for nonlinear PDEs. Our approach considers the approximation of infinite-dimensional control systems by a pseudospectral collocation method, leading to high-dimensional nonlinear dynamics. For the reduced-order model, we construct a robust feedback control based on the $\mathcal{H}_\infty$ control method, which requires the solution of an associated high-dimensional Hamilton-Jacobi-Isaacs nonlinear PDE. The dimensionality of the Isaacs PDE is tackled by means of a separable representation of the control system, and a polynomial approximation ansatz for the corresponding value function. Our method proves to be effective for the robust stabilization of nonlinear dynamics up to dimension $d \approx 12$. We assess the robustness and optimality features of our design over a class of nonlinear parabolic PDEs, including nonlinear advection and reaction terms. The proposed design yields a feedback controller achieving optimal stabilization and disturbance rejection properties, along with providing a modelling framework for the robust control of PDEs under parametric uncertainties.
Yongxin Chen (2021)
We consider the covariance steering problem for nonlinear control-affine systems. Our objective is to find an optimal control strategy to steer the state of a system from an initial distribution to a target one whose mean and covariance are given. Due to the nonlinearity, the existing techniques for linear covariance steering problems are not directly applicable. By leveraging the celebrated Girsanov theorem, we formulate the problem as an optimization over the space of path distributions. We then adopt a generalized proximal gradient algorithm to solve this optimization, where each update requires solving a linear covariance steering problem. Our algorithm is guaranteed to converge to a local optimal solution with a sublinear rate. In addition, each iteration of the algorithm can be achieved in closed form, and thus its computational complexity is insensitive to the resolution of the time-discretization.
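The structure of a generalized proximal gradient iteration can be shown on a toy problem. In the paper each proximal step amounts to solving a linear covariance steering problem; here, purely to show the shape of the iteration, the proximal step is the soft-thresholding operator of a one-dimensional lasso problem (all names and the toy objective are illustrative assumptions):

```python
import numpy as np

def prox_grad(grad_f, prox_g, x0, step, iters=200):
    """Generic proximal gradient iteration: x+ = prox_g(x - step * grad_f(x)).
    The smooth part f is handled by a gradient step, the nonsmooth part g
    by its proximal operator."""
    x = x0
    for _ in range(iters):
        x = prox_g(x - step * grad_f(x), step)
    return x

# toy problem: min_x 0.5*(x - 3)^2 + lam*|x|, whose minimizer is max(3 - lam, 0)
lam = 1.0
grad_f = lambda x: x - 3.0                                        # gradient of the smooth part
prox_g = lambda z, t: np.sign(z) * np.maximum(np.abs(z) - t * lam, 0.0)  # soft-thresholding
x_star = prox_grad(grad_f, prox_g, x0=0.0, step=0.5)
print(round(float(x_star), 3))  # → 2.0
```

Swapping the toy prox for a solver of the linear covariance steering subproblem gives the outline of the paper's scheme, with each iterate a full path distribution rather than a scalar.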
In a series of recent theoretical works, it was shown that strongly over-parameterized neural networks trained with gradient-based methods could converge exponentially fast to zero training loss, with their parameters hardly varying. In this work, we show that this lazy training phenomenon is not specific to over-parameterized neural networks, and is due to a choice of scaling, often implicit, that makes the model behave as its linearization around the initialization, thus yielding a model equivalent to learning with positive-definite kernels. Through a theoretical analysis, we exhibit various situations where this phenomenon arises in non-convex optimization and we provide bounds on the distance between the lazy and linearized optimization paths. Our numerical experiments bring a critical note, as we observe that the performance of commonly used non-linear deep convolutional neural networks in computer vision degrades when trained in the lazy regime. This makes it unlikely that lazy training is behind the many successes of neural networks in difficult high dimensional tasks.
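The scaling mechanism behind lazy training can be demonstrated on a one-parameter toy model: fitting $\alpha(f(w) - f(w_0))$ to a target with gradient descent moves the parameter by roughly $O(1/\alpha)$, so for large $\alpha$ the model stays close to its linearization around the initialization. The quadratic "model" $f(w)=w^2$ and all constants below are illustrative assumptions, not the paper's setting:

```python
def train(alpha, steps=200, lr=0.1):
    """Fit y = alpha * (f(w) - f(w0)) to a scalar target by gradient
    descent, where f(w) = w**2 is a toy nonlinear model. Returns the
    total parameter displacement |w - w0|; larger alpha yields a
    smaller displacement (the 'lazy' regime)."""
    w0, y = 1.0, 1.0
    f = lambda w: w ** 2
    w = w0
    for _ in range(steps):
        r = alpha * (f(w) - f(w0)) - y   # residual of the scaled model
        grad = r * alpha * 2 * w         # d/dw of the loss 0.5 * r**2
        # the lr / alpha**2 factor keeps the function-space step size
        # comparable across alpha, as in lazy-training analyses
        w -= lr / alpha ** 2 * grad
    return abs(w - w0)

print(train(alpha=1.0) > train(alpha=100.0))  # → True: large alpha barely moves w
```

Both runs drive the training loss to zero, but at $\alpha = 100$ the parameter moves by only about $1/(2\alpha)$, which is the "parameters hardly varying" phenomenon the abstract refers to.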