
Optimal Control Via Neural Networks: A Convex Approach

Posted by: Yize Chen
Publication date: 2018
Research field:
Paper language: English





Control of complex systems involves both system identification and controller design. Deep neural networks have proven successful in many identification tasks; however, from a model-based control perspective, these networks are difficult to work with because they are typically nonlinear and nonconvex. Therefore, many systems are still identified and controlled based on simple linear models despite their poor representation capability. In this paper we bridge the gap between model accuracy and control tractability faced by neural networks by explicitly constructing networks that are convex with respect to their inputs. We show that these input convex networks can be trained to obtain accurate models of complex physical systems. In particular, we design input convex recurrent neural networks to capture the temporal behavior of dynamical systems. Optimal controllers can then be obtained by solving a convex model predictive control problem. Experimental results demonstrate the potential of the proposed input convex neural network approach in a variety of control applications. In particular, we show that on the MuJoCo locomotion tasks we achieve over 10% higher performance using 5x less time compared with a state-of-the-art model-based reinforcement learning method, and in the building HVAC control example our method achieves up to 20% energy reduction compared with classic linear models.
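To make the construction concrete, below is a minimal PyTorch sketch of a feedforward input convex neural network in the spirit of the networks described above: the hidden-to-hidden weights are kept non-negative and the activation is convex and non-decreasing, so the scalar output is convex in the input x. The layer sizes, depth, and the clamp-after-each-optimizer-step trick are illustrative assumptions, not the authors' exact (recurrent) architecture.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class InputConvexNN(nn.Module):
        """Feedforward network whose scalar output is convex in its input x."""
        def __init__(self, x_dim, hidden=64, depth=3):
            super().__init__()
            # Unconstrained "passthrough" maps from the input x into every layer.
            self.Wx = nn.ModuleList(
                [nn.Linear(x_dim, hidden) for _ in range(depth)] + [nn.Linear(x_dim, 1)])
            # Hidden-to-hidden maps; their weights are kept non-negative, which
            # preserves convexity through the composition with ReLU.
            self.Wz = nn.ModuleList(
                [nn.Linear(hidden, hidden, bias=False) for _ in range(depth - 1)]
                + [nn.Linear(hidden, 1, bias=False)])

        def forward(self, x):
            z = F.relu(self.Wx[0](x))
            for wz, wx in zip(self.Wz[:-1], self.Wx[1:-1]):
                z = F.relu(wz(z) + wx(x))
            return self.Wz[-1](z) + self.Wx[-1](x)

        def clamp_convex(self):
            # Call after each optimizer step to keep the hidden weights non-negative.
            with torch.no_grad():
                for layer in self.Wz:
                    layer.weight.clamp_(min=0.0)

Because a model of this form is convex in the control inputs, the downstream model predictive control problem remains convex and can be solved to global optimality at every time step.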




Read also

Pengcheng Zhao, Shankar Mohan, 2016
This paper addresses the problem of control synthesis for nonlinear optimal control problems in the presence of state and input constraints. The presented approach relies upon transforming the given problem into an infinite-dimensional linear program over the space of measures. To generate approximations to this infinite-dimensional program, a sequence of Semi-Definite Programs (SDPs) is formulated in the case of polynomial cost and dynamics with semi-algebraic state and bounded input constraints. A method to extract a polynomial control function from each SDP is also given. This paper proves that the controller synthesized from each of these SDPs generates a sequence of values that converge from below to the value of the optimal control of the original optimal control problem. In contrast to existing approaches, the presented method does not assume that the optimal control is continuous while still proving that the sequence of approximations is optimal. Moreover, the sequence of controllers synthesized using the presented approach is proven to converge to the true optimal control. The performance of the presented method is demonstrated on three examples.
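For orientation, the infinite-dimensional linear program mentioned above is commonly written over an occupation measure mu on time-state-control triples and a terminal measure mu_T; a schematic form, following the standard occupation-measure relaxation literature (the exact constraint set in the cited paper may differ), is

    \begin{aligned}
    \min_{\mu,\,\mu_T}\quad & \int L(t,x,u)\,d\mu(t,x,u) + \int L_T(x)\,d\mu_T(x)\\
    \text{s.t.}\quad & \int v(T,x)\,d\mu_T(x) = v(0,x_0) + \int \Big(\frac{\partial v}{\partial t} + \nabla_x v \cdot f(t,x,u)\Big)\,d\mu(t,x,u)
    \qquad \forall\, v \in C^1([0,T]\times X).
    \end{aligned}

Restricting the test functions v to polynomials and representing the measures by finitely many moments then yields the sequence of SDPs described above.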
The increasing penetration of renewables in distribution networks calls for faster and more advanced voltage regulation strategies. A promising approach is to formulate the problem as an optimization problem, where the optimal reactive power injections from inverters are calculated to maintain the voltages while satisfying power network constraints. However, existing optimization algorithms require the exact topology and line parameters of the underlying distribution system, which are unknown in most cases and are difficult to infer. In this paper, we propose to use a specifically designed neural network to tackle the learning and optimization problems together. In the training stage, the proposed input convex neural network learns the mapping between the power injections and the voltages. In the voltage regulation stage, the trained network can find the optimal reactive power injections by design. We also provide a practical distributed algorithm that uses the trained neural network. Theoretical bounds on the representation performance and learning efficiency of the proposed model are also discussed. Numerical simulations on multiple test systems are conducted to illustrate the operation of the algorithm.
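A minimal sketch of the regulation step, reusing the InputConvexNN class sketched earlier: assuming (for illustration only) that the network has been trained to predict a scalar voltage-deviation cost from the reactive power injections, and assuming box limits, step size, and iteration count as shown, the optimal injections can be found by projected gradient descent, which reaches the global minimum because both the objective and the constraint set are convex.

    def optimize_injections(model, q_init, q_min=-1.0, q_max=1.0, steps=500, lr=1e-2):
        """Minimize the convex network output over its input by projected gradient descent."""
        q = q_init.detach().clone().requires_grad_(True)
        opt = torch.optim.SGD([q], lr=lr)
        for _ in range(steps):
            opt.zero_grad()
            cost = model(q).sum()   # convex in q by construction of the network
            cost.backward()
            opt.step()
            with torch.no_grad():
                q.clamp_(min=q_min, max=q_max)  # project back onto the feasible box
        return q.detach()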
This paper considers the optimal control of hybrid systems whose trajectories transition between distinct subsystems when state-dependent constraints are satisfied. Though this class of systems is useful for modeling a variety of physical systems undergoing contact, the construction of a numerical method for their optimal control has proven challenging due to the combinatorial nature of the state-dependent switching and the potential discontinuities that arise during switches. This paper constructs a convex relaxation-based approach to solve this optimal control problem. Our approach begins by formulating the problem in the space of relaxed controls, which gives rise to a linear program whose solution is proven to compute the globally optimal controller. This conceptual program is solved by constructing a sequence of semidefinite programs whose solutions are proven to converge from below to the true solution of the original optimal control problem. Finally, a method to synthesize the optimal controller is developed. Using an array of examples, the performance of the proposed method is validated on problems with known solutions and also compared to a commercial solver.
We propose a neural network approach for solving high-dimensional optimal control problems arising in real-time applications. Our approach yields controls in a feedback form and can therefore handle uncertainties such as perturbations to the system's state. We accomplish this by fusing the Pontryagin Maximum Principle (PMP) and Hamilton-Jacobi-Bellman (HJB) approaches and parameterizing the value function with a neural network. We train our neural network model using the objective function of the control problem and penalty terms that enforce the HJB equations. Therefore, our training algorithm does not involve data generated by another algorithm. By training on a distribution of initial states, we ensure the control's optimality on a large portion of the state space. Our grid-free approach scales efficiently to dimensions where grids become impractical or infeasible. We demonstrate the effectiveness of our approach on several multi-agent collision-avoidance problems in up to 150 dimensions. Furthermore, we empirically observe that the number of parameters in our approach scales linearly with the dimension of the control problem, thereby mitigating the curse of dimensionality.
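As a toy illustration of the HJB-penalty idea, the sketch below trains a value network on randomly sampled (t, x) collocation points for the simple problem with dynamics x' = u, running cost 0.5(|x|^2 + |u|^2), and zero terminal cost, for which the minimizing control is u* = -grad_x V. The toy problem, sampling ranges, and network size are assumptions for illustration, and the sketch keeps only the HJB-residual and terminal-condition penalties, not the full training objective described above.

    import torch
    import torch.nn as nn

    class ValueNet(nn.Module):
        """Small MLP parameterizing the value function V(t, x)."""
        def __init__(self, x_dim, hidden=32):
            super().__init__()
            self.net = nn.Sequential(
                nn.Linear(x_dim + 1, hidden), nn.Tanh(),
                nn.Linear(hidden, hidden), nn.Tanh(),
                nn.Linear(hidden, 1))

        def forward(self, t, x):
            return self.net(torch.cat([t, x], dim=-1))

    def hjb_penalty(model, batch=256, T=1.0, x_dim=2):
        # Collocation points sampled in time and state.
        t = (T * torch.rand(batch, 1)).requires_grad_(True)
        x = (4.0 * torch.rand(batch, x_dim) - 2.0).requires_grad_(True)
        V = model(t, x)
        Vt, Vx = torch.autograd.grad(V.sum(), (t, x), create_graph=True)
        # With xdot = u and cost 0.5*(|x|^2 + |u|^2), the optimal control is u* = -Vx
        # and the HJB equation reduces to  Vt + 0.5*|x|^2 - 0.5*|Vx|^2 = 0.
        residual = Vt + 0.5 * (x ** 2).sum(dim=1, keepdim=True) \
                      - 0.5 * (Vx ** 2).sum(dim=1, keepdim=True)
        # Terminal condition V(T, x) = 0 (no terminal cost in this toy problem).
        VT = model(torch.full((batch, 1), T), x.detach())
        return (residual ** 2).mean() + (VT ** 2).mean()

    model = ValueNet(x_dim=2)
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    for step in range(2000):
        opt.zero_grad()
        hjb_penalty(model).backward()
        opt.step()

The feedback control is then read off as u(t, x) = -grad_x V(t, x), which is what gives this family of methods its robustness to perturbations of the state.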
We propose a neural network approach for solving high-dimensional optimal control problems. In particular, we focus on multi-agent control problems with obstacle and collision avoidance. These problems immediately become high-dimensional, even for moderate phase-space dimensions per agent. Our approach fuses the Pontryagin Maximum Principle and Hamilton-Jacobi-Bellman (HJB) approaches and parameterizes the value function with a neural network. Our approach yields controls in a feedback form for quick calculation and robustness to moderate disturbances to the system. We train our model using the objective function and optimality conditions of the control problem. Therefore, our training algorithm neither involves a data generation phase nor solutions from another algorithm. Our model uses empirically effective HJB penalizers for efficient training. By training on a distribution of initial states, we ensure the control's optimality is achieved on a large portion of the state space. Our approach is grid-free and scales efficiently to dimensions where grids become impractical or infeasible. We demonstrate our approach's effectiveness on a 150-dimensional multi-agent problem with obstacles.