Suspension regulation is critical to the operation of medium-low-speed maglev trains (mlsMTs). Because of the uncertain environment, strong disturbances, and the high nonlinearity of the system dynamics, this problem is not well handled by most model-based controllers. In this paper, we propose a model-free controller by reformulating the regulation problem as a continuous-state, continuous-action Markov decision process (MDP) with unknown transition probabilities. Using the deterministic policy gradient and neural-network approximation, we design reinforcement learning (RL) algorithms that solve the MDP and yield a state-feedback controller from sampled data of the suspension system. To further improve performance, we adopt a double Q-learning scheme for learning the regulation controller. Using a real dataset from the mlsMT in Changsha, China, we show via simulations that the proposed controllers outperform the existing PID controller and are even comparable to model-based controllers that assume complete knowledge of the model.
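The abstract pairs a deterministic policy gradient with neural-network approximation and a double Q-learning scheme for the continuous-state, continuous-action MDP. The sketch below illustrates what one such actor-critic update could look like; it is a minimal illustration under assumed state/action dimensions, network sizes, and hyperparameters (written with PyTorch), not the authors' implementation, and the random placeholder batch stands in for transitions sampled from the real suspension loop.

# Minimal sketch (assumptions, not the paper's code): deterministic-policy-gradient
# actor with twin critics (double Q-learning), trained from sampled transitions.
import torch
import torch.nn as nn

STATE_DIM, ACTION_DIM = 3, 1      # assumed: e.g. (gap error, gap rate, coil current); voltage action
GAMMA, TAU = 0.99, 0.005          # illustrative discount factor and soft-update rate

def mlp(in_dim, out_dim):
    return nn.Sequential(nn.Linear(in_dim, 64), nn.ReLU(),
                         nn.Linear(64, 64), nn.ReLU(),
                         nn.Linear(64, out_dim))

actor        = mlp(STATE_DIM, ACTION_DIM)            # deterministic state-feedback policy u = mu(x)
actor_target = mlp(STATE_DIM, ACTION_DIM)
critics        = [mlp(STATE_DIM + ACTION_DIM, 1) for _ in range(2)]   # twin Q-networks
critic_targets = [mlp(STATE_DIM + ACTION_DIM, 1) for _ in range(2)]
actor_target.load_state_dict(actor.state_dict())
for c, ct in zip(critics, critic_targets):
    ct.load_state_dict(c.state_dict())

actor_opt  = torch.optim.Adam(actor.parameters(), lr=1e-4)
critic_opt = torch.optim.Adam([p for c in critics for p in c.parameters()], lr=1e-3)

def update(batch):
    """One double-Q / deterministic-policy-gradient update from a sampled batch."""
    s, a, r, s_next = batch                                   # tensors drawn from a replay buffer
    with torch.no_grad():
        a_next = actor_target(s_next)
        # double Q-learning: bootstrap with the smaller of the two target critics
        q_next = torch.min(*[ct(torch.cat([s_next, a_next], dim=1)) for ct in critic_targets])
        target = r + GAMMA * q_next
    critic_loss = sum(nn.functional.mse_loss(c(torch.cat([s, a], dim=1)), target) for c in critics)
    critic_opt.zero_grad(); critic_loss.backward(); critic_opt.step()

    # deterministic policy gradient: push the actor toward actions the first critic values highly
    actor_loss = -critics[0](torch.cat([s, actor(s)], dim=1)).mean()
    actor_opt.zero_grad(); actor_loss.backward(); actor_opt.step()

    # targets slowly track the online networks
    for net, tgt in [(actor, actor_target)] + list(zip(critics, critic_targets)):
        for p, tp in zip(net.parameters(), tgt.parameters()):
            tp.data.mul_(1 - TAU).add_(TAU * p.data)

# placeholder batch standing in for data sampled from the suspension system
batch = (torch.randn(32, STATE_DIM), torch.randn(32, ACTION_DIM),
         torch.randn(32, 1), torch.randn(32, STATE_DIM))
update(batch)

The resulting actor network is itself the state-feedback regulation controller; no model of the suspension dynamics enters the update, which is what makes the scheme model-free.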