This paper presents a novel AI-based approach for maximizing time-series available transfer capabilities (ATCs) via autonomous topology control, considering various practical constraints and uncertainties. Several AI techniques, including supervised learning and deep reinforcement learning (DRL), are adopted and improved to train effective AI agents that achieve the desired performance. First, imitation learning (IL) is used to provide a good initial policy for the AI agent. Then, the agent is trained by DRL algorithms with a novel guided exploration technique, which significantly improves training efficiency. Finally, an Early Warning (EW) mechanism is designed to help the agent find good topology control strategies over long testing periods; it uses power system domain knowledge to determine action timing, effectively increasing the system's error tolerance and robustness. The effectiveness of the proposed approach was demonstrated in the 2019 Learn to Run a Power Network (L2RPN) global competition, where the developed AI agents continuously and safely controlled a power grid to maximize ATCs without operator intervention for up to one month of operation data, winning first place in both the development and final phases of the competition. The winning agent has been open-sourced on GitHub.
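The imitation-learning warm start described above can be sketched in a few lines: an "expert" labels grid states with topology actions, and a softmax policy is fit to those labels by behaviour cloning before DRL fine-tuning begins. The feature dimension, action set, and linear expert below are hypothetical stand-ins for illustration, not the competition agent.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: 4 grid-state features, 3 candidate topology actions.
# A stand-in "expert" (e.g. a rule-based controller) labels states with
# actions; imitation learning fits a softmax policy to these labels.
n_features, n_actions = 4, 3
W_expert = rng.normal(size=(n_features, n_actions))
states = rng.normal(size=(500, n_features))
expert_actions = (states @ W_expert).argmax(axis=1)

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

# Behaviour cloning: gradient descent on cross-entropy vs. expert labels.
W = np.zeros((n_features, n_actions))
for _ in range(300):
    grad_z = softmax(states @ W)
    grad_z[np.arange(len(states)), expert_actions] -= 1.0
    W -= 0.1 * states.T @ grad_z / len(states)

# Training accuracy of the cloned policy; this is the "good initial
# policy" handed to the DRL stage in the approach above.
accuracy = (softmax(states @ W).argmax(axis=1) == expert_actions).mean()
```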
Signal processing and machine learning algorithms for data supported over graphs require knowledge of the graph topology. Unless this information is given by the physics of the problem (e.g., water supply networks, power grids), the topology has to be learned from data. Topology identification is a challenging task, as the problem is often ill-posed, and it becomes even harder when the graph structure is time-varying. In this paper, we address the problem of dynamic topology identification by building on recent results from time-varying optimization, devising a general-purpose online algorithm that operates in non-stationary environments. Because of its iteration-constrained nature, the proposed approach exhibits an intrinsic temporal regularization of the graph topology without explicitly enforcing it. As a case study, we specialize our method to the Gaussian graphical model (GGM) problem and corroborate its performance.
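A minimal sketch of the GGM case study, assuming an ISTA-style proximal-gradient update: at each time step the previous precision-matrix estimate is warm-started and refined with only one iteration on the new sample covariance, so the iteration constraint itself smooths the topology estimate over time. The step size, sparsity penalty, and positive-definiteness safeguard below are illustrative choices, not the paper's.

```python
import numpy as np

rng = np.random.default_rng(1)

def soft_threshold(X, tau):
    return np.sign(X) * np.maximum(np.abs(X) - tau, 0.0)

def online_ggm_step(theta, S, step=0.05, lam=0.1):
    """One ISTA step on the graphical-lasso objective
    -logdet(theta) + tr(S @ theta) + lam * ||theta||_1 (off-diagonal)."""
    grad = S - np.linalg.inv(theta)          # gradient of the smooth part
    theta = theta - step * grad
    shrunk = soft_threshold(theta, step * lam)
    np.fill_diagonal(shrunk, np.diag(theta)) # do not shrink the diagonal
    theta = 0.5 * (shrunk + shrunk.T)        # keep the estimate symmetric
    # Project back to the PD cone (illustrative safeguard).
    w, V = np.linalg.eigh(theta)
    return (V * np.maximum(w, 1e-3)) @ V.T

# Slowly varying ground-truth precision matrix -> streaming samples.
p = 5
theta_hat = np.eye(p)
for t in range(200):
    true_prec = np.eye(p) + 0.3 * np.sin(0.01 * t) * (np.eye(p, k=1)
                                                      + np.eye(p, k=-1))
    x = rng.multivariate_normal(np.zeros(p), np.linalg.inv(true_prec),
                                size=20)
    S = x.T @ x / len(x)
    # Warm-starting theta_hat is what yields the implicit temporal
    # regularization: only one iteration per sample covariance.
    theta_hat = online_ggm_step(theta_hat, S)
```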
Epidemic control is of great importance for human society. Adjusting interacting partners is an effective individualized control strategy. Intuitively, it is done either by shortening the interaction time between susceptible and infected individuals or by increasing the opportunities for contact between susceptible individuals. Here, we provide a comparative study of these two control strategies by establishing an epidemic model with non-uniform stochastic interactions. At first glance, the two strategies may seem equivalent, since shortening the interaction time between susceptible and infected individuals indirectly increases the chances of contact between susceptible individuals. However, analytical results indicate that the effectiveness of the former strategy depends sensitively on the infectious intensity and on the combinations of different interaction rates, whereas the latter is quite robust and efficient. Simulation results are presented in comparison with our analytical predictions. Our work may shed light on the strategic choice of disease control.
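The sensitivity of the first strategy to the interaction rate can be illustrated with a mean-field toy model in which the effective transmission rate scales with the susceptible-infected interaction rate. This is a hypothetical simplification for illustration only, not the paper's non-uniform stochastic interaction model; all parameter values below are arbitrary.

```python
def simulate_sis(beta, gamma, w_si, steps=10000, dt=0.01, i0=0.01):
    """Euler-integrated mean-field SIS dynamics: the infected fraction
    grows at rate beta * w_si * i * (1 - i) and recovers at rate gamma,
    where w_si is the susceptible-infected interaction rate."""
    i = i0
    for _ in range(steps):
        i += dt * (beta * w_si * i * (1.0 - i) - gamma * i)
    return i

# Strategy 1 (shorten S-I interaction time) modeled as halving w_si;
# the endemic level 1 - gamma / (beta * w_si) shifts accordingly.
baseline = simulate_sis(beta=0.8, gamma=0.3, w_si=1.0)
shortened = simulate_sis(beta=0.8, gamma=0.3, w_si=0.5)
```

In this toy setting the endemic infected fraction drops from roughly 1 - 0.3/0.8 to 1 - 0.3/0.4, showing how strongly the outcome depends on the combination of infectious intensity and interaction rate, consistent with the sensitivity noted above.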
Recent advancements in cloud services, the Internet of Things (IoT), and cellular networks have made cloud computing an attractive option for intelligent traffic signal control (ITSC). Cloud-based ITSC significantly reduces the cost of cabling, installation, devices, and maintenance, and makes it possible to scale the system by utilizing existing, powerful cloud platforms. While such systems have significant potential, one critical problem that must be addressed is network delay. Delay in message propagation is hard to prevent and can degrade system performance or even create safety issues for vehicles at intersections. In this paper, we introduce a new traffic signal control algorithm based on reinforcement learning, which performs well even under severe network delay. The framework introduced in this paper can benefit any agent-based system that uses remote computing resources and for which network delay is a critical concern. Extensive simulation results obtained for different scenarios show the viability of the designed algorithm in coping with network delay.
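One common way to make an RL controller tolerant of delayed observations is to augment the (stale) observation with the agent's own recent actions, so the state remains Markovian despite the lag. The sketch below shows this idea with a tabular Q-learning agent; it is an illustrative construction under that assumption, not the paper's algorithm, and the class and parameter names are hypothetical.

```python
import random
from collections import deque

class DelayTolerantAgent:
    """Tabular Q-learning agent whose state is the delayed observation
    plus its own last `delay` actions, a standard trick for control
    under fixed observation delay (illustrative sketch)."""

    def __init__(self, n_actions, delay, eps=0.1, alpha=0.5, gamma=0.9):
        self.q = {}                               # state -> {action: value}
        self.n_actions, self.delay = n_actions, delay
        self.eps, self.alpha, self.gamma = eps, alpha, gamma
        self.history = deque([0] * delay, maxlen=delay)

    def state_key(self, delayed_obs):
        # Delayed observation + recent action history = augmented state.
        return (delayed_obs, tuple(self.history))

    def act(self, delayed_obs):
        key = self.state_key(delayed_obs)
        if random.random() < self.eps or key not in self.q:
            a = random.randrange(self.n_actions)  # explore / unseen state
        else:
            a = max(range(self.n_actions),
                    key=lambda i: self.q[key].get(i, 0.0))
        self.history.append(a)                    # remember what we did
        return a

    def update(self, key, a, r, next_key):
        qs = self.q.setdefault(key, {})
        nxt = max(self.q.get(next_key, {0: 0.0}).values())
        qs[a] = qs.get(a, 0.0) + self.alpha * (r + self.gamma * nxt
                                               - qs.get(a, 0.0))

# Minimal usage: two signal phases, observations arrive two steps late.
agent = DelayTolerantAgent(n_actions=2, delay=2)
a = agent.act(delayed_obs=0)
```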
Sensor-based time series analysis is an essential task for applications such as activity recognition and brain-computer interfaces. Recently, features extracted with deep neural networks (DNNs) have been shown to be more effective than conventional hand-crafted ones. However, most of these solutions rely solely on the network to extract application-specific information carried in the sensor data. Motivated by the fact that a small subset of the frequency components usually carries the primary information in sensor data, we propose a novel tree-structured wavelet neural network for sensor data analysis, named T-WaveNet. Specifically, with T-WaveNet, we first conduct a power spectrum analysis of the sensor data and decompose the input signal into various frequency subbands accordingly. We then construct a tree-structured network in which each node (corresponding to a frequency subband) is built with an invertible neural network (INN)-based wavelet transform. In this way, T-WaveNet provides a more effective representation of sensor information than existing DNN-based techniques, and it achieves state-of-the-art performance on various sensor datasets, including UCI-HAR for activity recognition, OPPORTUNITY for gesture recognition, BCICIV2a for intention recognition, and NinaPro DB1 for muscular movement recognition.
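The tree-structured subband decomposition can be sketched with fixed Haar filters standing in for T-WaveNet's learned INN-based wavelet transform: each node splits its input into low- and high-frequency halves, and recursing to a given depth yields the leaf subbands. This is a simplified illustration; T-WaveNet shapes the tree using the power-spectrum analysis rather than expanding it fully as done here.

```python
import numpy as np

def haar_split(x):
    """One two-channel wavelet split (orthonormal Haar filters stand in
    for the learned invertible-network transform at each tree node)."""
    even, odd = x[..., ::2], x[..., 1::2]
    low = (even + odd) / np.sqrt(2.0)    # coarse / low-frequency branch
    high = (even - odd) / np.sqrt(2.0)   # detail / high-frequency branch
    return low, high

def subband_tree(x, depth):
    """Recursively split into 2**depth frequency subbands (a full binary
    tree; the actual model prunes branches based on the power spectrum)."""
    if depth == 0:
        return [x]
    low, high = haar_split(x)
    return subband_tree(low, depth - 1) + subband_tree(high, depth - 1)

# A 64-sample toy signal decomposed into 8 subbands of 8 samples each.
signal = np.sin(np.linspace(0, 8 * np.pi, 64))
bands = subband_tree(signal, depth=3)
```

Because each split is orthonormal and invertible, signal energy is preserved across the leaves, mirroring the invertibility property the INN-based transform provides.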
Vehicle-to-vehicle communication can be unreliable, as interference causes communication failures. Consequently, the information flow topology (IFT) of a platoon of Connected Autonomous Vehicles (CAVs) can vary dynamically. This limits existing Cooperative Adaptive Cruise Control (CACC) strategies, most of which assume a fixed IFT. To address this problem, we introduce a CACC design that considers a dynamic information flow topology (CACC-DIFT) for CAV platoons. An adaptive Proportional-Derivative (PD) controller under a two-predecessor-following IFT is proposed to reduce the negative effects of communication failures. The PD controller parameters are determined so as to ensure string stability of the platoon. The designed controller also factors in the performance of individual vehicles; hence, when a communication failure occurs, the system switches to another form of CACC instead of degenerating to adaptive cruise control, which improves control performance considerably. The effectiveness of the proposed CACC-DIFT is validated through numerical experiments based on NGSIM field data. Results indicate that the proposed CACC-DIFT design outperforms a CACC with a predetermined information flow topology.
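The two-predecessor-following PD law can be sketched as a spacing- and speed-error feedback over whichever predecessor links are currently alive, degrading gracefully when one link fails instead of dropping back to radar-only adaptive cruise control. The gains and desired gap below are arbitrary illustrative values, not the string-stability-certified parameters derived in the paper.

```python
import numpy as np

def pd_control(x, v, i, links, kp=0.5, kd=0.8, gap=20.0):
    """PD acceleration command for vehicle i under a two-predecessor-
    following topology. x, v are platoon positions/speeds (leader first);
    links[j] says whether the V2V link to vehicle j is currently alive."""
    u, n = 0.0, 0
    for k in (1, 2):                       # the two immediate predecessors
        j = i - k
        if j >= 0 and links.get(j, True):  # skip failed links gracefully
            e_gap = (x[j] - x[i]) - k * gap    # spacing error to vehicle j
            e_vel = v[j] - v[i]                # speed error to vehicle j
            u += kp * e_gap + kd * e_vel
            n += 1
    return u / max(n, 1)                   # average over surviving links

# Toy check: follower 2 trails 10 m behind its desired gaps at equal
# speed, so the command should be a positive (catch-up) acceleration.
x = np.array([100.0, 80.0, 50.0])   # positions, leader first [m]
v = np.array([20.0, 20.0, 20.0])    # speeds [m/s]
u2 = pd_control(x, v, 2, links={0: True, 1: True})
```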