
Resilient UAV Traffic Congestion Control using Fluid Queuing Models

Published by: Jiazhen Zhou
Publication date: 2019
Research field: Informatics Engineering
Language: English





In this paper, we address the issue of congestion in future Unmanned Aerial Vehicle (UAV) traffic systems under uncertain weather. We treat the traffic of UAVs as fluid queues and introduce models for traffic dynamics at three basic traffic components: single link, tandem link, and merge link. The impact of weather uncertainty is captured as fluctuation of the saturation rate of fluid queue discharge (capacity), and the uncertainty is assumed to follow a continuous-time Markov process. We define the resilience of the UAV traffic system as the long-run stability of the traffic queues and the optimal throughput strategy under uncertainties. We derive necessary and sufficient conditions for the stability of the traffic queues in the three basic traffic components. These conditions can be easily verified in practice, and the optimal throughput can be calculated from the stability conditions. Our results offer strong insights and tools for designing flows in UAV traffic systems that are resilient against weather uncertainty.
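As a rough illustration of the modelling setup described above, the Python sketch below simulates a single-link fluid queue whose discharge (saturation) rate switches between a nominal and a degraded bad-weather value according to a two-state continuous-time Markov chain, and checks the natural long-run stability condition that the average inflow stays below the stationary-average capacity. All rates, the two-state weather model, and the stability check are illustrative assumptions rather than the paper's actual conditions.

import numpy as np

# Illustrative single-link fluid queue with Markov-modulated capacity.
lam = 8.0                    # constant fluid inflow rate (assumed)
c_good, c_bad = 12.0, 5.0    # saturation (discharge) rates per weather mode (assumed)
q_gb, q_bg = 0.2, 0.5        # Markov switching rates: good -> bad, bad -> good (assumed)

# Stationary distribution of the two-state weather chain.
pi_good = q_bg / (q_gb + q_bg)
pi_bad = q_gb / (q_gb + q_bg)

# Long-run stability check: average inflow below stationary-average capacity.
avg_capacity = pi_good * c_good + pi_bad * c_bad
print(f"average capacity = {avg_capacity:.2f}, inflow = {lam:.2f}")
print("long-run stable?", lam < avg_capacity)

# Monte-Carlo simulation of the queue trajectory under random weather switching.
rng = np.random.default_rng(0)
T, dt = 500.0, 0.01
t, queue, state, max_queue = 0.0, 0.0, "good", 0.0
while t < T:
    cap = c_good if state == "good" else c_bad
    queue = max(0.0, queue + (lam - cap) * dt)   # fluid dynamics: dq/dt = inflow - capacity
    max_queue = max(max_queue, queue)
    rate = q_gb if state == "good" else q_bg
    if rng.random() < rate * dt:                 # mode switch with probability ~ rate * dt
        state = "bad" if state == "good" else "good"
    t += dt
print(f"largest queue seen over the horizon: {max_queue:.1f}")

With these made-up numbers the inflow of 8 sits below the stationary-average capacity of 10, so the simulated queue stays bounded; pushing the inflow above that average makes it grow without bound, which is the intuition behind a long-run stability condition of this kind.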


Read also

Shuai Feng, Pietro Tesi (2016)
In this paper, we study networked control systems in the presence of Denial-of-Service (DoS) attacks, namely attacks that prevent transmissions over the communication network. The control objective is to maximize the frequency and duration of the DoS attacks under which closed-loop stability is not destroyed. Analog and digital predictor-based controllers with state resetting are proposed, which achieve the considered control objective for a general class of DoS signals. An example is given to illustrate the proposed solution approach.
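For intuition only, here is a toy discrete-time rendering of the predictor-with-state-resetting idea mentioned in that abstract: the controller keeps a model copy of the plant, propagates it open-loop while a DoS attack blocks transmissions, and resets it to the measured state whenever a transmission succeeds. The plant matrices, feedback gain, and attack probability below are made-up placeholders, not the controllers proposed in the paper.

import numpy as np

A = np.array([[1.0, 0.1],
              [0.0, 1.0]])            # hypothetical discrete-time plant
B = np.array([[0.0],
              [0.1]])
K = np.array([[-1.0, -1.8]])          # assumed stabilising state-feedback gain

rng = np.random.default_rng(1)
x = np.array([[1.0], [0.0]])          # true plant state
x_hat = x.copy()                      # controller-side predictor state

for k in range(200):
    dos = rng.random() < 0.4          # transmission blocked 40% of the time (assumed)
    if not dos:
        x_hat = x.copy()              # state resetting on a successful transmission
    u = K @ x_hat                     # control computed from the predicted state
    x = A @ x + B @ u                 # plant update
    x_hat = A @ x_hat + B @ u         # predictor propagates the same nominal model

print("final state norm:", float(np.linalg.norm(x)))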
Giacomo Como (2017)
Resilience has become a key aspect in the design of contemporary infrastructure networks. This comes as a result of ever-increasing loads, limited physical capacity, and fast-growing levels of interconnectedness and complexity due to the recent technological advancements. The problem has motivated a considerable amount of research within the last few years, particularly focused on the dynamical aspects of network flows, complementing more classical static network flow optimization approaches. In this tutorial paper, a class of single-commodity first-order models of dynamical flow networks is considered. A few results recently appeared in the literature and dealing with stability and robustness of dynamical flow networks are gathered and originally presented in a unified framework. In particular, (differential) stability properties of monotone dynamical flow networks are treated in some detail, and the notion of margin of resilience is introduced as a quantitative measure of their robustness. While emphasizing methodological aspects -- including structural properties, such as monotonicity, that enable tractability and scalability -- over the specific applications, connections to well-established road traffic flow models are made.
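As a minimal sketch of the class of models discussed in that tutorial, the snippet below simulates a single-commodity first-order flow network on a line of cells, d x_i/dt = f_{i-1}(x_{i-1}) - f_i(x_i), with a monotone outflow function saturating at each cell's capacity. Reading the slack between the exogenous inflow and the tightest capacity as a margin of resilience is a deliberate simplification for illustration, not the paper's definition.

import numpy as np

lam = 3.0                                   # exogenous inflow into the first cell (assumed)
caps = np.array([5.0, 4.0, 6.0])            # cell flow capacities (assumed)
flow = lambda x, c: c * (1.0 - np.exp(-x))  # monotone outflow, saturating at capacity c

x = np.zeros(3)                             # cell densities
dt, T = 0.01, 200.0
for _ in range(int(T / dt)):
    out = flow(x, caps)
    inflow = np.concatenate(([lam], out[:-1]))    # each cell receives the upstream outflow
    x = np.maximum(0.0, x + dt * (inflow - out))  # first-order dynamics, densities stay nonnegative

print("equilibrium densities:", np.round(x, 3))
print("informal margin of resilience:", caps.min() - lam)   # slack w.r.t. the tightest cell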
This paper proposes a reinforcement learning approach for traffic control with an adaptive horizon. To build the controller for the traffic network, a Q-learning-based strategy that controls the green-light passing time at the network intersections is applied. The controller includes two components: the regular Q-learning controller that controls the traffic light signal, and the adaptive controller that continuously optimizes the action space for the Q-learning algorithm in order to improve its efficiency. The regular Q-learning controller uses the control cost function as a reward function to determine the action to choose. The adaptive controller examines the control cost and updates the action space of the controller by determining the subset of actions that are most likely to obtain optimal results and shrinking the action space to that subset. Uncertainties in traffic influx and turning rate are introduced to test the robustness of the controller under a stochastic environment. The results show that the proposed Q-learning-based controller outperforms model predictive control (MPC) by reaching a stable solution in a shorter period and achieving lower control costs. The proposed Q-learning-based controller is also robust under 30% traffic demand uncertainty and 15% turning rate uncertainty.
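A toy rendering of the two-component scheme described above may help: a tabular Q-learning controller picks a green-light duration for a single intersection, while an outer adaptive step periodically shrinks the action set to the durations with the lowest observed cost. The action set, the cost function, and the one-line traffic environment are all assumptions made for this sketch.

import numpy as np

rng = np.random.default_rng(0)
actions = list(range(10, 61, 5))           # candidate green times in seconds (assumed)
n_states = 20                              # coarse bins of queue length (assumed)
Q = np.zeros((n_states, len(actions)))
alpha, gamma, eps = 0.1, 0.95, 0.1
action_costs = np.zeros(len(actions))
action_counts = np.zeros(len(actions))

def step(state, green_time):
    """Toy environment: cost grows with the residual queue and with wasted green time."""
    demand = rng.poisson(8)                                # stochastic traffic influx
    served = min(state + demand, green_time // 5)          # vehicles cleared this cycle
    next_state = int(min(n_states - 1, max(0, state + demand - served)))
    cost = next_state + 0.05 * green_time
    return next_state, cost

state = 0
active = list(range(len(actions)))         # indices of currently allowed actions
for episode in range(5000):
    if rng.random() < eps:
        a = int(rng.choice(active))
    else:
        a = active[int(np.argmin(Q[state, active]))]       # cost-minimising allowed action
    next_state, cost = step(state, actions[a])
    Q[state, a] += alpha * (cost + gamma * Q[next_state].min() - Q[state, a])
    action_costs[a] += cost
    action_counts[a] += 1
    state = next_state
    # Adaptive step: every 1000 episodes keep only the cheaper half of the actions.
    if episode % 1000 == 999 and len(active) > 4:
        mean_cost = action_costs[active] / np.maximum(action_counts[active], 1)
        order = np.argsort(mean_cost)
        active = [active[i] for i in order[: len(active) // 2]]

print("surviving green times:", [actions[i] for i in active])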
Congestion prediction represents a major priority for traffic management centres around the world to ensure timely incident response handling. The increasing amounts of generated traffic data have been used to train machine learning predictors for traffic; however, this is a challenging task due to inter-dependencies of traffic flow in both time and space. Recently, deep learning techniques have shown significant prediction improvements over traditional models; however, open questions remain around their applicability, accuracy and parameter tuning. This paper brings two contributions: 1) applying an outlier detection and anomaly adjustment method based on incoming and historical data streams, and 2) proposing an advanced deep learning framework for simultaneously predicting the traffic flow, speed and occupancy on a large number of monitoring stations along a highly circulated motorway in Sydney, Australia, including exit and entry loop count stations, and over varying training and prediction time horizons. The spatial and temporal features extracted from the 36.34 million data points are used in various deep learning architectures that exploit their spatial structure (convolutional neural networks), their temporal dynamics (recurrent neural networks), or both through hybrid spatio-temporal modelling (CNN-LSTM). We show that our deep learning models consistently outperform traditional methods, and we conduct a comparative analysis of the optimal time horizon of historical data required to predict traffic flow at different time points in the future. Lastly, we show that the anomaly adjustment method brings significant improvements to deep learning prediction in both time and space.
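To make the hybrid spatio-temporal idea concrete, here is a small PyTorch sketch of a CNN-LSTM predictor: a 1-D convolution extracts spatial features across stations at each time step, an LSTM models their temporal dynamics, and a linear head forecasts flow, speed and occupancy for every station at the next step. All layer sizes and input dimensions are illustrative and do not reflect the authors' configuration.

import torch
import torch.nn as nn

class CNNLSTMTraffic(nn.Module):
    def __init__(self, n_stations=30, n_features=3, hidden=64):
        super().__init__()
        self.conv = nn.Conv1d(n_features, 16, kernel_size=3, padding=1)  # spatial features
        self.lstm = nn.LSTM(16 * n_stations, hidden, batch_first=True)   # temporal dynamics
        self.head = nn.Linear(hidden, n_stations * n_features)           # next-step forecast
        self.n_stations, self.n_features = n_stations, n_features

    def forward(self, x):
        # x: (batch, time, stations, features) -> (batch, stations, features) one step ahead
        b, t, s, f = x.shape
        z = x.reshape(b * t, s, f).permute(0, 2, 1)       # (batch*time, features, stations)
        z = torch.relu(self.conv(z)).reshape(b, t, -1)    # spatial features per time step
        out, _ = self.lstm(z)
        y = self.head(out[:, -1])                         # last hidden state -> prediction
        return y.reshape(b, s, f)

model = CNNLSTMTraffic()
history = torch.randn(8, 12, 30, 3)   # 8 samples, 12 past steps, 30 stations, 3 signals
print(model(history).shape)           # torch.Size([8, 30, 3])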
This paper presents the results of a new deep learning model for traffic signal control. In this model, a novel state space approach is proposed to capture the main attributes of the control environment and the underlying temporal traffic movement patterns, including time of day, day of the week, signal status, and queue lengths. The performance of the model was examined over nine weeks of simulated data on a single intersection and compared to semi-actuated and fixed-time traffic controllers. The simulation analysis shows average delay reductions of 32% compared to actuated control and 37% compared to fixed-time control. The results highlight the potential of deep reinforcement learning as a signal control optimization method.
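One plausible way to build the state vector mentioned above (time of day, day of week, signal status, and queue lengths) is sketched below; the exact encoding used in the paper is not given here, so treat every design choice in this snippet as an assumption.

import numpy as np

def encode_state(hour, minute, weekday, phase, queues, n_phases=4):
    """Assemble a flat feature vector for a signal-control agent (illustrative encoding)."""
    tod = (hour * 60 + minute) / 1440.0                              # fraction of the day
    time_feat = [np.sin(2 * np.pi * tod), np.cos(2 * np.pi * tod)]   # cyclic time of day
    day_feat = np.eye(7)[weekday]                          # one-hot day of week
    phase_feat = np.eye(n_phases)[phase]                   # one-hot current signal phase
    queue_feat = np.asarray(queues, dtype=float) / 50.0    # normalised per-approach queues
    return np.concatenate([time_feat, day_feat, phase_feat, queue_feat])

s = encode_state(hour=17, minute=30, weekday=4, phase=2, queues=[12, 3, 7, 5])
print(s.shape)   # (2 + 7 + 4 + 4,) = (17,)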