
Traffic congestion anomaly detection and prediction using deep learning

Posted by Marian-Andrei Rizoiu
Publication date: 2020
Paper language: English





Congestion prediction represents a major priority for traffic management centres around the world to ensure timely incident response handling. The increasing amounts of generated traffic data have been used to train machine learning predictors for traffic; however, this is a challenging task due to inter-dependencies of traffic flow both in time and space. Recently, deep learning techniques have shown significant prediction improvements over traditional models; however, open questions remain around their applicability, accuracy and parameter tuning. This paper brings two contributions: 1) applying an outlier detection and anomaly adjustment method based on incoming and historical data streams, and 2) proposing an advanced deep learning framework for simultaneously predicting the traffic flow, speed and occupancy on a large number of monitoring stations along a highly circulated motorway in Sydney, Australia, including exit and entry loop count stations, and over varying training and prediction time horizons. The spatial and temporal features extracted from the 36.34 million data points are used in various deep learning architectures that exploit their spatial structure (convolutional neural networks), their temporal dynamics (recurrent neural networks), or both through a hybrid spatio-temporal modelling (CNN-LSTM). We show that our deep learning models consistently outperform traditional methods, and we conduct a comparative analysis of the optimal time horizon of historical data required to predict traffic flow at different time points in the future. Lastly, we show that the anomaly adjustment method brings significant improvements to the deep learning predictions across both time and space.
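The abstract describes a hybrid CNN-LSTM model that jointly predicts flow, speed and occupancy for many monitoring stations. The paper's exact architecture and hyper-parameters are not reproduced here; the PyTorch sketch below is only a minimal illustration of the general idea, in which a 1-D convolution over the station axis captures spatial structure, an LSTM captures temporal dynamics, and a linear head outputs the joint forecast. The station count, layer sizes and horizon are illustrative placeholders.

```python
import torch
import torch.nn as nn

class CNNLSTMTraffic(nn.Module):
    """Hybrid spatio-temporal sketch: a 1-D convolution captures spatial
    correlations across neighbouring stations at each time step, and an
    LSTM models the temporal dynamics of the resulting feature maps.
    All sizes are assumptions, not the paper's configuration."""

    def __init__(self, n_stations=30, n_features=3, conv_channels=32,
                 lstm_hidden=64, horizon=1):
        super().__init__()
        # Convolve over the station axis; the measurements act as input channels.
        self.conv = nn.Conv1d(n_features, conv_channels, kernel_size=3, padding=1)
        self.lstm = nn.LSTM(conv_channels * n_stations, lstm_hidden, batch_first=True)
        # Predict flow, speed and occupancy for every station, `horizon` steps ahead.
        self.head = nn.Linear(lstm_hidden, n_stations * n_features * horizon)
        self.n_stations, self.n_features, self.horizon = n_stations, n_features, horizon

    def forward(self, x):
        # x: (batch, time_steps, stations, features)
        b, t, s, f = x.shape
        x = x.permute(0, 1, 3, 2).reshape(b * t, f, s)   # (b*t, features, stations)
        x = torch.relu(self.conv(x)).reshape(b, t, -1)   # (b, t, channels*stations)
        out, _ = self.lstm(x)
        y = self.head(out[:, -1])                        # last hidden state -> forecast
        return y.view(b, self.horizon, self.n_stations, self.n_features)

# Toy usage: 8 samples, 12 past time steps, 30 stations, 3 measurements each.
model = CNNLSTMTraffic()
dummy = torch.randn(8, 12, 30, 3)
print(model(dummy).shape)   # torch.Size([8, 1, 30, 3])
```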




Read also

We consider the problem of detecting anomalies among a given set of processes using their noisy binary sensor measurements. The noiseless sensor measurement corresponding to a normal process is 0, and the measurement is 1 if the process is anomalous. The decision-making algorithm is assumed to have no knowledge of the number of anomalous processes. The algorithm is allowed to choose a subset of the sensors at each time instant until the confidence level on the decision exceeds the desired value. Our objective is to design a sequential sensor selection policy that dynamically determines which processes to observe at each time and when to terminate the detection algorithm. The selection policy is designed such that the anomalous processes are detected with the desired confidence level while incurring minimum cost which comprises the delay in detection and the cost of sensing. We cast this problem as a sequential hypothesis testing problem within the framework of Markov decision processes, and solve it using the actor-critic deep reinforcement learning algorithm. This deep neural network-based algorithm offers a low complexity solution with good detection accuracy. We also study the effect of statistical dependence between the processes on the algorithm performance. Through numerical experiments, we show that our algorithm is able to adapt to any unknown statistical dependence pattern of the processes.
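As a rough illustration of the actor-critic idea described above, the following sketch shows one possible network for a sequential sensor-selection policy; it is not the paper's implementation. Representing the state as a per-process anomaly belief, the hidden size and the explicit "stop" action are assumptions made for the example.

```python
import torch
import torch.nn as nn

class SensorSelectionActorCritic(nn.Module):
    """Sketch of an actor-critic network for sequential sensor selection.
    The state is the current posterior belief that each process is anomalous;
    the actor outputs per-sensor observation probabilities plus a 'stop'
    action, and the critic estimates the value of the belief state."""

    def __init__(self, n_processes=10, hidden=64):
        super().__init__()
        self.trunk = nn.Sequential(nn.Linear(n_processes, hidden), nn.ReLU())
        self.actor = nn.Linear(hidden, n_processes + 1)  # n sensors + stop action
        self.critic = nn.Linear(hidden, 1)

    def forward(self, belief):
        h = self.trunk(belief)
        return torch.softmax(self.actor(h), dim=-1), self.critic(h)

# A belief state over 10 processes, initially uniform at 0.5.
net = SensorSelectionActorCritic()
probs, value = net(torch.full((1, 10), 0.5))
print(probs.shape, value.shape)   # torch.Size([1, 11]) torch.Size([1, 1])
```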
The concept of mobility prediction represents one of the key enablers for efficient management of future cellular networks, which tend to be progressively more elaborate and dense due to the aggregation of multiple technologies. In this letter we aim to investigate the problem of cellular traffic prediction over a metropolitan area and propose a deep regression (DR) approach to model its complex spatio-temporal dynamics. DR is instrumental in capturing multi-scale and multi-domain dependences of mobile data by solving an image-to-image regression problem. A parametric relationship between input and expected output is defined and grid search is put in place to isolate and optimize performance. Experimental results confirm that the proposed method achieves a lower prediction error against state-of-the-art algorithms. We validate forecasting performance and stability by using a large public dataset of a European provider.
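A minimal sketch of the image-to-image regression idea, assuming the last few city-grid traffic snapshots are stacked as input channels and the network regresses the next snapshot; the letter's actual DR architecture, grid resolution and channel counts are not reproduced here.

```python
import torch
import torch.nn as nn

class TrafficDeepRegression(nn.Module):
    """Toy image-to-image regressor: the last t_in traffic snapshots of a
    city grid are stacked as input channels, and the network outputs the
    predicted traffic map for the next time step."""

    def __init__(self, t_in=6):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(t_in, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 1, 3, padding=1),
        )

    def forward(self, x):           # x: (batch, t_in, H, W)
        return self.net(x)          # (batch, 1, H, W) predicted traffic map

model = TrafficDeepRegression()
print(model(torch.randn(4, 6, 64, 64)).shape)   # torch.Size([4, 1, 64, 64])
```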
Botnets and malware continue to avoid detection by static rules engines when using domain generation algorithms (DGAs) for callouts to unique, dynamically generated web addresses. Common DGA detection techniques fail to reliably detect DGA variants that combine random dictionary words to create domain names that closely mirror legitimate domains. To combat this, we created a novel hybrid neural network, Bilbo the `bagging` model, that analyses domains and scores the likelihood they are generated by such algorithms and therefore are potentially malicious. Bilbo is the first parallel usage of a convolutional neural network (CNN) and a long short-term memory (LSTM) network for DGA detection. Our unique architecture is found to be the most consistent in performance in terms of AUC, F1 score, and accuracy when generalising across different dictionary DGA classification tasks compared to current state-of-the-art deep learning architectures. We validate using reverse-engineered dictionary DGA domains and detail our real-time implementation strategy for scoring real-world network logs within a large financial enterprise. In four hours of actual network traffic, the model discovered at least five potential command-and-control networks that commercial vendor tools did not flag.
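The following PyTorch sketch illustrates the parallel CNN/LSTM idea in the spirit of the architecture described above; it is not the paper's "Bilbo" model. The character-embedding size, filter widths and vocabulary are illustrative assumptions.

```python
import torch
import torch.nn as nn

class ParallelCNNLSTM(nn.Module):
    """Sketch of a parallel CNN/LSTM scorer for dictionary-DGA domains.
    A character embedding feeds both a 1-D CNN branch (local n-gram
    patterns) and an LSTM branch (longer-range character dependencies);
    the two representations are concatenated and scored."""

    def __init__(self, vocab=40, emb=32):
        super().__init__()
        self.embed = nn.Embedding(vocab, emb)
        self.conv = nn.Conv1d(emb, 64, kernel_size=4)
        self.lstm = nn.LSTM(emb, 64, batch_first=True)
        self.out = nn.Linear(64 + 64, 1)

    def forward(self, x):                      # x: (batch, max_len) character ids
        e = self.embed(x)                      # (batch, max_len, emb)
        c = torch.relu(self.conv(e.transpose(1, 2))).max(dim=2).values  # global max-pool
        _, (h, _) = self.lstm(e)
        return torch.sigmoid(self.out(torch.cat([c, h[-1]], dim=1)))    # DGA probability

model = ParallelCNNLSTM()
print(model(torch.randint(0, 40, (8, 63))).shape)   # torch.Size([8, 1])
```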
With the widespread adoption of cloud services, especially the extensive deployment of Web applications, it is important and challenging to detect anomalies from the packet payload. For example, the anomalies in the packet payload can be expressed as a number of specific strings which may cause attacks. Although some approaches have achieved remarkable progress, they have limited applications since they depend on in-depth expert knowledge, e.g., signatures describing anomalies or the communication protocol at the application level. Moreover, they might fail to detect payload anomalies that have long-term dependency relationships. To overcome these limitations and adaptively detect anomalies from the packet payload, we propose a deep learning based framework which consists of two steps. First, a novel feature engineering method is proposed to obtain block-based features via block sequence extraction and block embedding. The block-based features encapsulate both the high-dimension information and the underlying sequential information which facilitate the anomaly detection. Second, a neural network is designed to learn the representation of the packet payload based on Long Short-Term Memory (LSTM) and Convolutional Neural Networks (CNN). Furthermore, we cast the anomaly detection as a classification problem and stack a Multi-Layer Perceptron (MLP) on the above representation learning network to detect anomalies. Extensive experimental results on three public datasets indicate that our model achieves a higher detection rate, while keeping a lower false positive rate compared with five state-of-the-art methods.
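A minimal sketch of the block-based pipeline described above, assuming fixed-size byte blocks hashed into a small vocabulary as a toy stand-in for the paper's block embedding, followed by an LSTM over the block sequence and an MLP classifier; block size, vocabulary and layer sizes are placeholders.

```python
import torch
import torch.nn as nn

def payload_to_blocks(payload: bytes, block_size: int = 4, vocab: int = 4096):
    """Toy block-sequence extraction: split the raw payload into fixed-size
    blocks and hash each block into a small vocabulary for embedding.
    (Python's hash() is illustrative only and varies across runs.)"""
    blocks = [payload[i:i + block_size] for i in range(0, len(payload), block_size)]
    return torch.tensor([hash(b) % vocab for b in blocks], dtype=torch.long)

class PayloadAnomalyNet(nn.Module):
    """Block embedding -> LSTM over the block sequence -> MLP anomaly score."""

    def __init__(self, vocab=4096, emb=32, hidden=64):
        super().__init__()
        self.embed = nn.Embedding(vocab, emb)
        self.lstm = nn.LSTM(emb, hidden, batch_first=True)
        self.mlp = nn.Sequential(nn.Linear(hidden, 32), nn.ReLU(), nn.Linear(32, 1))

    def forward(self, block_ids):                       # (batch, n_blocks)
        _, (h, _) = self.lstm(self.embed(block_ids))
        return torch.sigmoid(self.mlp(h[-1]))           # anomaly probability

ids = payload_to_blocks(b"GET /index.php?id=1 OR 1=1 --").unsqueeze(0)
print(PayloadAnomalyNet()(ids).shape)                   # torch.Size([1, 1])
```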
With the advent of 5G and the research into beyond 5G (B5G) networks, a novel and very relevant research issue is how to manage the coexistence of different types of traffic, each with very stringent but completely different requirements. In this paper we propose a deep reinforcement learning (DRL) algorithm to slice the available physical layer resources between ultra-reliable low-latency communications (URLLC) and enhanced Mobile BroadBand (eMBB) traffic. Specifically, in our setting the time-frequency resource grid is fully occupied by eMBB traffic and we train the DRL agent to employ proximal policy optimization (PPO), a state-of-the-art DRL algorithm, to dynamically allocate the incoming URLLC traffic by puncturing eMBB codewords. Assuming that each eMBB codeword can tolerate a certain limited amount of puncturing beyond which it is in outage, we show that the policy devised by the DRL agent never violates the latency requirement of URLLC traffic and, at the same time, manages to keep the number of eMBB codewords in outage at minimum levels, when compared to other state-of-the-art schemes.
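To make the slicing formulation concrete, the sketch below defines a toy gymnasium environment in which the agent chooses which eMBB codeword to puncture for each incoming URLLC arrival, and codewords that exceed a puncturing budget count as outages; the arrival model, budget, episode length and reward are assumptions, not the paper's system model, and a PPO agent (e.g. from stable-baselines3) could be trained on it in place of the random policy shown.

```python
import numpy as np
import gymnasium as gym

class SlicingEnv(gym.Env):
    """Toy slicing environment: at each mini-slot a random amount of URLLC
    traffic arrives and the agent chooses which eMBB codeword to puncture.
    A codeword that accumulates more puncturing than its budget goes into
    outage; the reward penalises outages while URLLC is always served."""

    def __init__(self, n_codewords=8, budget=0.3):
        super().__init__()
        self.n, self.budget = n_codewords, budget
        self.action_space = gym.spaces.Discrete(n_codewords)
        self.observation_space = gym.spaces.Box(0.0, 1.0, shape=(n_codewords + 1,),
                                                dtype=np.float32)

    def reset(self, seed=None, options=None):
        super().reset(seed=seed)
        self.punct = np.zeros(self.n, dtype=np.float32)
        self.t = 0
        return self._obs(), {}

    def _obs(self):
        self._pending = self.np_random.uniform(0.0, 0.1)   # incoming URLLC load
        return np.concatenate([self.punct, [self._pending]]).astype(np.float32)

    def step(self, action):
        self.punct[action] += self._pending                 # puncture the chosen codeword
        reward = -float((self.punct > self.budget).sum())   # penalise codewords in outage
        self.t += 1
        return self._obs(), reward, self.t >= 50, False, {}

# Random-policy rollout; a trained PPO agent would replace the sampled actions.
env = SlicingEnv()
obs, _ = env.reset(seed=0)
total = 0.0
for _ in range(50):
    obs, r, terminated, truncated, _ = env.step(env.action_space.sample())
    total += r
print("episode return:", total)
```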

