
Enhance the performance of navigation: A two-stage machine learning approach

Posted by Yimin Fan
Publication date: 2020
Research language: English





Real-time traffic navigation is an important capability in smart transportation technologies and has been extensively studied in recent years. Owing to the rapid development of edge devices, collecting real-time traffic data is no longer a problem. However, real-time navigation is still considered particularly challenging because of the time-varying patterns of traffic flow and unpredictable accidents and congestion. To give accurate and reliable navigation results, predicting future traffic flow (speed, congestion, volume, etc.) quickly and accurately is of great importance. In this paper, we adopt the ideas of ensemble learning and develop a two-stage machine learning model to give accurate navigation results. We model the traffic flow as a time series and apply the XGBoost algorithm to obtain accurate predictions of future traffic conditions (first stage). We then apply the top-K Dijkstra algorithm to find a set of shortest paths from the given start point to the destination as candidates for the output optimal path. Using the prediction results from the first stage, we select one optimal path from the candidates as the output of the navigation algorithm. We show that our navigation algorithm can be greatly improved via EOPF (Enhanced Optimal Path Finding), which is based on a neural network (second stage). We show that our method can be over 7% better than the method without EOPF in many situations, which indicates the effectiveness of our model.
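
As a rough illustration of the pipeline described in the abstract, the sketch below assumes per-edge speed histories, a road network held in networkx, lag features for the XGBoost regressor, and a plain predicted-travel-time score standing in for the EOPF neural network; the function names and feature choices are illustrative, not the authors' implementation.

```python
# Sketch of the two-stage pipeline: XGBoost speed prediction (stage 1),
# then top-K shortest-path candidates re-ranked with the predictions (stage 2).
from itertools import islice

import networkx as nx
import numpy as np
import xgboost as xgb


def make_lag_features(speeds, n_lags=6):
    """Turn a 1-D speed series into (lagged features, next-step target)."""
    X = np.array([speeds[i:i + n_lags] for i in range(len(speeds) - n_lags)])
    y = np.array(speeds[n_lags:])
    return X, y


def predict_edge_speeds(history_per_edge, n_lags=6):
    """Stage 1: fit one XGBoost regressor per edge and predict its next speed."""
    predicted = {}
    for edge, speeds in history_per_edge.items():
        X, y = make_lag_features(speeds, n_lags)
        model = xgb.XGBRegressor(n_estimators=100, max_depth=4)
        model.fit(X, y)
        predicted[edge] = float(model.predict(np.array([speeds[-n_lags:]]))[0])
    return predicted


def top_k_candidate_paths(graph, source, target, k=5):
    """Stage 2a: K shortest simple paths by current travel time (top-K Dijkstra stand-in)."""
    return list(islice(
        nx.shortest_simple_paths(graph, source, target, weight="travel_time"), k))


def pick_best_path(paths, graph, predicted_speed):
    """Stage 2b: re-rank the candidates with the stage-1 predictions.
    The paper's EOPF stage uses a neural network; a simple predicted-travel-time
    score is used here purely as a placeholder."""
    def predicted_time(path):
        return sum(graph[u][v]["length"] / max(predicted_speed[(u, v)], 1e-3)
                   for u, v in zip(path, path[1:]))
    return min(paths, key=predicted_time)
```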


Read also

With the development and widespread use of wireless devices in recent years (mobile phones, Internet of Things, Wi-Fi), the electromagnetic spectrum has become extremely crowded. In order to counter security threats posed by rogue or unknown transmitters, it is important to identify RF transmitters not by the data content of the transmissions but based on the intrinsic physical characteristics of the transmitters. RF waveforms represent a particular challenge because of the extremely high data rates involved and the potentially large number of transmitters present in a given location. These factors outline the need for rapid fingerprinting and identification methods that go beyond traditional hand-engineered approaches. In this study, we investigate the use of machine learning (ML) strategies for the classification and identification problems, and the use of wavelets to reduce the amount of data required. Four different ML strategies are evaluated: deep neural nets (DNN), convolutional neural nets (CNN), support vector machines (SVM), and multi-stage training (MST) using accelerated Levenberg-Marquardt (A-LM) updates. The A-LM MST method preconditioned by wavelets was by far the most accurate, achieving 100% classification accuracy of transmitters, as tested using data originating from 12 different transmitters. We discuss strategies for extending MST to a much larger number of transmitters.
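
The paper's accelerated Levenberg-Marquardt MST training is not available off the shelf, so the sketch below only illustrates the shared preprocessing idea: compress each waveform with a discrete wavelet transform and feed the coarse coefficients to a standard classifier (the SVM baseline from the list above). The wavelet choice, decomposition level, and SVM settings are assumptions.

```python
# Wavelet-preconditioned transmitter classification: DWT compression + SVM.
import numpy as np
import pywt
from sklearn.svm import SVC


def wavelet_features(waveform, wavelet="db4", level=4):
    """Keep only the coarse approximation coefficients to reduce data volume."""
    coeffs = pywt.wavedec(waveform, wavelet, level=level)
    return coeffs[0]  # approximation coefficients at the deepest level


def train_transmitter_classifier(waveforms, labels):
    """waveforms: list of equal-length 1-D arrays; labels: transmitter IDs."""
    X = np.vstack([wavelet_features(w) for w in waveforms])
    clf = SVC(kernel="rbf", C=10.0)
    clf.fit(X, labels)
    return clf
```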
By informing accurate performance (e.g., capacity), health state management plays a significant role in safeguarding a battery and the system it powers. While most current approaches are primarily data-driven, the lack of an in-depth analysis of the battery performance degradation mechanism may limit their performance. To fill this research gap in data-driven battery performance degradation analysis, an invariant-learning-based method is proposed to investigate whether battery performance degradation follows a fixed behavior. First, to unfold the hidden dynamics of cycling battery data, measurements are reconstructed in a phase subspace. Next, a novel multi-stage division strategy is put forward to judge the existence of multiple degradation behaviors. The whole aging procedure is then sequentially divided into several segments, in which cycling data with consistent degradation speed are assigned to the same stage. Simulations on a well-known benchmark verify the efficacy of the proposed multi-stage identification strategy. The proposed method not only enables insights into the degradation mechanism from a data perspective, but will also be helpful to related topics such as state of health.
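
A minimal sketch of the phase-subspace reconstruction step (a standard time-delay embedding) together with a toy stage-division rule based on local degradation speed; the embedding dimension, delay, and threshold are assumptions, and the paper's actual invariant-learning criterion is not reproduced here.

```python
# Phase-subspace reconstruction of a capacity-vs-cycle series plus a toy
# multi-stage split driven by changes in local degradation speed.
import numpy as np


def delay_embed(series, dim=3, tau=2):
    """Map a 1-D degradation series into a dim-dimensional phase subspace."""
    series = np.asarray(series, dtype=float)
    n = len(series) - (dim - 1) * tau
    return np.column_stack([series[i * tau: i * tau + n] for i in range(dim)])


def split_by_slope(series, threshold=0.05):
    """Toy stage division: open a new stage whenever the local degradation
    speed (first difference) jumps by more than `threshold`."""
    diffs = np.diff(np.asarray(series, dtype=float))
    stages, start = [], 0
    for i in range(1, len(diffs)):
        if abs(diffs[i] - diffs[i - 1]) > threshold:
            stages.append((start, i))
            start = i
    stages.append((start, len(series) - 1))
    return stages
```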
With the growing demand for data connectivity, network service providers are faced with the task of reducing their capital and operational expenses while simultaneously improving network performance and addressing the increased connectivity demand. Although Network Function Virtualization (NFV) has been identified as a solution, several challenges must be addressed to ensure its feasibility. In this paper, we address the Virtual Network Function (VNF) placement problem by developing a machine learning decision tree model that learns from the effective placement of the various VNF instances forming a Service Function Chain (SFC). The model takes several performance-related features from the network as input and selects the placement of the various VNF instances on network servers with the objective of minimizing the delay between dependent VNF instances. The benefits of using machine learning are realized by moving away from complex mathematical modelling of the system and towards a data-based understanding of the system. Using the Evolved Packet Core (EPC) as a use case, we evaluate our model on different data center networks and compare it to the BACON algorithm in terms of the delay between interconnected components and the total delay across the SFC. Furthermore, a time complexity analysis is performed to show the effectiveness of the model in NFV applications.
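
A hedged sketch of the learning setup described above: a scikit-learn decision tree trained on a few hand-made rows mapping performance-related network features to the server chosen for a VNF instance. The feature set, the tiny training table, and the server labels are purely illustrative.

```python
# Decision-tree VNF placement: features of the current network state in,
# index of the server to host the next VNF instance out.
import numpy as np
from sklearn.tree import DecisionTreeClassifier

# Hypothetical features per placement decision:
# [server CPU load, server memory load, delay to previously placed VNF (ms)]
X_train = np.array([
    [0.2, 0.3, 1.5],
    [0.8, 0.7, 0.4],
    [0.5, 0.4, 3.0],
    [0.1, 0.2, 0.9],
])
# Label: index of the server selected in the (assumed effective) placements
y_train = np.array([0, 1, 0, 2])

placement_model = DecisionTreeClassifier(max_depth=5).fit(X_train, y_train)
next_server = placement_model.predict([[0.3, 0.25, 1.1]])[0]
```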
Liang Liu, Shuowen Zhang (2020)
Recently, integrating communication and sensing functions into a common network has attracted a great amount of attention. This paper considers advanced signal processing techniques for enabling the radar to sense the environment via communication signals. Since orthogonal frequency division multiplexing (OFDM) and multiple-input multiple-output (MIMO) technologies are widely used in legacy cellular systems, this paper proposes a two-stage signal processing approach for radar sensing in a MIMO-OFDM system, where the scattered channels caused by various targets are estimated in the first stage, and the location information of the targets is then extracted from their scattered channels in the second stage. Specifically, based on the observations that radar sensing is similar to multi-path communication, in the sense that different targets scatter the signal sent by the radar transmitter to the radar receiver with various delays, and that the number of scatterers is limited, we show that the OFDM-based channel training approach together with the compressed sensing technique can be utilized to estimate the scattered channels efficiently in Stage I. Moreover, to tackle the challenge arising from range resolution when sensing the location of closely spaced targets, we show that the MIMO radar technique can be leveraged in Stage II such that the radar has sufficient spatial samples to detect even targets in close proximity based on their scattered channels. Finally, numerical examples are provided to show the effectiveness of our proposed sensing approach, which relies merely on existing MIMO-OFDM communication techniques.
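
A real-valued toy version of the Stage-I idea: because only a few scatterers contribute, the delay-domain channel is sparse and can be recovered from far fewer pilot observations than taps using a compressed-sensing solver. Actual OFDM pilots and channels are complex-valued and the measurement matrix would be a partial DFT; the random real matrix and scikit-learn's OMP solver here are simplifying assumptions.

```python
# Sparse delay-domain channel recovery from few pilot measurements via OMP.
import numpy as np
from sklearn.linear_model import OrthogonalMatchingPursuit

rng = np.random.default_rng(0)
n_delay_taps, n_pilots, n_scatterers = 128, 48, 3

# Sparse channel: a few nonzero taps correspond to a few targets.
h = np.zeros(n_delay_taps)
h[rng.choice(n_delay_taps, n_scatterers, replace=False)] = rng.normal(size=n_scatterers)

# Toy measurement matrix standing in for the pilot-based sensing operator.
A = rng.normal(size=(n_pilots, n_delay_taps)) / np.sqrt(n_pilots)
y = A @ h + 0.01 * rng.normal(size=n_pilots)

omp = OrthogonalMatchingPursuit(n_nonzero_coefs=n_scatterers).fit(A, y)
h_hat = omp.coef_  # estimated sparse channel; nonzero taps give the target delays
```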
Xinan Wang, Yishen Wang, Di Shi (2019)
With the increasing complexity of modern power systems, conventional dynamic load modeling with ZIP and induction motors (ZIP + IM) is no longer adequate to address current load characteristic transitions. In recent years, the WECC composite load model (WECC CLM) has been shown to capture dynamic load responses more effectively than traditional load models in various stability studies and contingency analyses. However, a detailed WECC CLM typically has a high degree of complexity, with over one hundred parameters, and there is no systematic approach to identifying and calibrating these parameters. Enabled by the wide deployment of PMUs and advanced deep learning algorithms, we propose a double deep Q-learning network (DDQN)-based, two-stage load modeling framework for the WECC CLM. This two-stage method decomposes the complicated WECC CLM for more efficient identification and does not require explicit model details. In the first stage, the DDQN agent determines an accurate load composition. In the second stage, the parameters of the WECC CLM are selected from a group of Monte-Carlo simulations. The set of selected load parameters is expected to best approximate the true transient responses. The proposed framework is verified using the IEEE 39-bus test system on commercial simulation platforms.
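
The load-composition environment itself cannot be reproduced from the abstract, but the double-Q target that gives the DDQN agent its name can be shown in a few lines: the online network selects the next action while the target network evaluates it. Batch shapes and the discount factor are assumptions.

```python
# Double-DQN target computation: action selection by the online network,
# action evaluation by the target network.
import numpy as np


def ddqn_targets(rewards, next_q_online, next_q_target, dones, gamma=0.99):
    """rewards, dones: shape (batch,); next_q_*: shape (batch, n_actions)."""
    best_actions = np.argmax(next_q_online, axis=1)  # online net picks the action
    evaluated = next_q_target[np.arange(len(rewards)), best_actions]  # target net scores it
    return rewards + gamma * (1.0 - dones) * evaluated
```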
