
An Intelligent Prediction System for Mobile Source Localization Using Time Delay Measurements

 Added by Xiaoping Wu
 Publication date 2020
Research language: English





In this paper, we introduce an intelligent prediction system for mobile source localization in the industrial Internet of Things. The system jointly predicts the position and velocity of the mobile source from Time Delay (TD) measurements. To predict the position and velocity, a Relaxed Semi-Definite Programming (RSDP) algorithm is first designed by dropping the rank-one constraint. However, dropping the rank-one constraint produces a suboptimal solution. To improve performance, we further put forward a Penalty Function Semi-Definite Programming (PF-SDP) method that recovers a rank-one solution of the optimization problem by introducing penalty terms. An Adaptive Penalty Function Semi-Definite Programming (APF-SDP) algorithm is then proposed to avoid excessive penalties by adaptively choosing the penalty coefficient. We conduct experiments in both a simulation environment and a real system to demonstrate the effectiveness of the proposed method. The results demonstrate that the intelligent APF-SDP algorithm outperforms PF-SDP in terms of position and velocity estimation regardless of the noise level.
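To make the relaxation-and-penalty idea concrete, here is a minimal Python/CVXPY sketch of the general pattern the abstract describes: a rank-constrained SDP is first solved with the rank-one constraint dropped (the RSDP step), and a penalty term is then added and grown iteratively to push the solution back toward rank one (the PF-SDP/APF-SDP idea). The trace-minus-leading-eigenvector penalty surrogate, the update schedule, and all problem data below are illustrative assumptions, not the paper's exact formulation.

import numpy as np
import cvxpy as cp

def penalized_sdp(C, A_list, b, mu0=1.0, rho=2.0, iters=10):
    # Z is the lifted PSD variable whose rank-one constraint was dropped (RSDP).
    # The penalty mu * (trace(Z) - u'Zu) is nonnegative for unit u and vanishes
    # only when Z is rank one; u is refreshed as the leading eigenvector of the
    # last iterate (a standard surrogate, assumed in place of the paper's terms).
    n = C.shape[0]
    u, mu, Z_val = np.ones(n) / np.sqrt(n), mu0, None
    for _ in range(iters):
        Z = cp.Variable((n, n), PSD=True)
        cons = [cp.trace(A @ Z) == bi for A, bi in zip(A_list, b)]
        penalty = mu * (cp.trace(Z) - cp.sum(cp.multiply(np.outer(u, u), Z)))
        cp.Problem(cp.Minimize(cp.trace(C @ Z) + penalty), cons).solve()
        Z_val = Z.value
        w, V = np.linalg.eigh(Z_val)
        u = V[:, -1]
        if np.trace(Z_val) - w[-1] < 1e-6:   # essentially rank one: stop
            break
        mu = min(mu * rho, 1e4)              # APF-SDP idea: grow mu cautiously
    return Z_val

# Toy check: recover a rank-one Z* = z0 z0' from quadratic measurements.
rng = np.random.default_rng(0)
z0 = np.array([1.0, 2.0, -1.0])
vecs = rng.standard_normal((4, 3))
A_list = [np.outer(a, a) for a in vecs]
b = [float(a @ z0) ** 2 for a in vecs]
Z = penalized_sdp(np.eye(3), A_list, b)
print(np.trace(Z) - np.linalg.eigvalsh(Z)[-1])   # rank-one gap, ~0 at convergence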




Related research

Space information networks (SINs) face an ever-increasing demand for high-speed, high-capacity, seamless data transmission due to the integration of ground, air, and space communications. However, this imposes a new paradigm on the architecture design of the integrated SIN. Reconfigurable intelligent surfaces (RISs) and mobile edge computing (MEC) are among the most promising techniques conceived to improve communication and computation capability, by reconfiguring the wireless propagation environment and by offloading, respectively. Hence, converging RISs and MEC in SINs is a promising way to reap the twin benefits of computation and communication. In this article, we propose an RIS-assisted collaborative MEC architecture for SINs and discuss its implementation. We then present its potential benefits, major challenges, and feasible applications. Subsequently, we study different cases to evaluate the system data rate and latency. Finally, we conclude with a list of open issues in this research area.
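As a rough illustration of the kind of data-rate evaluation mentioned above, the sketch below uses the common idealized RIS model in which coherently phase-aligned elements add in amplitude, so the reflected power grows roughly with the square of the element count. All link parameters are made-up placeholders, not values from the article.

import numpy as np

def ris_rate_bps(n_elements, tx_power_w, direct_gain, element_gain, noise_w, bw_hz):
    # Idealized model: the direct path and N aligned reflections add in amplitude.
    amp = np.sqrt(direct_gain) + n_elements * np.sqrt(element_gain)
    return bw_hz * np.log2(1.0 + tx_power_w * amp**2 / noise_w)

for n in (0, 64, 256):   # no RIS vs. two illustrative RIS sizes
    print(n, ris_rate_bps(n, 1.0, 1e-10, 1e-13, 1e-13, 20e6))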
Undoubtedly, Mobile Augmented Reality (MAR) applications for 5G and beyond wireless networks have recently attracted notable attention. However, they require significant computational and storage resources at the end device and/or in the network via Edge Cloud (EC) support. In this work, a MAR service is considered through the lens of microservices, where MAR service components can be decomposed and anchored at different locations, ranging from the end device to different ECs, in order to optimize overall service and network efficiency. To this end, we propose a mobility-aware MAR service decomposition that uses a Long Short-Term Memory (LSTM) deep neural network to provide efficient proactive decision making in real time. More specifically, the LSTM network is trained offline with optimal solutions derived from a mathematical programming formulation. Decision making at the inference stage is then used to optimize the decomposition of MAR services. A wide set of numerical investigations reveals that the mobility-aware LSTM network manages to outperform recently proposed schemes in terms of both decision-making quality and computational time.
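A minimal PyTorch sketch of the pattern described here: an LSTM reads a window of mobility/context features and emits a placement for each MAR component. All dimensions, the feature set, and the class layout are assumptions; training targets would come from the offline mathematical program.

import torch
import torch.nn as nn

class PlacementLSTM(nn.Module):
    # Maps a (batch, time, features) mobility trace to one location logit
    # vector per MAR microservice component (end device or one of the ECs).
    def __init__(self, feat_dim=8, hidden=64, n_components=4, n_locations=3):
        super().__init__()
        self.lstm = nn.LSTM(feat_dim, hidden, batch_first=True)
        self.head = nn.Linear(hidden, n_components * n_locations)
        self.shape = (n_components, n_locations)

    def forward(self, x):
        out, _ = self.lstm(x)                    # out: (batch, time, hidden)
        return self.head(out[:, -1]).view(-1, *self.shape)

model = PlacementLSTM()
# Offline: fit the logits (e.g. cross-entropy) to solver-labelled placements.
# Online: one forward pass plus argmax gives a proactive placement in real time.
placement = model(torch.randn(1, 10, 8)).argmax(dim=-1)
print(placement)   # e.g. tensor([[2, 0, 1, 1]]): component-to-location choices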
In this paper, we consider the problem of modelling the average delay experienced by a packet in a single-cell IEEE 802.11 DCF wireless local area network. The packet arrival process at each node i is assumed to be Poisson with rate parameter λ_i. Since the nodes share a single channel, they must contend with one another for a successful transmission. The mean packet delay is approximated by modelling the system as a 1-limited random polling system with zero switchover time. We show that even for non-homogeneous packet arrival processes, the mean packet delay is the same across the queues and depends on the system utilization factor and the aggregate throughput of the MAC. Extensive simulations are conducted to verify the analytical results.
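The equal-mean-delay claim is easy to probe numerically. The toy simulation below implements a 1-limited random polling discipline with zero switchover time (one packet served per visit, the server picking a non-empty queue uniformly at random), with a unit service time standing in for the reciprocal of the aggregate MAC throughput. It is a sanity-check sketch, not the paper's analytical model.

import random

def polling_delays(lams, service=1.0, horizon=100_000, seed=1):
    rng = random.Random(seed)
    n = len(lams)
    queues = [[] for _ in range(n)]                 # arrival times per queue
    next_arr = [rng.expovariate(l) for l in lams]   # Poisson arrivals, rate λ_i
    delays = [[] for _ in range(n)]
    t = 0.0
    while t < horizon:
        for i in range(n):                          # admit arrivals up to time t
            while next_arr[i] <= t:
                queues[i].append(next_arr[i])
                next_arr[i] += rng.expovariate(lams[i])
        busy = [i for i in range(n) if queues[i]]
        if not busy:
            t = min(next_arr)                       # idle: jump to next arrival
            continue
        i = rng.choice(busy)                        # 1-limited random polling
        t += service                                # zero switchover time
        delays[i].append(t - queues[i].pop(0))      # sojourn time of that packet
    return [sum(d) / len(d) for d in delays]

# Unequal λ_i, utilization 0.6: per-queue mean delays come out nearly equal.
print(polling_delays([0.1, 0.2, 0.3]))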
The Internet of Things (IoT) is considered an enabling platform for a variety of promising applications, such as smart transportation and smart cities, where massive numbers of devices are interconnected for data collection and processing. These IoT applications place high demands on storage and computing capacity, while IoT devices are usually resource-constrained. As a potential solution, mobile edge computing (MEC) deploys cloud resources in the proximity of IoT devices so that their requests can be better served locally. In this work, we investigate computation offloading in a dynamic MEC system with multiple edge servers, where computational tasks with various requirements are dynamically generated by IoT devices and offloaded to MEC servers in a time-varying operating environment (e.g., channel conditions that change over time). The objective of this work is to maximize the number of tasks completed before their respective deadlines and to minimize energy consumption. To this end, we propose an end-to-end Deep Reinforcement Learning (DRL) approach to select the best edge server for offloading and to allocate the optimal computational resources such that the expected long-term utility is maximized. Simulation results demonstrate that the proposed approach outperforms existing methods.
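For concreteness, a minimal Q-network sketch for this kind of offloading agent is shown below: the state would bundle time-varying channel gains with task attributes (size, deadline), and each discrete action pairs an edge server with a resource level. The sizes, reward shaping, and action encoding are assumptions, not the paper's design.

import torch
import torch.nn as nn

N_SERVERS, N_LEVELS, STATE_DIM = 3, 4, 10        # illustrative sizes

class OffloadQNet(nn.Module):
    # Q(s, a) over all (server, resource-level) pairs in one forward pass.
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(STATE_DIM, 128), nn.ReLU(),
            nn.Linear(128, N_SERVERS * N_LEVELS))

    def forward(self, s):
        return self.net(s)

q = OffloadQNet()
state = torch.randn(1, STATE_DIM)                # channel gains + task features
action = int(q(state).argmax())                  # greedy action selection
server, level = divmod(action, N_LEVELS)
# Training would use standard DQN targets with a reward such as
# r = 1[deadline met] - beta * energy_consumed (assumed shaping, not the paper's).
print(server, level)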
A deep learning approach based on big data is proposed to locate broadband acoustic sources using a single hydrophone in ocean waveguides with uncertain bottom parameters. Several 50-layer residual neural networks, trained on a large number of sound-field replicas generated by an acoustic propagation model, are used to handle the bottom uncertainty in source localization. A two-step training strategy is presented to improve the training of the deep models. First, the range is discretized on a coarse (5 km) grid. Subsequently, the source range within the selected interval and the source depth are discretized on a finer (0.1 km and 2 m) grid. The deep learning methods were demonstrated on simulated magnitude-only multi-frequency data in uncertain environments. Experimental data from the China Yellow Sea also validated the approach.
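The two-step discretization is simple to state in code. The helper below turns a (range, depth) pair into a coarse 5 km range class and then fine 0.1 km range and 2 m depth classes within the selected interval; the grid sizes come from the abstract, while the function itself is an illustrative assumption.

def coarse_fine_labels(range_km, depth_m, coarse_km=5.0, fine_km=0.1, depth_step_m=2.0):
    # Step 1: coarse range interval; step 2: fine range offset and depth class.
    coarse = int(range_km // coarse_km)
    fine_range = int((range_km - coarse * coarse_km) // fine_km)
    fine_depth = int(depth_m // depth_step_m)
    return coarse, fine_range, fine_depth

print(coarse_fine_labels(12.34, 37.0))   # -> (2, 23, 18)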