
A review of smartphones based indoor positioning: challenges and applications

Added by Khuong An Nguyen
Publication date: 2020
Research language: English





The continual proliferation of mobile devices has encouraged much effort in using smartphones for indoor positioning. This article is dedicated to reviewing the most recent and interesting smartphone-based indoor navigation systems, ranging from electromagnetic to inertial to visible-light ones, with an emphasis on their unique challenges and potential real-world applications. A taxonomy of smartphone sensors will be introduced, which serves as the basis for categorising the different positioning systems under review. A set of evaluation criteria will be devised. For each sensor category, the most recent, interesting and practical systems will be examined, with detailed discussion of the open research questions for academics and of the practicality for potential clients.





The accuracy of smartphone-based positioning methods using WiFi usually suffers from ranging errors caused by non-line-of-sight (NLOS) conditions. Previous research usually exploits several statistical features from a long time series (hundreds of samples) of WiFi received signal strength (RSS) or WiFi round-trip time (RTT) to achieve high identification accuracy. However, the long time series, i.e., the large sample size, leads to high power and time consumption in data collection for both training and testing. It is also detrimental to the user experience, since users must wait a long time for enough samples to be gathered. Therefore, this paper proposes a new real-time NLOS/LOS identification method for smartphone-based indoor positioning systems using WiFi RTT and RSS. Based on our extensive analysis of RSS and RTT features, a machine-learning method using a random forest was chosen and developed to separate the samples into NLOS and LOS conditions. Experiments in different environments show that our method achieves a discrimination accuracy of about 94% with a sample size of 10. Given the theoretically shortest WiFi ranging interval of 100 ms on RTT-enabled smartphones, our algorithm provides the shortest latency, 1 s, to obtain a result among all state-of-the-art methods.
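The windowed-feature, random-forest approach described above can be sketched as follows. This is an illustrative toy, not the paper's pipeline: the feature set (window mean/std of RTT and RSS), the synthetic data generator and all parameter values are assumptions.

```python
# Sketch: classify a 10-sample window of WiFi RTT/RSS measurements as
# LOS or NLOS with a random forest (illustrative, synthetic data).
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

def window_features(rtt, rss):
    """Summarise one 10-sample window into simple statistical features."""
    return [rtt.mean(), rtt.std(), rss.mean(), rss.std()]

def synth_windows(n, nlos):
    """Generate synthetic windows; NLOS adds a positive RTT bias and RSS spread."""
    X = []
    for _ in range(n):
        rtt = rng.normal(50.0, 2.0, 10) + (rng.exponential(15.0) if nlos else 0.0)
        rss = rng.normal(-70.0 if nlos else -55.0, 6.0 if nlos else 2.0, 10)
        X.append(window_features(rtt, rss))
    return np.array(X)

X = np.vstack([synth_windows(200, False), synth_windows(200, True)])
y = np.array([0] * 200 + [1] * 200)  # 0 = LOS, 1 = NLOS

clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
acc = clf.score(X, y)  # training accuracy on the synthetic data
```

Because each decision uses only a 10-sample window, a classifier of this shape can emit a LOS/NLOS label as soon as one window is collected, which is the latency argument made in the abstract.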
Facial affect analysis (FAA) using visual signals is important in human-computer interaction. Early methods focused on extracting appearance and geometry features associated with human affect while ignoring the latent semantic information among individual facial changes, leading to limited performance and generalization. Recent work attempts to establish graph-based representations to model these semantic relationships and to develop frameworks that leverage them for various FAA tasks. In this paper, we provide a comprehensive review of graph-based FAA, including the evolution of its algorithms and their applications. First, the FAA background knowledge is introduced, especially the role of the graph. We then discuss the approaches widely used for graph-based affective representation in the literature and show a trend towards graph construction. For relational reasoning in graph-based FAA, existing studies are categorized according to their use of traditional methods or deep models, with special emphasis on the latest graph neural networks. Performance comparisons of the state-of-the-art graph-based FAA methods are also summarized. Finally, we discuss the challenges and potential directions. As far as we know, this is the first survey of graph-based FAA methods. Our findings can serve as a reference for future research in this field.
With the rapid development of the Internet of Things (IoT), Indoor Positioning Systems (IPS) have attracted significant interest in academic research. Ultra-Wideband (UWB) is an emerging technology that can be employed for IPS, as it offers centimetre-level accuracy. However, UWB systems still face several technical challenges in practice, one of which is Non-Line-of-Sight (NLoS) signal propagation. Several machine learning approaches have been applied to NLoS component identification, but when the data contains only a very small proportion of NLoS components, existing algorithms find them very difficult to classify. This paper focuses on an anomaly detection approach based on Gaussian Distribution (GD) and Generalized Gaussian Distribution (GGD) algorithms to detect and identify the NLoS components. The simulation results indicate that the proposed approach provides robust NLoS component identification, improving NLoS signal classification accuracy and, in turn, significantly improving the UWB positioning system.
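The Gaussian-distribution anomaly-detection idea can be illustrated with a minimal sketch: fit a Gaussian to LOS-dominated training features, then flag samples whose likelihood under that model falls below a threshold as NLoS. The scalar ranging-error feature, the synthetic data and the 3-sigma threshold are illustrative assumptions, not the paper's configuration.

```python
# Sketch of Gaussian anomaly detection for NLoS identification.
import numpy as np

rng = np.random.default_rng(1)

# Fit mean/variance on (mostly LOS) training features, e.g. ranging errors in metres.
los_train = rng.normal(0.0, 0.1, 1000)
mu, var = los_train.mean(), los_train.var()

def gaussian_pdf(x, mu, var):
    """Likelihood of x under the fitted LOS Gaussian."""
    return np.exp(-(x - mu) ** 2 / (2 * var)) / np.sqrt(2 * np.pi * var)

# Flag samples whose likelihood under the LOS model falls below epsilon
# (here chosen as the density at roughly 3 standard deviations).
epsilon = gaussian_pdf(mu + 3 * np.sqrt(var), mu, var)

test = np.array([0.05, -0.08, 1.2, 0.9])  # last two mimic the NLoS positive bias
is_nlos = gaussian_pdf(test, mu, var) < epsilon
```

Because the model is fitted only on the majority (LOS) class, this works even when NLoS samples are too rare for a supervised classifier, which is the motivation stated in the abstract.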
The terahertz spectrum is being investigated to provide ultra-high-throughput radio links for indoor applications, e.g., virtual reality (VR), as well as outdoor applications, e.g., backhaul links. This paper investigates a monopulse-based beam tracking approach for limited-mobility users relying on sparse massive multiple-input multiple-output (MIMO) wireless channels. Owing to the sparsity, beamforming is realized using digitally controlled radio frequency (RF) / intermediate frequency (IF) phase shifters with a constant-amplitude constraint for transmit-power compliance. A monopulse-based beam tracking technique using received signal strength indication (RSSI) is adopted to avoid feedback overheads, for reasons of efficacy and resource savings. The Matlab implementation of the beam tracking algorithm is also reported. It has been kept as general-purpose as possible, using functions in which the channel, beamforming codebooks, monopulse comparator, etc. can easily be updated for specific requirements with minimal code amendments.
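The monopulse idea can be sketched in a toy form: probe two beams squinted either side of the current pointing angle, and step the pointing angle toward the beam reporting the stronger RSSI. The Gaussian beam-gain model, squint angle and step size below are illustrative assumptions (in Python rather than the paper's Matlab), not the reported implementation.

```python
# Toy monopulse-style beam tracking driven by RSSI differences.

def beam_gain_db(pointing_deg, user_deg, beamwidth_deg=5.0):
    """Simple quadratic (Gaussian-shaped, in dB) beam pattern; illustrative."""
    off = pointing_deg - user_deg
    return -12.0 * (off / beamwidth_deg) ** 2

def track(pointing_deg, user_deg, squint_deg=1.0, gain_step=0.5, iters=20):
    """Iteratively steer toward the user using a left/right RSSI comparison."""
    for _ in range(iters):
        rssi_left = beam_gain_db(pointing_deg - squint_deg, user_deg)
        rssi_right = beam_gain_db(pointing_deg + squint_deg, user_deg)
        # Monopulse-style error signal: difference of the two squinted beams.
        pointing_deg += gain_step * (rssi_right - rssi_left)
    return pointing_deg

final_deg = track(0.0, 3.0)  # start at 0°, user sits at 3°
```

For limited-mobility users the true angle drifts slowly, so a few such probe-and-step updates per coherence interval suffice, which is why no explicit angle feedback from the receiver is needed.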
Mobile devices such as smartphones and autonomous vehicles increasingly rely on deep neural networks (DNNs) to execute complex inference tasks such as image classification and speech recognition, among others. However, continuously executing the entire DNN on the mobile device can quickly deplete its battery. Although task offloading to edge servers may decrease the mobile device's computational burden, erratic patterns in channel quality, network load and edge server load can lead to significant delays in task execution. Recently, approaches based on split computing (SC) have been proposed, in which the DNN is split into a head and a tail model, executed respectively on the mobile device and on the edge server. Ultimately, this may reduce bandwidth usage as well as energy consumption. Another approach, called early exiting (EE), trains models to present multiple exits earlier in the architecture, each providing increasingly higher target accuracy. The trade-off between accuracy and delay can therefore be tuned according to the current conditions or application demands. In this paper, we provide a comprehensive survey of the state of the art in SC and EE strategies, presenting a comparison of the most relevant approaches. We conclude the paper with a set of compelling research challenges.
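The early-exiting idea surveyed above can be sketched in a few lines: run the model's stages in order and stop at the first exit head whose softmax confidence clears a threshold. The stand-in stages and exit heads below are illustrative assumptions, not a trained network from the survey.

```python
# Minimal sketch of early-exit (EE) inference with a confidence threshold.
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def early_exit_infer(x, stages, exits, threshold=0.9):
    """stages: per-stage feature extractors; exits: per-stage classifiers (logits).
    Returns (predicted class, index of the exit that fired)."""
    h = x
    for i, (stage, exit_head) in enumerate(zip(stages, exits)):
        h = stage(h)
        p = softmax(exit_head(h))
        if p.max() >= threshold or i == len(stages) - 1:
            return int(p.argmax()), i

# Stand-in two-stage "model": each stage transforms features, each exit maps
# features to two-class logits (purely illustrative, not a trained DNN).
stages = [lambda h: h, lambda h: h * 2.0]
exits = [lambda h: np.array([float(h.sum()), -float(h.sum())])] * 2

pred, used_exit = early_exit_infer(np.array([5.0]), stages, exits)
```

Easy inputs stop at the first exit and hard ones fall through to later stages, which is exactly the accuracy/delay trade-off the abstract describes; in an SC deployment the threshold also decides whether the intermediate features must be shipped to the edge server at all.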
