
Aerial Anchors Positioning for Reliable RSS-Based Outdoor Localization in Urban Environments

 Added by Hazem Sallouha
 Publication date 2017
Language: English





In this letter, the localization of terrestrial nodes when unmanned aerial vehicles (UAVs) are used as base stations is investigated. In particular, a novel localization scenario based on received signal strength (RSS) measurements from terrestrial nodes is introduced. In contrast to the existing literature, our analysis includes a height-dependent path loss exponent and shadowing, which results in an optimum UAV altitude for minimum localization error. Furthermore, the Cramér-Rao lower bound is derived for the estimated distance, analytically confirming the existence of an optimal UAV altitude. Our simulation results show that the localization error decreases from over 300 m when using ground-based anchors to 80 m when using UAVs flying at the optimal altitude in an urban scenario.
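The height dependence is the key mechanism here: as the UAV climbs, links become more line-of-sight, so both the path loss exponent and the shadowing variance drop, while the 3D distance to the node grows; the optimal altitude balances the two effects. Below is a minimal sketch of such a height-dependent RSS ranging model, assuming illustrative forms for eta(h) and sigma(h) and an illustrative transmit power; the paper's fitted urban parameters are not reproduced here.

```python
import numpy as np

# Sketch of height-dependent RSS ranging (illustrative parameters, not
# the paper's fitted values). Received power at distance d follows a
# log-distance path-loss model whose exponent eta(h) and shadowing
# sigma(h) depend on UAV altitude h:
#     P_rx(d) = P_tx - 10 * eta(h) * log10(d) + X,  X ~ N(0, sigma(h)^2)

def eta(h):
    """Hypothetical height-dependent path loss exponent: large near the
    ground (NLoS-dominated), approaching free space at high altitude."""
    return 2.0 + 1.5 * np.exp(-0.01 * h)

def sigma(h):
    """Hypothetical height-dependent shadowing std dev in dB."""
    return 2.0 + 6.0 * np.exp(-0.02 * h)

def estimate_distance(p_rx, p_tx, h):
    """Invert the mean path-loss model to range from a single RSS sample."""
    return 10 ** ((p_tx - p_rx) / (10 * eta(h)))

rng = np.random.default_rng(0)
p_tx, h, d_true = 20.0, 100.0, 500.0           # dBm, m, m
p_rx = p_tx - 10 * eta(h) * np.log10(d_true) + rng.normal(0, sigma(h))
print(f"true distance {d_true:.0f} m, estimate "
      f"{estimate_distance(p_rx, p_tx, h):.0f} m")
```

Sweeping h in this model reproduces the qualitative trade-off: at low altitude the large shadowing dominates the ranging error, while at very high altitude the growing distance does, leaving a minimum in between.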




Related research

Recent years have witnessed rapid growth in telecommunication (Telco) techniques from 2G to the upcoming 5G. Precise outdoor localization is important for Telco operators to manage, operate, and optimize Telco networks. Unlike GPS, Telco localization is a technique employed by Telco operators to localize outdoor mobile devices using measurement report (MR) data. When MR samples contain noisy signals (e.g., caused by Telco signal interference and attenuation), Telco localization often suffers from high errors. The main focus of this paper is therefore how to improve Telco localization accuracy via algorithms that detect and repair outlier positions with high errors. Specifically, we propose a context-aware Telco localization technique, namely RLoc, which consists of three main components: a machine-learning-based localization algorithm, a detection algorithm to find flawed samples, and a repair algorithm to replace outlier localization results with better ones (ideally ground-truth positions). Unlike most existing works, which detect and repair every flawed MR sample independently, we take into account the spatio-temporal locality of MR locations and exploit trajectory context to detect and repair flawed positions. Our experiments on real MR data sets from 2G GSM and 4G LTE Telco networks verify that RLoc can greatly improve Telco localization accuracy. For example, on a large 4G MR data set, RLoc achieves a median error of 32.2 meters, around 17.4% better than the state of the art.
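As a rough illustration of the detect-and-repair idea, the sketch below flags a localization fix as an outlier when the speed implied by both of its neighboring fixes is physically implausible, and repairs it by interpolating between trusted neighbors. The speed threshold and the interpolation-based repair are assumptions for illustration; RLoc's actual detector and repairer are learned and context-aware.

```python
import numpy as np

def detect_and_repair(times, positions, v_max=40.0):
    """Flag fixes whose implied speed to both neighbors exceeds v_max
    (m/s) and repair them by linear interpolation between neighbors."""
    pos = positions.astype(float)
    for i in range(1, len(pos) - 1):
        v_in = np.linalg.norm(pos[i] - pos[i - 1]) / (times[i] - times[i - 1])
        v_out = np.linalg.norm(pos[i + 1] - pos[i]) / (times[i + 1] - times[i])
        if v_in > v_max and v_out > v_max:     # spatio-temporal outlier
            w = (times[i] - times[i - 1]) / (times[i + 1] - times[i - 1])
            pos[i] = (1 - w) * pos[i - 1] + w * pos[i + 1]
    return pos

t = np.array([0.0, 1.0, 2.0, 3.0])                            # seconds
p = np.array([[0.0, 0.0], [10.0, 0.0], [500.0, 500.0], [30.0, 0.0]])
print(detect_and_repair(t, p))                 # third fix is pulled back
```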
Visual localization and mapping is a crucial capability for addressing many challenges in mobile robotics, constituting a robust, accurate, and cost-effective approach to local and global pose estimation within prior maps. Yet in highly dynamic environments, such as crowded city streets, problems arise because major parts of the image can be covered by dynamic objects. Consequently, visual odometry pipelines often diverge, and localization systems malfunction because detected features are not consistent with the precomputed 3D model. In this work, we present an approach to automatically detect dynamic object instances to improve the robustness of vision-based localization and mapping in crowded environments. By training a convolutional neural network with a combination of synthetic and real-world data, dynamic object instance masks are learned in a semi-supervised way. The real-world data can be collected with a standard camera and requires minimal further post-processing. Our experiments show that a wide range of dynamic objects can be reliably detected using the presented method. Promising performance is demonstrated on our own as well as publicly available datasets, which also shows the generalization capabilities of this approach.
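A hedged sketch of how such instance masks are typically consumed downstream: features falling on pixels predicted as dynamic are discarded before pose estimation. The mask here is a dummy array standing in for the network's output, and the filtering function is a hypothetical helper, not the paper's code.

```python
import numpy as np

def filter_static_features(keypoints, dynamic_mask):
    """Keep only keypoints that do not fall on a predicted dynamic object.
    keypoints: (N, 2) array of (u, v) pixel coordinates.
    dynamic_mask: (H, W) boolean array, True on dynamic instances."""
    u = keypoints[:, 0].astype(int)
    v = keypoints[:, 1].astype(int)
    return keypoints[~dynamic_mask[v, u]]

mask = np.zeros((480, 640), dtype=bool)
mask[100:300, 200:400] = True                  # e.g. a detected pedestrian
kps = np.array([[50, 50], [250, 150], [600, 400]])
print(filter_static_features(kps, mask))       # drops the masked feature
```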
Decentralized deployment of drone swarms usually relies on inter-agent communication or visual markers that are mounted on the vehicles to simplify their mutual detection. This letter proposes a vision-based detection and tracking algorithm that enables groups of drones to navigate without communication or visual markers. We employ a convolutional neural network to detect and localize nearby agents onboard the quadcopters in real-time. Rather than manually labeling a dataset, we automatically annotate images to train the neural network using background subtraction by systematically flying a quadcopter in front of a static camera. We use a multi-agent state tracker to estimate the relative positions and velocities of nearby agents, which are subsequently fed to a flocking algorithm for high-level control. The drones are equipped with multiple cameras to provide omnidirectional visual inputs. The camera setup ensures the safety of the flock by avoiding blind spots regardless of the agent configuration. We evaluate the approach with a group of three real quadcopters that are controlled using the proposed vision-based flocking algorithm. The results show that the drones can safely navigate in an outdoor environment despite substantial background clutter and difficult lighting conditions. The source code, image dataset, and trained detection model are available at https://github.com/lis-epfl/vswarm.
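The automatic annotation step can be sketched as follows, assuming a static grayscale camera: the per-pixel temporal median serves as the background model, and thresholded deviations from it yield one bounding box per frame. The threshold value and the synthetic frames are illustrative assumptions, not the paper's settings.

```python
import numpy as np

def annotate(frames, thresh=30.0):
    """frames: (T, H, W) grayscale stack from a static camera.
    Returns a bounding box (u_min, v_min, u_max, v_max) per frame,
    or None when no pixel deviates from the background enough."""
    background = np.median(frames, axis=0)     # static-scene background
    boxes = []
    for f in frames:
        fg = np.abs(f - background) > thresh   # foreground = flying drone
        vs, us = np.nonzero(fg)
        boxes.append(None if us.size == 0 else
                     (us.min(), vs.min(), us.max(), vs.max()))
    return boxes

frames = np.zeros((5, 48, 64))
frames[2, 10:20, 30:40] = 255.0                # drone visible in frame 2 only
print(annotate(frames))                        # box on frame 2, None elsewhere
```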
Reliable and accurate localization is crucial for mobile autonomous systems. Pole-like objects, such as traffic signs, poles, and lamps, are ideal landmarks for localization in urban environments due to their local distinctiveness and long-term stability. In this paper, we present a novel, accurate, and fast pole extraction approach that runs online with low computational demands, so its output can feed a localization system. Our method performs all computations directly on range images generated from 3D LiDAR scans, which avoids processing the 3D point cloud explicitly and enables fast pole extraction for each scan. We test the proposed pole extraction and localization approach on different datasets with different LiDAR scanners, weather conditions, routes, and seasonal changes. The experimental results show that our approach outperforms other state-of-the-art approaches while running online without a GPU. In addition, we release our pole dataset to the public for evaluating the performance of pole extractors, along with the implementation of our approach.
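For context, the sketch below shows a common way to build the range-image representation such a method operates on: each LiDAR point is projected to a pixel indexed by its yaw and pitch angles, keeping the closest range per pixel. The resolution and vertical field of view are illustrative values for a 64-beam scanner, not parameters taken from the paper.

```python
import numpy as np

def to_range_image(points, width=900, height=64, fov_up=3.0, fov_down=-25.0):
    """points: (N, 3) array of x, y, z in the sensor frame.
    Returns a (height, width) range image (inf where no return)."""
    r = np.linalg.norm(points, axis=1)
    yaw = np.arctan2(points[:, 1], points[:, 0])
    pitch = np.arcsin(points[:, 2] / r)
    u = ((0.5 * (1.0 - yaw / np.pi)) * width).astype(int) % width
    fov = np.radians(fov_up - fov_down)
    v = ((np.radians(fov_up) - pitch) / fov * height).astype(int)
    v = np.clip(v, 0, height - 1)
    img = np.full((height, width), np.inf)
    np.minimum.at(img, (v, u), r)              # keep closest return per pixel
    return img
```

On such an image, pole candidates appear as narrow vertical runs of small range framed by much larger background ranges, which is what makes per-scan extraction cheap compared to operating on the raw point cloud.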
The accuracy of smartphone-based positioning methods using WiFi usually suffers from ranging errors caused by non-line-of-sight (NLOS) conditions. Previous research usually exploits several statistical features from a long time series (hundreds of samples) of WiFi received signal strength (RSS) or WiFi round-trip time (RTT) to achieve high identification accuracy. However, the long time series, or large sample size, leads to high power and time consumption for data collection in both training and testing. It is also detrimental to the user experience, as the wait to gather enough samples is long. This paper therefore proposes a new real-time NLOS/LOS identification method for smartphone-based indoor positioning systems using WiFi RTT and RSS. Based on our extensive analysis of RSS and RTT features, a machine-learning method using a random forest was chosen and developed to separate samples under NLOS and LOS conditions. Experiments in different environments show that our method achieves a discrimination accuracy of about 94% with a sample size of 10. Given the theoretically shortest WiFi ranging interval of 100 ms on RTT-enabled smartphones, our algorithm provides the shortest latency, 1 s, to obtain a test result among all state-of-the-art methods.
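A minimal sketch of a window-based random-forest NLOS/LOS classifier in that spirit, assuming simple summary statistics over 10-sample windows and synthetic training data; the paper's actual feature set is derived from its analysis of real RTT/RSS measurements.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def window_features(rtt, rss):
    """Summarize a 10-sample window of RTT (ns) and RSS (dBm) readings."""
    return [rtt.mean(), rtt.std(), rss.mean(), rss.std(),
            rss.max() - rss.min()]

# Synthetic training data: NLOS windows are assumed to show longer RTT,
# weaker RSS, and more spread than LOS windows (illustrative only).
rng = np.random.default_rng(0)
X, y = [], []
for label, rtt_mu, rss_mu, spread in [(0, 50, -55, 2), (1, 90, -75, 8)]:
    for _ in range(200):                       # 0 = LOS, 1 = NLOS
        rtt = rng.normal(rtt_mu, spread, 10)
        rss = rng.normal(rss_mu, spread, 10)
        X.append(window_features(rtt, rss))
        y.append(label)

clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
probe = window_features(rng.normal(50, 2, 10), rng.normal(-55, 2, 10))
print(clf.predict([probe]))                    # expected: [0] (LOS)
```

With a 100 ms ranging interval, a 10-sample window is exactly what yields the 1 s decision latency quoted above.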