
Deep Samplable Observation Model for Global Localization and Kidnapping

Added by Runjian Chen
Publication date: 2020
Language: English





Global localization and kidnapping are two challenging problems in robot localization. The popular Monte Carlo Localization (MCL) method addresses them by iteratively updating a set of particles with a sampling-weighting loop. Sampling is decisive for the performance of MCL [1], yet traditional MCL can only sample from a uniform distribution over the state space. Although variants of MCL propose different sampling models, they fail to provide an accurate distribution or to generalize across scenes. To better deal with these problems, we present a distribution proposal model named Deep Samplable Observation Model (DSOM). DSOM takes a map and a 2D laser scan as inputs and outputs a conditional multimodal probability distribution over the pose, focusing the samples on regions of higher likelihood. With such samples, convergence is expected to be both more effective and more efficient. Since the learning-based sampling model may occasionally fail to capture the true pose, we further propose Adaptive Mixture MCL (AdaM MCL), which deploys a trust mechanism to adaptively select an updating mode for each particle and thereby tolerate such failures. Equipped with DSOM, AdaM MCL achieves more accurate estimation, faster convergence, and better scalability than previous methods in both synthetic and real scenes. Even in real environments with long-term changes, AdaM MCL is able to localize the robot using a DSOM trained only on simulated observations from a SLAM map or a blueprint map.
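As a rough illustration of the sampling-weighting loop and the adaptive-mixture idea described above, the following Python sketch mixes importance resampling with draws from a learned proposal. Everything here is an assumption for illustration: `dsom_sample` stands in for the real DSOM network (which conditions on the map and scan), `observation_likelihood` replaces proper map ray-casting, and the fixed `trust` split simplifies AdaM MCL's per-particle mode selection.

```python
# Minimal sketch of a sampling-weighting loop with a learned proposal.
# All models below are toy stand-ins, not the actual DSOM or AdaM MCL.
import numpy as np

rng = np.random.default_rng(0)

def dsom_sample(n):
    # Hypothetical learned proposal p(pose | map, scan):
    # here just a Gaussian around a fixed guess (x, y, theta).
    return rng.normal([2.0, 3.0, 0.1], [0.5, 0.5, 0.2], size=(n, 3))

def observation_likelihood(poses):
    # Placeholder likelihood; a real system would ray-cast the map.
    true_pose = np.array([2.0, 3.0, 0.0])
    d = np.linalg.norm(poses - true_pose, axis=1)
    return np.exp(-0.5 * (d / 0.7) ** 2)

def adam_mcl_step(particles, trust=0.5):
    # Weighting: score every particle against the current observation.
    w = observation_likelihood(particles)
    w /= w.sum()
    # Adaptive mixture, simplified to a fixed split: keep a trusted
    # fraction via importance resampling, redraw the rest from the proposal.
    n = len(particles)
    n_keep = int(trust * n)
    kept = particles[rng.choice(n, size=n_keep, p=w)]
    fresh = dsom_sample(n - n_keep)
    return np.vstack([kept, fresh])

# Start from the uniform distribution traditional MCL is limited to.
particles = rng.uniform([-10, -10, -np.pi], [10, 10, np.pi], size=(500, 3))
for _ in range(10):
    particles = adam_mcl_step(particles)
print("pose estimate (x, y, theta):", particles.mean(axis=0).round(2))
```

In the paper's actual system, the trust mechanism decides per particle which updating mode to apply, rather than using a fixed ratio as above.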




Related Research

Localization is a crucial capability for mobile robots and autonomous cars. In this paper, we address learning an observation model for Monte Carlo localization using 3D LiDAR data. We propose a novel, neural-network-based observation model that computes the expected overlap of two 3D LiDAR scans. The model predicts the overlap and yaw-angle offset between the current sensor reading and virtual frames generated from a pre-built map. We integrate this observation model into a Monte Carlo localization framework and test it on urban datasets collected with a car in different seasons. The experiments presented in this paper illustrate that our method can reliably localize a vehicle in typical urban environments. We furthermore compare against a beam-end-point model and a histogram-based method, showing that our method achieves superior global localization performance with fewer particles.
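A minimal sketch of how such a predicted overlap could serve as a particle weight, under stated assumptions: `predict_overlap` below is a hypothetical stand-in for the neural network (here a cheap cosine-similarity proxy), and the scan and virtual-frame data are random placeholders.

```python
# Sketch: predicted scan overlap in [0, 1] used as an MCL particle weight.
import numpy as np

def predict_overlap(scan, virtual_frame):
    # Hypothetical stand-in for the overlap network: cosine similarity
    # between range vectors, clamped to [0, 1].
    num = float(np.dot(scan, virtual_frame))
    den = float(np.linalg.norm(scan) * np.linalg.norm(virtual_frame)) + 1e-9
    return max(0.0, num / den)

def weigh_particles(scan, virtual_frames):
    # One virtual frame is rendered from the map at each particle's pose;
    # the predicted overlap becomes the (normalized) particle weight.
    w = np.array([predict_overlap(scan, vf) for vf in virtual_frames])
    return w / (w.sum() + 1e-9)

rng = np.random.default_rng(0)
scan = rng.random(360)                                  # toy LiDAR reading
virtual_frames = [rng.random(360) for _ in range(100)]  # one per particle
weights = weigh_particles(scan, virtual_frames)
print("most likely particle:", int(weights.argmax()))
```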
Qin Shi, Xiaowei Cui, Sihao Zhao (2019)
High-accuracy absolute localization for a team of vehicles is essential for accomplishing various kinds of tasks. As a promising approach, collaborative localization fuses individual motion measurements and inter-vehicle measurements to collaboratively estimate the states. In this paper, we focus on range-only collaborative localization, which restricts the inter-vehicle measurements to ranging measurements. We first investigate the observability properties of the system and derive that, to achieve bounded localization errors, two vehicles must remain static, acting like external infrastructure. Guided by the observability analysis, we then propose a range-only collaborative localization system that categorizes the ground vehicles into two static vehicles and a set of dynamic vehicles. The vehicles are connected by a UWB network capable of both inter-vehicle ranging and communication. Simulation results validate the observability analysis and demonstrate that collaborative localization achieves higher accuracy when utilizing the inter-vehicle measurements. Extensive experiments are performed with teams of 3 and 5 vehicles. The real-world results illustrate that our proposed system enables accurate and real-time estimation of all vehicles' absolute poses.
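To make the range-fusion step concrete, here is a minimal sketch of a single EKF update that fuses one inter-vehicle UWB range into a dynamic vehicle's position estimate, with a static vehicle acting as a known anchor. The state layout, noise values, and function name are illustrative assumptions, not the paper's actual filter.

```python
# Sketch: EKF update of a dynamic vehicle's 2D position from one
# range measurement to a static vehicle (known anchor position).
import numpy as np

def ekf_range_update(x, P, anchor, z, r_std=0.1):
    dx = x - anchor
    pred = float(np.linalg.norm(dx))     # predicted range h(x)
    H = (dx / pred).reshape(1, 2)        # Jacobian dh/dx
    S = H @ P @ H.T + r_std ** 2         # innovation covariance (1x1)
    K = P @ H.T / S                      # Kalman gain (2x1)
    x = x + (K * (z - pred)).ravel()     # state correction
    P = (np.eye(2) - K @ H) @ P          # covariance update
    return x, P

x = np.array([1.0, 1.0])       # prior position of the dynamic vehicle
P = np.eye(2)                  # prior covariance
anchor = np.array([0.0, 0.0])  # static vehicle (infrastructure-like)
x, P = ekf_range_update(x, P, anchor, z=1.5)
print("posterior position:", x.round(3))
```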
Qin Shi, Xiaowei Cui, Sihao Zhao (2019)
Spatiotemporal information plays a crucial role in a multi-agent system (MAS). However, for a highly dynamic and dense MAS in unknown environments, estimating its spatiotemporal states is a difficult problem. In this paper, we present BLAS: a wireless broadcast relative localization and clock synchronization system that addresses these challenges. Our BLAS system exploits a broadcast architecture, under which a MAS is categorized into parent agents that broadcast wireless packets and child agents that are passive receivers, to reduce the number of packets required among agents for relative localization and clock synchronization. We first propose an asynchronous broadcasting and passively receiving (ABPR) protocol. The protocol schedules the broadcasts of parent agents using a distributed time division multiple access (D-TDMA) scheme and delivers the inter-agent information used for joint relative localization and clock synchronization. We then present distributed state estimation approaches in which parent and child agents utilize the broadcast inter-agent information to jointly estimate the spatiotemporal states. Simulations and real-world experiments based on ultra-wideband (UWB) illustrate that BLAS can not only enable accurate, high-frequency, and real-time estimation of relative position and clock parameters but also theoretically support an unlimited number of agents.
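A minimal sketch of the broadcast-scheduling idea: parent agents take turns transmitting in TDMA slots that every agent can compute locally, while child agents only listen. The slot length, packet fields, and round-robin rule below are illustrative assumptions, not the actual ABPR protocol.

```python
# Sketch: round-robin TDMA slots for parent agents; children only receive.
from dataclasses import dataclass

SLOT_MS = 5  # assumed slot duration

@dataclass
class Packet:
    sender: int        # parent agent id
    tx_time_ms: int    # sender's local clock at transmission
    position: tuple    # broadcast state used by receivers

def slot_owner(t_ms, parents):
    # Every agent can evaluate this locally, so no central scheduler is
    # needed: the schedule is implied by the shared slot index.
    return parents[(t_ms // SLOT_MS) % len(parents)]

parents = [0, 1, 2]  # parent agent ids
for t in range(0, 30, SLOT_MS):
    pkt = Packet(sender=slot_owner(t, parents), tx_time_ms=t,
                 position=(0.0, 0.0))
    print(f"t={t:2d} ms: parent {pkt.sender} broadcasts")
```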
Indoor localization for autonomous micro aerial vehicles (MAVs) requires specific localization techniques, since the Global Positioning System (GPS) is usually not available. We present an efficient onboard computer vision approach that estimates 2D positions of an MAV in real time. This global localization system does not suffer from error accumulation over time and uses a k-Nearest Neighbors (k-NN) algorithm to predict positions based on textons, small characteristic image patches that capture the texture of an environment. A particle filter aggregates the estimates and resolves positional ambiguities. To predict the performance of the approach in a given setting, we developed an evaluation technique that compares environments and identifies critical areas within them. We conducted flight tests to demonstrate the applicability of our approach. The algorithm has a localization accuracy of approximately 0.6 m on a 5 m × 5 m area at a runtime of 32 ms on board an MAV. Based on random sampling, its computational effort is scalable to different platforms, trading off speed and accuracy.
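A minimal sketch of the texton-and-k-NN idea: each image is summarized as a histogram of nearest-texton labels over small patches, and a position is predicted by averaging the positions of the k closest stored histograms. The dictionary size, patch size, and data below are illustrative assumptions.

```python
# Sketch: texton histograms + k-NN position regression (toy data).
import numpy as np

rng = np.random.default_rng(1)

def texton_histogram(image, textons):
    # Label every 3x3 patch with its nearest texton, then histogram labels.
    h, w = image.shape
    patches = np.array([image[i:i + 3, j:j + 3].ravel()
                        for i in range(h - 2) for j in range(w - 2)])
    labels = np.linalg.norm(patches[:, None, :] - textons[None],
                            axis=2).argmin(axis=1)
    return np.bincount(labels, minlength=len(textons)).astype(float)

def knn_position(query, db_hists, db_positions, k=3):
    # Average the positions of the k most similar stored histograms.
    nearest = np.linalg.norm(db_hists - query, axis=1).argsort()[:k]
    return db_positions[nearest].mean(axis=0)

textons = rng.random((8, 9))                    # 8 textons, 3x3 patches
db_hists = np.stack([texton_histogram(rng.random((16, 16)), textons)
                     for _ in range(50)])
db_positions = rng.uniform(0, 5, size=(50, 2))  # labels in a 5 m x 5 m area
query = texton_histogram(rng.random((16, 16)), textons)
print("predicted 2D position:", knn_position(query, db_hists, db_positions))
```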
Active localization is the problem of generating robot actions that maximally disambiguate the robot's pose within a reference map. Traditional approaches use an information-theoretic criterion for action selection together with hand-crafted perceptual models. In this work, we propose an end-to-end differentiable method for learning to take informative actions that is trainable entirely in simulation and then transferable to real robot hardware with zero refinement. The system is composed of two modules: a convolutional neural network for perception and a planning module trained with deep reinforcement learning. We introduce a multi-scale approach to the learned perceptual model, since the accuracy needed for action selection with reinforcement learning is much lower than the accuracy needed for robot control. We demonstrate that the resulting system outperforms systems that use the traditional approach for either perception or planning. We also demonstrate our approach's robustness to different map configurations and other nuisance parameters through the use of domain randomization in training. The code is compatible with the OpenAI Gym framework as well as the Gazebo simulator.
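For contrast with the learned planner, here is a minimal sketch of the information-theoretic baseline mentioned above: greedily pick the action whose predicted posterior belief has the lowest entropy. The discrete belief representation and toy transition model are illustrative assumptions.

```python
# Sketch: entropy-minimizing action selection over a discrete belief.
import numpy as np

def entropy(belief):
    b = belief[belief > 0]
    return float(-(b * np.log(b)).sum())

def predict(belief, action):
    # Toy model: each action sharpens the belief by a different amount.
    b = belief.copy()
    b[action] *= action + 2.0
    return b / b.sum()

def select_action(belief, actions):
    # Greedy information-theoretic criterion: minimize predicted entropy.
    return min(actions, key=lambda a: entropy(predict(belief, a)))

belief = np.full(4, 0.25)  # uniform belief over 4 candidate poses
print("chosen action:", select_action(belief, [0, 1, 2, 3]))
```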
