
POMP++: Pomcp-based Active Visual Search in unknown indoor environments

Published by: Francesco Giuliari
Publication date: 2021
Research field: Informatics Engineering
Paper language: English





In this paper we focus on the problem of learning online an optimal policy for Active Visual Search (AVS) of objects in unknown indoor environments. We propose POMP++, a planning strategy that introduces a novel formulation on top of the classic Partially Observable Monte Carlo Planning (POMCP) framework to allow training-free online policy learning in unknown environments. We present a new belief reinvigoration strategy that allows POMCP to be used with a dynamically growing state space, addressing the online generation of the floor map. We evaluate our method on two public benchmark datasets: AVD, acquired by real robotic platforms, and Habitat ObjectNav, rendered from real 3D scene scans, achieving the best success rate with an improvement of >10% over the state-of-the-art methods.
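The belief reinvigoration idea above can be sketched with a simple particle belief: when newly mapped floor cells enlarge the state space, a fraction of particles is resampled into the new cells so the target's possible locations cover the freshly discovered area. This is a minimal illustration, not the paper's implementation; the function name, the injection fraction, and the (x, y) cell representation are assumptions.

```python
import random

def reinvigorate_belief(particles, new_cells, inject_frac=0.1):
    """Belief reinvigoration sketch: when newly observed map cells enlarge
    the state space, replace a fraction of belief particles with particles
    drawn uniformly from the new cells, so the belief assigns mass to the
    freshly discovered area (names and fraction are illustrative)."""
    n_inject = max(1, int(len(particles) * inject_frac))
    survivors = random.sample(particles, len(particles) - n_inject)
    injected = [random.choice(new_cells) for _ in range(n_inject)]
    return survivors + injected

# hypothetical belief over target location as (x, y) cells
belief = [(0, 0)] * 90 + [(1, 0)] * 10
new_area = [(2, 0), (2, 1)]   # cells added after the map grew online
belief = reinvigorate_belief(belief, new_area)
```

The particle count stays constant, so the reinvigorated belief plugs back into the same POMCP-style simulation loop.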




Read also

Qin Shi, Xiaowei Cui, Wei Li (2019)
Navigation applications relying on the Global Navigation Satellite System (GNSS) are limited in indoor environments and GNSS-denied outdoor terrains such as dense urban or forests. In this paper, we present a novel accurate, robust and low-cost GNSS-independent navigation system, which is composed of a monocular camera and Ultra-wideband (UWB) transceivers. Visual techniques have gained excellent results when computing the incremental motion of the sensor, and UWB methods have proved to provide promising localization accuracy due to the high time resolution of the UWB ranging signals. However, the monocular visual techniques with scale ambiguity are not suitable for applications requiring metric results, and UWB methods assume that the positions of the UWB transceiver anchors are pre-calibrated and known, thus precluding their application in unknown and challenging environments. To this end, we advocate leveraging the monocular camera and UWB to create a map of visual features and UWB anchors. We propose a visual-UWB Simultaneous Localization and Mapping (SLAM) algorithm which tightly combines visual and UWB measurements to form a joint non-linear optimization problem on a Lie manifold. The 6 Degrees of Freedom (DoF) state of the vehicles and the map are estimated by minimizing the UWB ranging errors and landmark reprojection errors. Our navigation system starts with an exploratory task which performs the real-time visual-UWB SLAM to obtain the global map, then the navigation task by reusing this global map. The tasks can be performed by different vehicles in terms of equipped sensors and payload capability in a heterogeneous team. We validate our system on the public datasets, achieving typical centimeter accuracy and 0.1% scale error.
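The UWB term of the joint cost described above is just a stack of ranging residuals: measured range minus the distance from the current pose estimate to each anchor. A minimal 2-D sketch under assumed anchor positions follows (the full system also stacks visual reprojection errors and optimizes the 6-DoF state):

```python
import math

def uwb_residuals(pose_xy, anchors, ranges):
    """UWB ranging residuals: measured range minus the Euclidean distance
    from the current 2-D pose estimate to each anchor. These residuals
    form the UWB term of the joint non-linear least-squares cost
    (2-D simplification; anchor positions here are hypothetical)."""
    return [r - math.dist(pose_xy, a) for a, r in zip(anchors, ranges)]

# hypothetical anchors and noise-free ranges measured from the true pose
anchors = [(0.0, 0.0), (4.0, 0.0), (0.0, 3.0)]
true_pose = (1.0, 2.0)
ranges = [math.dist(true_pose, a) for a in anchors]
res = uwb_residuals(true_pose, anchors, ranges)  # ~zero at the true pose
```

An optimizer would perturb the pose (and, in the full problem, the anchor and landmark positions) to drive these residuals toward zero.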
We present an active visual search model for finding objects in unknown environments. The proposed algorithm guides the robot towards the sought object using the relevant stimuli provided by the visual sensors. Existing search strategies are either purely reactive or use simplified sensor models that do not exploit all the visual information available. In this paper, we propose a new model that actively extracts visual information via visual attention techniques and, in conjunction with a non-myopic decision-making algorithm, leads the robot to search more relevant areas of the environment. The attention module couples both top-down and bottom-up attention models enabling the robot to search regions with higher importance first. The proposed algorithm is evaluated on a mobile robot platform in a 3D simulated environment. The results indicate that the use of visual attention significantly improves search, but the degree of improvement depends on the nature of the task and the complexity of the environment. In our experiments, we found that performance enhancements of up to 42% in structured and 38% in highly unstructured cluttered environments can be achieved using visual attention mechanisms.
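The coupling of top-down and bottom-up attention can be sketched as a weighted fusion of two saliency maps into one priority map, from which the robot inspects the highest-scoring region first. The weighting scheme and the tiny 2x2 maps below are illustrative assumptions, not the paper's model:

```python
def combine_saliency(bottom_up, top_down, w=0.5):
    """Fuse a bottom-up saliency map (image-driven conspicuity) with a
    top-down map (object-specific relevance) into one priority map and
    return the most salient cell (weights are illustrative)."""
    fused = [[w * b + (1 - w) * t for b, t in zip(rb, rt)]
             for rb, rt in zip(bottom_up, top_down)]
    best = max(((i, j) for i in range(len(fused))
                for j in range(len(fused[0]))),
               key=lambda ij: fused[ij[0]][ij[1]])
    return fused, best

bu = [[0.1, 0.9], [0.2, 0.1]]   # e.g. strong edge response top-right
td = [[0.0, 0.8], [0.1, 0.0]]   # e.g. prior relevance for the sought object
_, region = combine_saliency(bu, td)  # -> (0, 1): search top-right first
```

A non-myopic planner would then trade the fused score off against travel cost rather than greedily visiting the single best cell.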
Real-world autonomous vehicles often operate in a priori unknown environments. Since most of these systems are safety-critical, it is important to ensure they operate safely in the face of environment uncertainty, such as unseen obstacles. Current safety analysis tools enable autonomous systems to reason about safety given full information about the state of the environment a priori. However, these tools do not scale well to scenarios where the environment is being sensed in real time, such as during navigation tasks. In this work, we propose a novel, real-time safety analysis method based on Hamilton-Jacobi reachability that provides strong safety guarantees despite environment uncertainty. Our safety method is planner-agnostic and provides guarantees for a variety of mapping sensors. We demonstrate our approach in simulation and in hardware to provide safety guarantees around a state-of-the-art vision-based, learning-based planner.
Active Search and Tracking for search and rescue missions or collaborative mobile robotics relies on the actuation of a sensing platform to detect and localize a target. In this paper we focus on visually detecting a radio-emitting target with an aerial robot equipped with a radio receiver and a camera. Visual-based tracking provides high accuracy, but the directionality of the sensing domain may require long search times before detecting the target. Conversely, radio signals have larger coverage, but lower tracking accuracy. Thus, we design a Recursive Bayesian Estimation scheme that uses camera observations to refine radio measurements. To regulate the camera pose, we design an optimal controller whose cost function is built upon a probabilistic map. Theoretical results support the proposed algorithm, while numerical analyses show higher robustness and efficiency with respect to visual and radio-only baselines.
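The refinement of radio measurements by camera observations is, at its core, a sequence of recursive Bayesian updates: multiply the belief over target locations by each sensor's likelihood and renormalize. A minimal discrete sketch, with a hypothetical 4-cell world and made-up likelihoods (broad for radio, sharp for the camera):

```python
def bayes_update(belief, likelihood):
    """One recursive Bayesian update: multiply the prior belief over
    target cells by the observation likelihood and renormalise."""
    post = [b * l for b, l in zip(belief, likelihood)]
    s = sum(post)
    return [p / s for p in post]

# hypothetical 4-cell world: a coarse radio likelihood first, then a
# sharper camera likelihood once the target enters the field of view
belief = [0.25, 0.25, 0.25, 0.25]
belief = bayes_update(belief, [0.1, 0.4, 0.4, 0.1])    # radio: wide coverage
belief = bayes_update(belief, [0.05, 0.1, 0.8, 0.05])  # camera: high accuracy
# the posterior now concentrates on cell 2
```

The radio update narrows the search to two candidate cells; the camera update then resolves the ambiguity, mirroring the coverage-versus-accuracy trade-off described above.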
This paper presents an algorithmic framework for the distributed on-line source seeking, termed DoSS, with a multi-robot system in an unknown dynamical environment. Our algorithm, building on a novel concept called dummy confidence upper bound (D-UCB), integrates both estimation of the unknown environment and task planning for the multiple robots simultaneously, and as a result, drives the team of robots to a steady state in which multiple sources of interest are located. Unlike the standard UCB algorithm in the context of multi-armed bandits, the introduction of D-UCB significantly reduces the computational complexity in solving subproblems of the multi-robot task planning. This also enables our DoSS algorithm to be implementable in a distributed on-line manner. The performance of the algorithm is theoretically guaranteed by showing a sub-linear upper bound of the cumulative regret. Numerical results on a real-world methane emission seeking problem are also provided to demonstrate the effectiveness of the proposed algorithm.
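For context on what D-UCB modifies, the standard UCB rule it departs from selects the candidate maximising empirical mean plus an exploration bonus that shrinks with visit count. The sketch below shows that baseline rule only (not the paper's D-UCB variant); the region readings and counts are made up:

```python
import math

def ucb_select(means, counts, t, c=2.0):
    """Standard UCB index: pick the candidate region maximising the
    empirical mean plus an exploration bonus sqrt(c * ln t / n) that
    decays as visits accumulate (D-UCB modifies this baseline to cut
    planning cost; the constant c is illustrative)."""
    scores = [m + math.sqrt(c * math.log(t) / n)
              for m, n in zip(means, counts)]
    return scores.index(max(scores))

# hypothetical emission readings at three candidate regions
means = [0.2, 0.5, 0.4]   # average sensed concentration per region
counts = [10, 10, 2]      # region 2 is under-explored
choice = ucb_select(means, counts, t=22)  # -> 2: explore the uncertain region
```

Even though region 1 has the best empirical mean, the bonus steers the robot to the under-sampled region 2, which is the exploration behaviour the regret bound relies on.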