This paper presents a novel architecture for person-following robots that uses active search. The proposed system can be applied in real time on general mobile robots to learn a person's features, detect and track that person, and finally navigate toward them. Successful person-following requires that perception, planning, and robot behavior be properly integrated. Toward this end, an active target-searching capability is proposed, including prediction of the target's motion and navigation to vantage locations for re-finding the person. This capability aims to improve the robustness and efficiency of tracking and following people under dynamic conditions such as crowded environments. A multi-modal sensing approach that fuses an RGB-D sensor and a laser scanner is pursued to robustly track and identify human targets. Bayesian filtering is used to keep track of the person, and a regression algorithm is investigated to predict people's trajectories. To make the robot autonomous, the proposed framework relies on a behavior-tree structure. Real-time experiments on the Toyota Human Support Robot (HSR) demonstrate that the proposed architecture generates fast, efficient person-following behavior.
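The abstract names its components but not their implementation. As a minimal sketch of the Bayesian-filtering step, the snippet below runs a constant-velocity Kalman filter over a person's 2D position, taking fused RGB-D/laser detections as measurements; the class name `PersonTracker`, the state layout, and all noise parameters are illustrative assumptions rather than details from the paper.

```python
# Hedged sketch of the Bayesian-filtering component: a constant-velocity
# Kalman filter over a person's 2D position. State layout and noise
# parameters are illustrative assumptions, not the authors' values.
import numpy as np

class PersonTracker:
    """Tracks one person as state [x, y, vx, vy] from fused 2D detections."""

    def __init__(self, dt=0.1, process_var=0.5, meas_var=0.05):
        self.x = np.zeros(4)                      # state estimate
        self.P = np.eye(4)                        # state covariance
        self.F = np.eye(4)                        # constant-velocity motion model
        self.F[0, 2] = self.F[1, 3] = dt
        self.H = np.zeros((2, 4))                 # we observe position only
        self.H[0, 0] = self.H[1, 1] = 1.0
        self.Q = process_var * np.eye(4)          # process noise
        self.R = meas_var * np.eye(2)             # measurement noise

    def predict(self):
        """Propagate the belief one step; returns the predicted position."""
        self.x = self.F @ self.x
        self.P = self.F @ self.P @ self.F.T + self.Q
        return self.x[:2]

    def update(self, z):
        """z: fused (x, y) detection from RGB-D + laser, in the map frame."""
        y = np.asarray(z) - self.H @ self.x       # innovation
        S = self.H @ self.P @ self.H.T + self.R
        K = self.P @ self.H.T @ np.linalg.inv(S)  # Kalman gain
        self.x = self.x + K @ y
        self.P = (np.eye(4) - K @ self.H) @ self.P
        return self.x
```

The `predict` output is what a trajectory-regression or vantage-point module would consume when the person is temporarily occluded.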
In this work, we consider the problem of searching for people in an unconstrained environment using natural language descriptions. Specifically, we study how to systematically design an algorithm to effectively acquire descriptions from humans. An algori…
Most works on person re-identification (ReID) take advantage of large backbone networks such as ResNet, which are designed for image classification rather than for ReID, for feature extraction. However, these backbones may not be computationally efficient…
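For context on the setup this abstract criticizes: a common ReID baseline strips the classifier head off an ImageNet-pretrained ResNet and uses the pooled feature as the person embedding. The sketch below shows that generic baseline via torchvision; it is not the network proposed in the paper.

```python
# Generic ReID baseline alluded to above: an ImageNet-pretrained ResNet-50
# backbone with its classification head removed, so the pooled 2048-d
# feature serves as the person embedding. A common baseline, not the
# architecture proposed in the paper.
import torch
import torch.nn as nn
from torchvision import models

class ReIDBaseline(nn.Module):
    def __init__(self):
        super().__init__()
        backbone = models.resnet50(weights=models.ResNet50_Weights.DEFAULT)
        backbone.fc = nn.Identity()               # drop the 1000-way classifier
        self.backbone = backbone

    def forward(self, images):
        feats = self.backbone(images)             # (N, 2048) embeddings
        return nn.functional.normalize(feats, dim=1)  # L2-normalize for cosine matching

# Usage: rank gallery crops by cosine similarity to a query crop.
model = ReIDBaseline().eval()
with torch.no_grad():
    query = model(torch.randn(1, 3, 256, 128))    # typical ReID crop size
    gallery = model(torch.randn(8, 3, 256, 128))
    scores = query @ gallery.T                    # higher = more similar
```

The computational cost the abstract objects to is visible here: every crop pays for a full classification-grade backbone.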
Selecting appropriate tools and using them when performing daily tasks is a critical capability for introducing robots into domestic applications. In previous studies, however, adaptability to target objects was limited, making it difficult to accor…
We present an active visual search model for finding objects in unknown environments. The proposed algorithm guides the robot toward the sought object using relevant stimuli provided by the visual sensors. Existing search strategies are either p…
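The abstract is cut off before the method details, so the following is only a generic active-search baseline for comparison, not the proposed model: maintain a belief grid over the target's location, greedily move to the candidate viewpoint covering the most probability mass, and down-weight viewed cells after a miss. The grid size, field of view, and miss rate are assumptions.

```python
# Generic greedy active-search baseline (not the paper's model): a belief
# grid over target locations, next-best-view selection, and a Bayes-style
# down-weighting of viewed cells when the target is not found.
import numpy as np

def expected_mass(view, prob, fov=3):
    """Probability mass inside a square field of view centered on `view`."""
    r0, c0 = view
    return prob[max(r0 - fov, 0):r0 + fov + 1,
                max(c0 - fov, 0):c0 + fov + 1].sum()

def next_best_view(candidates, prob, fov=3):
    """Greedy choice: the viewpoint covering the most remaining belief."""
    return max(candidates, key=lambda v: expected_mass(v, prob, fov))

def observe_and_update(view, prob, found, fov=3, miss_rate=0.1):
    """If the target is not seen, down-weight the viewed cells and renormalize."""
    if found:
        return prob
    r0, c0 = view
    prob[max(r0 - fov, 0):r0 + fov + 1,
         max(c0 - fov, 0):c0 + fov + 1] *= miss_rate
    return prob / prob.sum()

# One search step over a 20x20 belief grid with a few reachable viewpoints.
prob = np.full((20, 20), 1.0 / 400)
candidates = [(5, 5), (5, 15), (15, 5), (15, 15)]
view = next_best_view(candidates, prob)
prob = observe_and_update(view, prob, found=False)
```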
In many applications it is essential to impose scalable, coordinated motion control on a large group of mobile robots; such control is efficient for tasks requiring repetitive execution, such as environmental monitoring. In this paper, we design a guiding v…
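The clipped phrase "guiding v…" plausibly refers to a guiding vector field, a standard construction for scalable path following. Under that assumption, the sketch below implements the textbook field for a circular path: robots follow the rotated gradient of the path function while an error term pulls them onto the path. The gain and radius are illustrative, not values from the paper.

```python
# Hedged sketch, assuming "guiding v..." means a guiding vector field.
# Textbook construction for a circular path phi(x, y) = x^2 + y^2 - R^2 = 0:
# follow the rotated gradient (the tangent direction) while -k*phi*grad
# pulls each robot onto the path. Gain k and radius R are assumptions.
import numpy as np

def guiding_vector_field(p, R=2.0, k=1.0):
    """Unit desired-velocity direction at position p = (x, y)."""
    x, y = p
    phi = x**2 + y**2 - R**2                  # signed path-error function
    grad = np.array([2 * x, 2 * y])           # gradient of phi
    tangent = np.array([-grad[1], grad[0]])   # 90-degree rotation: along the path
    v = tangent - k * phi * grad              # converge to and circulate the circle
    norm = np.linalg.norm(v)
    return v / norm if norm > 1e-9 else tangent

# Each robot evaluates the same field at its own position, which is what
# lets the scheme scale with the number of robots.
positions = np.array([[3.0, 0.0], [0.5, 0.5], [-2.0, 1.0]])
for p in positions:
    print(p, "->", guiding_vector_field(p))
```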