
Bayesian Grasp: Robotic visual stable grasp based on prior tactile knowledge

Published by: Teng Xue
Publication date: 2019
Research field: Informatics Engineering
Paper language: English





Robotic grasp detection is a fundamental capability for intelligent manipulation in unstructured environments. Previous work mainly employed visual and tactile fusion to achieve stable grasps; however, the whole process depends heavily on regrasping, which wastes considerable time on regulation and evaluation. We propose a novel way to improve robotic grasping: by using learned tactile knowledge, a robot can achieve a stable grasp from a single image. First, we construct a prior tactile knowledge learning framework with a novel grasp quality metric, determined by measuring a grasp's resistance to external perturbations. Second, we propose a multi-phase Bayesian Grasp architecture that generates stable grasp configurations from a single RGB image based on prior tactile knowledge. Results show that this framework classifies grasp outcomes with an average accuracy of 86% on known objects and 79% on novel objects, and that the prior tactile knowledge improves the success rate by 55% over traditional vision-based strategies.
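As a rough illustration of the idea (this is not the authors' implementation; the Beta prior, the scalar visual-score interface, and all counts below are hypothetical), prior tactile outcomes can be folded into a Bayesian stability estimate for a visually scored grasp candidate:

```python
# Minimal sketch of a Bayesian stable-grasp scorer (assumed interface, not the
# paper's code): tactile trials supply a Beta prior over grasp success, and a
# visual classifier's score acts as the likelihood that updates it per grasp.
import numpy as np

class BayesianGraspScorer:
    def __init__(self, tactile_successes, tactile_failures):
        # Beta prior learned from prior tactile knowledge (hypothetical counts).
        self.alpha = 1.0 + tactile_successes
        self.beta = 1.0 + tactile_failures

    def posterior_stability(self, visual_score):
        """Combine the tactile prior mean with a visual likelihood in [0, 1]."""
        prior = self.alpha / (self.alpha + self.beta)
        # Bayes' rule for a binary stable/unstable hypothesis.
        num = visual_score * prior
        den = num + (1.0 - visual_score) * (1.0 - prior)
        return num / den

scorer = BayesianGraspScorer(tactile_successes=86, tactile_failures=14)
for s in (0.3, 0.6, 0.9):          # visual scores for candidate grasps
    print(f"visual={s:.1f} -> posterior={scorer.posterior_stability(s):.3f}")
```

A real system would replace the scalar visual score with the output of a trained grasp-quality network.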


Read also

Reliable grasp detection for target objects in complex scenes is a challenging task and a critical problem that must be solved for practical applications. At present, grasp detection locations are found by searching the feature space of the whole image, but cluttered background information impairs detection accuracy. In this paper, a robotic grasp detection algorithm named MASK-GD is proposed as a feasible solution to this problem. A MASK is a segmented image that contains only the pixels of the target object, and MASK-GD uses MASK features rather than the features of the entire scene. It has two stages: the first stage provides the MASK of the target object as the input image, and the second stage is a grasp detector based on the MASK features. Experimental results demonstrate that MASK-GD's performance is comparable with state-of-the-art grasp detection algorithms on the Cornell Grasp Dataset and the Jacquard Dataset, while performing much better in complex scenes.
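A minimal sketch of the two-stage idea (function names and the centroid "detector" below are placeholders, not MASK-GD's actual networks): masking removes background pixels before grasp detection runs, so clutter cannot influence the result:

```python
# Illustrative sketch of the MASK-GD pipeline: stage 1 yields a binary object
# mask, stage 2 runs a grasp detector on the masked image only.
import numpy as np

def apply_mask(image: np.ndarray, mask: np.ndarray) -> np.ndarray:
    """Zero out every pixel outside the target-object mask."""
    return image * mask[..., None]          # broadcast mask over channels

def detect_grasp(masked_image: np.ndarray):
    """Placeholder grasp detector returning (x, y, angle, width); a real
    system would use a trained CNN on the MASK features here."""
    ys, xs = np.nonzero(masked_image.sum(axis=-1))
    cx, cy = xs.mean(), ys.mean()           # crude: grasp at mask centroid
    return cx, cy, 0.0, 40.0

image = np.random.rand(480, 640, 3)
mask = np.zeros((480, 640), dtype=np.uint8)
mask[200:280, 300:380] = 1                  # hypothetical segmentation output
print(detect_grasp(apply_mask(image, mask)))
```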
Grasp detection that accounts for the affiliation between each grasp and its owner object in overlapping scenes is a necessary and challenging task for practical robotic grasping. In this paper, a robotic grasp detection algorithm named ROI-GD is proposed as a feasible solution to this problem based on Regions of Interest (ROIs), i.e., region proposals for objects. ROI-GD uses features from ROIs to detect grasps instead of the whole scene. It has two stages: the first stage provides ROIs in the input image, and the second stage is a grasp detector based on ROI features. We also contribute a multi-object grasp dataset, much larger than the Cornell Grasp Dataset, by labeling the Visual Manipulation Relationship Dataset. Experimental results demonstrate that ROI-GD performs much better in object-overlapping scenes while remaining comparable with state-of-the-art grasp detection algorithms on the Cornell Grasp Dataset and the Jacquard Dataset. Robotic experiments demonstrate that ROI-GD helps robots grasp the target in single-object and multi-object scenes with overall success rates of 92.5% and 83.8%, respectively.
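The per-ROI detection idea can be sketched as follows (interfaces and the stand-in grasp head are assumptions; ROI-GD itself uses learned features): each detected grasp is tagged with the index of the ROI it came from, which preserves the grasp-object affiliation:

```python
# Rough sketch of per-ROI grasp detection: grasps are detected inside each
# region proposal, so every grasp keeps its owner-object label.
import numpy as np

def detect_grasps_per_roi(feature_map, rois):
    """For each ROI (x1, y1, x2, y2), run a grasp head on its cropped
    features and tag the result with the ROI index (its owner object)."""
    results = []
    for obj_id, (x1, y1, x2, y2) in enumerate(rois):
        crop = feature_map[y1:y2, x1:x2]
        cx, cy = (x1 + x2) / 2, (y1 + y2) / 2   # stand-in grasp head
        results.append({"object": obj_id, "grasp": (cx, cy, 0.0, 30.0),
                        "score": float(crop.mean())})
    return results

features = np.random.rand(60, 80)                # downsampled feature map
rois = [(5, 5, 25, 30), (40, 10, 70, 50)]        # hypothetical proposals
for g in detect_grasps_per_roi(features, rois):
    print(g)
```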
We present a novel approach to robotic grasp planning that uses both a learned grasp proposal network and a learned 3D shape reconstruction network. Our system generates 6-DOF grasps from a single RGB-D image of the target object, which is provided as input to both networks. By using the geometric reconstruction to refine the candidate grasp produced by the grasp proposal network, our system can accurately grasp both known and unknown objects, even when the grasp location on the object is not visible in the input image. This paper presents the network architectures, training procedures, and grasp refinement method that comprise our system. Experiments demonstrate the efficacy of our system at grasping both known and unknown objects (91% success rate in a physical robot environment, 84% success rate in a simulated environment). We additionally perform ablation studies showing the benefits of combining a learned grasp proposal with geometric reconstruction, and show that our system outperforms several baselines in a grasping task.
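As a hedged sketch of the refinement step (the nearest-point snapping rule below is a simplification invented for illustration, not the paper's actual refinement method), a proposed grasp center can be projected onto the reconstructed geometry:

```python
# Sketch of grasp refinement against a reconstructed shape: the candidate
# 6-DOF grasp position is snapped to the nearest point of the reconstructed
# point cloud, so surfaces unseen in the input image can still be grasped.
import numpy as np

def refine_grasp(grasp_position, reconstructed_points):
    """Move the proposed grasp center onto the closest reconstructed surface
    point; a full system would also re-orient the approach axis."""
    d = np.linalg.norm(reconstructed_points - grasp_position, axis=1)
    return reconstructed_points[np.argmin(d)]

cloud = np.random.randn(2048, 3) * 0.05 + np.array([0.4, 0.0, 0.1])
proposal = np.array([0.45, 0.02, 0.15])          # from the proposal network
print("refined grasp center:", refine_grasp(proposal, cloud))
```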
Rotational displacement about the grasping point is a common grasp failure when an object is grasped away from its center of gravity. Tactile sensors with soft surfaces, such as GelSight sensors, can detect rotation patterns on the contact surface when the object rotates. In this work, we propose a model-based algorithm that detects those rotational patterns and measures rotational displacement using the GelSight sensor. We also integrate the rotation-detection feedback into a closed-loop regrasping framework, which detects rotational grasp failure at an early stage and drives the robot to a stable grasp pose. We validate our proposed rotation detection algorithm and grasp-regrasp system on a self-collected dataset and in online experiments, showing that our approach accurately detects rotation and increases grasp stability.
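A toy version of the rotation measurement (the marker-tracking data layout and the regrasp threshold are assumptions; the paper's model-based algorithm is more involved) can use the least-squares rotation between tracked marker positions in consecutive tactile frames:

```python
# Toy sketch of tactile rotation feedback: marker positions from two
# consecutive GelSight frames give a 2D rotation estimate via the
# Kabsch/Procrustes solution; a large angle triggers regrasping.
import numpy as np

def estimate_rotation(markers_prev, markers_curr):
    """Least-squares 2D rotation (radians) between centered marker sets."""
    p = markers_prev - markers_prev.mean(axis=0)
    q = markers_curr - markers_curr.mean(axis=0)
    u, _, vt = np.linalg.svd(p.T @ q)
    r = vt.T @ u.T
    return np.arctan2(r[1, 0], r[0, 0])

prev = np.random.rand(25, 2) * 10                # tracked marker positions
theta = np.deg2rad(8.0)                          # simulated slip rotation
rot = np.array([[np.cos(theta), -np.sin(theta)],
                [np.sin(theta),  np.cos(theta)]])
curr = (prev - prev.mean(axis=0)) @ rot.T + prev.mean(axis=0)
angle = estimate_rotation(prev, curr)
if abs(np.rad2deg(angle)) > 5.0:                 # hypothetical threshold
    print(f"rotation {np.rad2deg(angle):.1f} deg -> trigger regrasp")
```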
After a grasp has been planned, if the object orientation changes, the initial grasp may, but does not always have to, be modified to accommodate the change. For example, rotating a cylinder by any amount around its centerline does not change its geometric shape relative to the grasper. Objects that can be approximated as solids of revolution or that contain other geometric symmetries are prevalent in everyday life, and this information can be exploited to improve the efficiency of existing grasp planning models. This paper experimentally investigates changes in human-planned grasps under varied object orientations. With 13,440 recorded human grasps, our results indicate that during pick-and-place tasks with ordinary objects, stable grasps can be achieved with a small subset of grasp types, and the wrist-related parameters follow a normal distribution. Furthermore, we show that this knowledge allows faster convergence of grasp planning algorithms.
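To illustrate why a normal prior over wrist parameters can speed up planning (the distribution parameters and the stability test below are invented for the example, not taken from the paper), compare informed versus uniform sampling of candidate wrist angles:

```python
# Sketch: if human wrist angles for stable grasps follow a normal
# distribution, a planner sampling from that prior finds stable candidates
# far more often than one sampling uniformly.
import numpy as np

rng = np.random.default_rng(0)
mu, sigma = 0.0, 15.0                    # hypothetical wrist-angle prior (deg)

def sample_wrist_angles(n, informed=True):
    if informed:
        return rng.normal(mu, sigma, n)  # human-grasp-informed prior
    return rng.uniform(-90.0, 90.0, n)   # uninformed baseline

def is_stable(angle_deg):
    return abs(angle_deg) < 10.0         # toy stability criterion

for informed in (True, False):
    hits = np.mean([is_stable(a) for a in sample_wrist_angles(10000, informed)])
    print(f"informed={informed}: stable fraction = {hits:.2f}")
```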