
ROI-based Robotic Grasp Detection for Object Overlapping Scenes

Posted by Hanbo Zhang
Published: 2018
Research field: Informatics Engineering
Paper language: English





Grasp detection that considers the affiliations between grasps and their owner objects in object overlapping scenes is a necessary and challenging task for the practical use of robotic grasping approaches. In this paper, a robotic grasp detection algorithm named ROI-GD is proposed to provide a feasible solution to this problem based on Regions of Interest (ROIs), which are the region proposals for objects. ROI-GD uses features from ROIs to detect grasps instead of features from the whole scene. It has two stages: the first stage provides ROIs in the input image, and the second stage is a grasp detector based on ROI features. We also contribute a multi-object grasp dataset, which is much larger than the Cornell Grasp Dataset, by labeling the Visual Manipulation Relationship Dataset. Experimental results demonstrate that ROI-GD performs much better in object overlapping scenes while remaining comparable with state-of-the-art grasp detection algorithms on the Cornell Grasp Dataset and the Jacquard Dataset. Robotic experiments demonstrate that ROI-GD can help robots grasp the target in single-object and multi-object scenes with overall success rates of 92.5% and 83.8%, respectively.
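To make the two-stage idea concrete, below is a minimal PyTorch sketch of an ROI-based grasp detector in the spirit of ROI-GD: an off-the-shelf backbone and object ROIs stand in for the first stage, and a small grasp head predicts an oriented grasp rectangle from ROI-pooled features in the second stage. All module and variable names (GraspHead, roi_feats, the example ROIs) are illustrative assumptions, not the authors' released implementation.

```python
# Minimal sketch of a two-stage, ROI-based grasp detector (ROI-GD style).
# Stage 1: object ROIs from any detector; Stage 2: grasp prediction on ROI features.
import torch
import torch.nn as nn
import torchvision
from torchvision.ops import roi_align


class GraspHead(nn.Module):
    """Predicts an oriented grasp rectangle (x, y, w, h, theta) from ROI features."""

    def __init__(self, in_channels: int = 256, roi_size: int = 7, num_angles: int = 19):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Flatten(),
            nn.Linear(in_channels * roi_size * roi_size, 1024),
            nn.ReLU(inplace=True),
        )
        self.angle_cls = nn.Linear(1024, num_angles)  # discretised gripper angle
        self.rect_reg = nn.Linear(1024, 4)            # (x, y, w, h) offsets

    def forward(self, roi_feats: torch.Tensor):
        h = self.fc(roi_feats)
        return self.angle_cls(h), self.rect_reg(h)


# Stage 1 (assumption): any off-the-shelf backbone/detector can supply object ROIs.
backbone = torchvision.models.resnet18(weights=None)
feature_extractor = nn.Sequential(*list(backbone.children())[:-2])  # stride-32 feature map

image = torch.randn(1, 3, 512, 512)
feature_map = feature_extractor(image)                # [1, 512, 16, 16]

# Hypothetical object ROIs in image coordinates: (batch_idx, x1, y1, x2, y2).
rois = torch.tensor([[0, 50.0, 60.0, 200.0, 220.0],
                     [0, 240.0, 100.0, 400.0, 300.0]])

# Stage 2: detect grasps from per-object ROI features instead of the whole scene.
roi_feats = roi_align(feature_map, rois, output_size=(7, 7), spatial_scale=1 / 32)
grasp_head = GraspHead(in_channels=512)
angle_logits, rect_deltas = grasp_head(roi_feats)
print(angle_logits.shape, rect_deltas.shape)          # [2, 19], [2, 4]
```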




Read also

The reliability of grasp detection for target objects in complex scenes is a challenging task and a critical problem that needs to be solved urgently in practical applications. At present, the grasp detection location comes from searching the feature space of the whole image. However, the cluttered background information in the image impairs the accuracy of grasp detection. In this paper, a robotic grasp detection algorithm named MASK-GD is proposed, which provides a feasible solution to this problem. MASK is a segmented image that only contains the pixels of the target object. MASK-GD uses only MASK features for grasp detection rather than the features of the entire image in the scene. It has two stages: the first stage provides the MASK of the target object as the input image, and the second stage is a grasp detector based on the MASK features. Experimental results demonstrate that MASK-GD's performance is comparable with state-of-the-art grasp detection algorithms on the Cornell Dataset and the Jacquard Dataset. In the meantime, MASK-GD performs much better in complex scenes.
Robotic grasp detection is a fundamental capability for intelligent manipulation in unstructured environments. Previous work mainly employed visual and tactile fusion to achieve a stable grasp, but the whole process depends heavily on regrasping, which wastes much time on regulation and evaluation. We propose a novel way to improve robotic grasping: by using learned tactile knowledge, a robot can achieve a stable grasp from an image. First, we construct a prior tactile knowledge learning framework with a novel grasp quality metric, which is determined by measuring a grasp's resistance to external perturbations. Second, we propose a multi-phase Bayesian Grasp architecture to generate stable grasp configurations from a single RGB image based on prior tactile knowledge. Results show that this framework can classify the outcome of grasps with an average accuracy of 86% on known objects and 79% on novel objects. The prior tactile knowledge improves the success rate by 55% over traditional vision-based strategies.
Wei Wei, Yongkang Luo, Fuyu Li (2021)
Object grasping in cluttered scenes is a widely investigated field of robot manipulation. Most current works focus on estimating grasp poses from point clouds based on an efficient single-shot grasp detection network. However, due to the lack of geometry awareness of the local grasping area, this may cause severe collisions and unstable grasp configurations. In this paper, we propose a two-stage grasp pose refinement network which detects grasps globally while fine-tuning low-quality grasps and filtering noisy grasps locally. Furthermore, we extend the 6-DoF grasp with an extra dimension, the grasp width, which is critical for collision-free grasping in cluttered scenes. It takes a single-view point cloud as input and predicts dense and precise grasp configurations. To enhance the generalization ability, we build a synthetic single-object grasp dataset including 150 commodities of various shapes, and a multi-object cluttered scene dataset including 100k point clouds with robust, dense grasp poses and mask annotations. Experiments conducted on a Yumi IRB-1400 robot demonstrate that the model trained on our dataset performs well in real environments and outperforms previous methods by a large margin.
Autonomous robotic grasping plays an important role in intelligent robotics. However, how to help the robot grasp specific objects in object stacking scenes is still an open problem, because autonomous robots face two main challenges: (1) it is a comprehensive task to know what and how to grasp; (2) it is hard to deal with situations in which the target is hidden or covered by other objects. In this paper, we propose a multi-task convolutional neural network for autonomous robotic grasping, which can help the robot find the target, make a plan for grasping, and finally grasp the target step by step in object stacking scenes. We integrate vision-based robotic grasp detection and visual manipulation relationship reasoning in one single deep network and build an autonomous robotic grasping system. Experimental results demonstrate that with our model, a Baxter robot can autonomously grasp the target with success rates of 90.6%, 71.9%, and 59.4% in object cluttered scenes, familiar stacking scenes, and complex stacking scenes, respectively.
Reliable robotic grasping in unstructured environments is a crucial but challenging task. The main problem is to generate optimal grasps of novel objects from partial, noisy observations. This paper presents an end-to-end grasp detection network taking a single-view point cloud as input to tackle the problem. Our network includes three stages: a Score Network (SN), a Grasp Region Network (GRN), and a Refine Network (RN). Specifically, SN regresses point grasp confidence and selects positive points with high confidence. Then GRN conducts grasp proposal prediction on the selected positive points. RN generates more accurate grasps by refining the proposals predicted by GRN. To further improve performance, we propose a grasp anchor mechanism, in which grasp anchors with assigned gripper orientations are introduced to generate grasp proposals. Experiments demonstrate that REGNet achieves a success rate of 79.34% and a completion rate of 96% in real-world clutter, which significantly outperforms several state-of-the-art point-cloud-based methods, including GPD, PointNetGPD, and S4G. The code is available at https://github.com/zhaobinglei/REGNet_for_3D_Grasping.
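Several of the point-cloud approaches summarised above share a staged filter-and-refine pattern: score every point, propose coarse grasps at the confident points, then refine those proposals (for example REGNet's SN/GRN/RN, or the two-stage refinement network). The sketch below illustrates only that control flow; the module names, feature dimensions, and threshold are assumptions for illustration, not any paper's actual architecture.

```python
# Minimal sketch of the score -> propose -> refine pattern used by staged
# point-cloud grasp networks. All names and dimensions are illustrative.
import torch
import torch.nn as nn


class ScoreNet(nn.Module):
    """Regresses a per-point grasp confidence from per-point features."""
    def __init__(self, feat_dim: int = 64):
        super().__init__()
        self.mlp = nn.Sequential(nn.Linear(feat_dim, 64), nn.ReLU(), nn.Linear(64, 1))

    def forward(self, point_feats):                     # [N, feat_dim]
        return torch.sigmoid(self.mlp(point_feats)).squeeze(-1)  # [N]


class ProposalNet(nn.Module):
    """Predicts a coarse grasp (position offset, orientation, width) per positive point."""
    def __init__(self, feat_dim: int = 64, grasp_dim: int = 8):
        super().__init__()
        self.mlp = nn.Sequential(nn.Linear(feat_dim, 64), nn.ReLU(), nn.Linear(64, grasp_dim))

    def forward(self, point_feats):
        return self.mlp(point_feats)                    # [M, grasp_dim]


class RefineNet(nn.Module):
    """Outputs residual corrections for the coarse grasp proposals."""
    def __init__(self, grasp_dim: int = 8):
        super().__init__()
        self.mlp = nn.Sequential(nn.Linear(grasp_dim, 64), nn.ReLU(), nn.Linear(64, grasp_dim))

    def forward(self, proposals):
        return proposals + self.mlp(proposals)


# Toy forward pass: per-point features would normally come from a PointNet-style encoder.
point_feats = torch.randn(2048, 64)
scores = ScoreNet()(point_feats)
positive = point_feats[scores > 0.5]                    # keep confident points only
proposals = ProposalNet()(positive)
refined = RefineNet()(proposals)
print(scores.shape, proposals.shape, refined.shape)
```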