
TransSC: Transformer-based Shape Completion for Grasp Evaluation

Posted by Hongzhuo Liang
Publication date: 2021
Research field: Informatics Engineering
Paper language: English





Robotic grasping methods based on sparse partial point clouds have achieved strong grasping performance on various objects, but they often generate incorrect grasp candidates because geometric information about the object is missing. In this work, we propose a novel and robust shape completion model (TransSC). Given a partial point cloud as input, the model uses a transformer-based encoder to explore point-wise features and a manifold-based decoder to recover fine object details. Quantitative experiments verify the effectiveness of the proposed shape completion network and show that it outperforms existing methods. In addition, TransSC is integrated into a grasp evaluation network to generate a set of grasp candidates. Simulation experiments show that TransSC improves grasp generation compared to existing shape completion baselines. Furthermore, our robot experiments show that with TransSC the robot is more successful at grasping objects randomly placed on a support surface.
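The abstract describes the architecture only at a high level. Below is a minimal, hypothetical PyTorch sketch of that layout — a transformer encoder over per-point embeddings followed by a decoder that maps a pooled global feature to a dense completed cloud. All layer sizes and names are illustrative assumptions, and the simple MLP decoder here merely stands in for the paper's manifold-based decoder.

```python
import torch
import torch.nn as nn

class PointCloudCompletion(nn.Module):
    """Hypothetical sketch of a transformer-based shape completion net.

    Layer sizes and the MLP decoder are illustrative assumptions; TransSC's
    actual manifold-based decoder is more sophisticated than this.
    """
    def __init__(self, n_out=2048, d_model=256):
        super().__init__()
        # Embed each 3D point into a d_model-dimensional token.
        self.embed = nn.Sequential(
            nn.Linear(3, d_model), nn.ReLU(), nn.Linear(d_model, d_model)
        )
        # Transformer encoder explores point-wise relations via self-attention.
        layer = nn.TransformerEncoderLayer(d_model=d_model, nhead=4,
                                           batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=4)
        # Stand-in decoder: global feature -> n_out completed points.
        self.decoder = nn.Sequential(
            nn.Linear(d_model, 1024), nn.ReLU(), nn.Linear(1024, n_out * 3)
        )
        self.n_out = n_out

    def forward(self, partial):                # partial: (B, N, 3)
        tokens = self.embed(partial)           # (B, N, d_model)
        feats = self.encoder(tokens)           # point-wise features
        global_feat = feats.max(dim=1).values  # permutation-invariant pooling
        out = self.decoder(global_feat)
        return out.view(-1, self.n_out, 3)     # completed cloud (B, n_out, 3)

# Usage: complete a batch of two partial clouds with 1024 points each.
net = PointCloudCompletion()
completed = net(torch.rand(2, 1024, 3))        # -> (2, 2048, 3)
```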




Read also

This work provides an architecture to enable robotic grasp planning via shape completion. Shape completion is accomplished through the use of a 3D convolutional neural network (CNN). The network is trained on our own new open-source dataset of over 40,000 3D exemplars captured from varying viewpoints. At runtime, a 2.5D point cloud captured from a single point of view is fed into the CNN, which fills in the occluded regions of the scene, allowing grasps to be planned and executed on the completed object. Runtime shape completion is very rapid because most of the computational cost is borne during offline training. We explore how the quality of completions varies based on several factors, including whether the object being completed existed in the training data and how many object models were used to train the network. We also examine the network's ability to generalize to novel objects, allowing the system to complete previously unseen objects at runtime. Finally, experiments are conducted both in simulation and on actual robotic hardware to explore the relationship between completion quality and the utility of the completed mesh model for grasping.
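As a rough illustration of the voxel-based runtime flow this abstract describes, here is a minimal sketch: a 2.5D point cloud is voxelized into an occupancy grid and passed through a small 3D CNN that predicts per-voxel occupancy for the completed shape. Grid resolution and filter counts are assumptions, not the paper's values.

```python
import torch
import torch.nn as nn

def voxelize(points, res=40):
    """Map a 2.5D point cloud (N, 3) in [0, 1]^3 to a binary occupancy grid."""
    grid = torch.zeros(1, 1, res, res, res)
    idx = (points.clamp(0, 1 - 1e-6) * res).long()
    grid[0, 0, idx[:, 0], idx[:, 1], idx[:, 2]] = 1.0
    return grid

# Illustrative 3D CNN: partial occupancy grid in, per-voxel occupancy
# logits for the completed shape out (architecture is a stand-in).
completion_cnn = nn.Sequential(
    nn.Conv3d(1, 16, kernel_size=5, padding=2), nn.ReLU(),
    nn.Conv3d(16, 16, kernel_size=3, padding=1), nn.ReLU(),
    nn.Conv3d(16, 1, kernel_size=3, padding=1),   # logits per voxel
)

partial = voxelize(torch.rand(500, 3))            # simulated partial view
completed = torch.sigmoid(completion_cnn(partial)) > 0.5  # filled-in volume
```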
We present pneumatic shape-shifting fingers that enable a simple parallel-jaw gripper to perform different manipulation modalities. By changing the finger geometry, the gripper effectively changes the contact type between the fingers and an object to facilitate distinct manipulation primitives. In this paper, we demonstrate the development and application of shape-shifting fingers to reorient and grasp cylindrical objects. The shape of the fingers changes based on the air pressure inside them and attains two distinct geometric forms at high and low pressure values. In our implementation, the finger shape switches between a wedge-shaped geometry and a V-shaped geometry at high and low pressure, respectively. Using the wedge-shaped geometry, the fingers provide a point contact on a cylindrical object to pivot it to a vertical pose under the effect of gravity. By changing to the V-shaped geometry, the fingers localize the object in the vertical pose and securely hold it. Experimental results show that the smooth transition between the two contact types allows a robot with a simple gripper to reorient a cylindrical object lying horizontally on the ground and to grasp it in a vertical pose.
Performing a grasp is a pivotal capability for a robotic gripper. We propose a new approach for evaluating grasping stability by constructing a model of grasping stiffness based on the theory of contact mechanics. First, mathematical models are built to describe soft contact and the general grasp stiffness between a finger and an object. Next, the grasping stiffness matrix is constructed to reflect the normal, tangential, and torsional stiffness coefficients. Finally, we design two grasping cases to verify the proposed measurement criterion of grasping stability by comparing different grasping configurations. Specifically, a standard grasping index is compared with the minimum-eigenvalue index of the grasping stiffness matrix we constructed. The comparison reveals a similar tendency between the two measures of grasping stability and thus validates the proposed approach.
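The minimum-eigenvalue criterion named here is easy to compute once a stiffness matrix is in hand. The short sketch below shows the computation on a made-up symmetric 3x3 stiffness matrix; the paper derives its matrix from contact mechanics, and the numbers here are purely illustrative.

```python
import numpy as np

# Hypothetical grasping stiffness matrix combining normal, tangential,
# and torsional stiffness coefficients (values are made up).
K = np.array([[120.0, 10.0,  0.0],
              [ 10.0, 80.0,  5.0],
              [  0.0,  5.0, 30.0]])

# The smallest eigenvalue bounds the grasp's resistance along the
# worst-case disturbance direction, so a larger minimum eigenvalue
# indicates a stiffer, more stable grasp.
stability_index = np.linalg.eigvalsh(K).min()
print(f"minimum-eigenvalue stability index: {stability_index:.2f}")
```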
Reliable robotic grasping in unstructured environments is a crucial but challenging task. The main problem is to generate optimal grasps of novel objects from partial, noisy observations. This paper presents an end-to-end grasp detection network taking a single-view point cloud as input to tackle the problem. Our network includes three stages: Score Network (SN), Grasp Region Network (GRN), and Refine Network (RN). Specifically, SN regresses point grasp confidence and selects positive points with high confidence. Then GRN conducts grasp proposal prediction on the selected positive points. RN generates more accurate grasps by refining the proposals predicted by GRN. To further improve performance, we propose a grasp anchor mechanism, in which grasp anchors with assigned gripper orientations are introduced to generate grasp proposals. Experiments demonstrate that REGNet achieves a success rate of 79.34% and a completion rate of 96% in real-world clutter, which significantly outperforms several state-of-the-art point-cloud-based methods, including GPD, PointNetGPD, and S4G. The code is available at https://github.com/zhaobinglei/REGNet_for_3D_Grasping.
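The SN -> GRN -> RN flow is a staged filter-and-refine pipeline. The sketch below captures only that data flow; `sn`, `grn`, and `rn` stand in for trained networks, and their interfaces here are assumptions, not REGNet's actual API.

```python
import torch

def regnet_style_pipeline(cloud, sn, grn, rn, conf_thresh=0.5):
    """Illustrative three-stage flow (Score -> Grasp Region -> Refine).

    cloud: (N, 3) single-view point cloud; sn/grn/rn: stand-in networks.
    """
    scores = sn(cloud)                       # per-point grasp confidence (N,)
    positives = cloud[scores > conf_thresh]  # keep high-confidence points
    proposals = grn(positives)               # grasp proposals around them
    grasps = rn(proposals)                   # refined final grasps
    return grasps
```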
Robotic grasp detection is a fundamental capability for intelligent manipulation in unstructured environments. Previous work mainly employed visual and tactile fusion to achieve a stable grasp, but the whole process depends heavily on regrasping, which costs considerable time for regulation and evaluation. We propose a novel way to improve robotic grasping: using learned tactile knowledge, a robot can achieve a stable grasp from an image alone. First, we construct a prior tactile knowledge learning framework with a novel grasp quality metric, determined by measuring a grasp's resistance to external perturbations. Second, we propose a multi-phase Bayesian Grasp architecture to generate stable grasp configurations from a single RGB image based on the prior tactile knowledge. Results show that this framework can classify the outcome of grasps with an average accuracy of 86% on known objects and 79% on novel objects. The prior tactile knowledge improves the success rate by 55% over traditional vision-based strategies.
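At its core, the fusion this abstract describes is Bayesian: a tactile-derived prior over grasp stability is updated with image evidence. The toy computation below shows that update with made-up probabilities; it is a conceptual sketch, not the paper's multi-phase architecture.

```python
# Minimal Bayesian sketch: fuse a tactile-derived prior with image
# evidence to score one grasp (all probabilities are illustrative).
p_stable = 0.55            # prior P(stable) from learned tactile knowledge
p_img_given_stable = 0.80  # likelihood of the observed image features
p_img_given_unstable = 0.30

posterior = (p_img_given_stable * p_stable) / (
    p_img_given_stable * p_stable
    + p_img_given_unstable * (1 - p_stable)
)
print(f"posterior P(stable grasp | image): {posterior:.2f}")  # ~0.77
```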