
Formulating Intuitive Stack-of-Tasks with Visuo-Tactile Perception for Collaborative Human-Robot Fine Manipulation

Published by: Sunny Katyara
Publication date: 2021
Research field: Informatics Engineering
Paper language: English





Enabling robots to work in close proximity with humans necessitates employing not only multi-sensory information for coordinated and autonomous interactions but also a control framework that ensures adaptive and flexible collaborative behavior. Such a control framework needs to integrate the accuracy and repeatability of robots with the cognitive ability and adaptability of humans for co-manipulation. To this end, an intuitive stack-of-tasks (iSOT) formulation is proposed that defines the robot's actions based on human ergonomics and task progress. The framework is augmented with visuo-tactile perception for flexible interaction and autonomous adaptation. The visual information from depth cameras monitors and estimates the object pose and the human arm gesture, while the tactile feedback provides exploration skills for maintaining the desired contact to avoid slippage. Experiments conducted on a robot system in partnership with a human for assembly and disassembly tasks confirm the effectiveness and usability of the proposed framework.
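To make the slip-avoidance role of the tactile feedback concrete, the following is a minimal sketch (not the paper's controller) of a grip-force regulation loop: it raises the commanded normal force whenever the measured tangential-to-normal force ratio approaches the friction cone. The sensor interface, friction coefficient and gain values are illustrative assumptions.

# Minimal sketch (not the paper's controller): regulate grip force from 3D
# tactile readings so the tangential load stays inside the friction cone.
from dataclasses import dataclass
import math


@dataclass
class TactileReading:
    fx: float  # tangential force components [N]
    fy: float
    fz: float  # normal force [N]


def adjust_grip_force(reading: TactileReading,
                      commanded_fn: float,
                      mu: float = 0.5,      # assumed friction coefficient
                      margin: float = 0.8,  # stay at 80% of the friction cone
                      gain: float = 2.0) -> float:
    """Return an updated normal-force command that avoids incipient slip."""
    ft = math.hypot(reading.fx, reading.fy)
    fn = max(reading.fz, 1e-6)
    # Ratio above mu * margin means the contact is close to slipping: squeeze harder.
    if ft / fn > mu * margin:
        commanded_fn += gain * (ft / (mu * margin) - fn)
    return commanded_fn


if __name__ == "__main__":
    cmd = 5.0  # initial grip command [N]
    cmd = adjust_grip_force(TactileReading(fx=1.8, fy=0.5, fz=3.0), cmd)
    print(f"updated grip command: {cmd:.2f} N")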


Read also

Designing robotic tasks for co-manipulation necessitates exploiting not only proprioceptive but also exteroceptive information for improved safety and autonomy. Following this intuition, this research proposes to formulate intuitive robotic tasks from a human viewpoint by incorporating visuo-tactile perception. The visual data from depth cameras monitors and determines the object dimensions and human intentions, while the tactile sensing ensures that the desired contact is maintained to avoid slippage. Experiments performed on a robot platform with human assistance under industrial settings validate the performance and applicability of the proposed intuitive task formulation.
Handling non-rigid objects using robot hands necessitates a framework that incorporates not only human-level dexterity and cognition but also multi-sensory information and system dynamics for robust and fine interactions. In this research, our previously developed kernelized synergies framework, inspired by the human behaviour of reusing the same subspace for grasping and manipulation, is augmented with visuo-tactile perception for autonomous and flexible adaptation to unknown objects. To detect objects and estimate their poses, a simplified visual pipeline using the RANSAC algorithm with Euclidean clustering and an SVM classifier is exploited. To modulate interaction efforts while grasping and manipulating non-rigid objects, tactile feedback from a T40S shokac chip sensor, providing 3D force information, is incorporated. Moreover, different kernel functions are examined in the kernelized synergies framework to evaluate its performance and potential with respect to task reproducibility, execution, generalization and synergistic re-usability. Experiments performed with a robot arm-hand system validate the capability and usability of the upgraded framework in stably grasping and dexterously manipulating non-rigid objects.
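For readers unfamiliar with the simplified perception pipeline described above (RANSAC plane removal, clustering of the remaining points, SVM classification), the sketch below shows one possible realization using Open3D and scikit-learn. The library choices, the bounding-box descriptor, and DBSCAN standing in for Euclidean clustering are assumptions of this sketch, not the authors' implementation.

# Sketch of a simplified tabletop perception pipeline (RANSAC plane removal,
# clustering, SVM classification). Libraries and descriptor are assumptions.
import numpy as np
import open3d as o3d
from sklearn.svm import SVC


def segment_objects(pcd: o3d.geometry.PointCloud):
    """Remove the dominant plane with RANSAC and cluster the remaining points."""
    _, inliers = pcd.segment_plane(distance_threshold=0.01,
                                   ransac_n=3,
                                   num_iterations=1000)
    objects = pcd.select_by_index(inliers, invert=True)
    # DBSCAN stands in here for the Euclidean clustering step in the abstract.
    labels = np.asarray(objects.cluster_dbscan(eps=0.02, min_points=30))
    n_clusters = labels.max() + 1 if labels.size else 0
    return [objects.select_by_index(np.where(labels == k)[0].tolist())
            for k in range(n_clusters)]


def describe(cluster: o3d.geometry.PointCloud) -> np.ndarray:
    """Toy descriptor: axis-aligned bounding-box extents plus point count."""
    extent = cluster.get_axis_aligned_bounding_box().get_extent()
    return np.append(extent, len(cluster.points))


# Usage sketch, assuming labelled training clusters are available:
# clf = SVC(kernel="rbf").fit(train_features, train_labels)
# for cluster in segment_objects(o3d.io.read_point_cloud("scene.pcd")):
#     print(clf.predict(describe(cluster).reshape(1, -1)))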
Humans, in contrast to robots, are excellent at performing fine manipulation tasks owing to their remarkable dexterity and sensorimotor organization. Enabling robots to acquire such capabilities necessitates a framework that not only replicates human behaviour but also integrates multi-sensory information for autonomous object interaction. To address these limitations, this research proposes to augment the previously developed kernelized synergies framework with visual perception to automatically adapt to unknown objects. The kernelized synergies, inspired by humans, retain the same reduced subspace for object grasping and manipulation. To detect objects in the scene, a simplified perception pipeline is used that leverages the RANSAC algorithm with Euclidean clustering and an SVM for object segmentation and recognition, respectively. Further, a comparative analysis of kernelized synergies with other state-of-the-art approaches is made to confirm their flexibility and effectiveness on robotic manipulation tasks. The experiments conducted on the robot hand confirm the robustness of the modified kernelized synergies framework against uncertainties related to the perception of the environment.
The presence and coexistence of human operators and collaborative robots in shop-floor environments raises the need for assigning tasks to either operators or robots, or both. Depending on task characteristics, operator capabilities and the involved robot functionalities, it is of the utmost importance to design strategies allowing for the concurrent and/or sequential allocation of tasks related to object manipulation and assembly. In this paper, we extend the FlexHRC framework presented in [darvish2018flexible] to allow a human operator to interact with multiple, heterogeneous robots at the same time in order to jointly carry out a given task. The extended FlexHRC framework leverages a concurrent and sequential task representation framework to allocate tasks to either operators or robots as part of a dynamic collaboration process. In particular, we focus on a use case related to the inspection of product defects, which involves a human operator, a dual-arm Baxter manipulator from Rethink Robotics and a Kuka youBot mobile manipulator.
Estimation of tactile properties from vision, such as slipperiness or roughness, is important for interacting effectively with the environment. These tactile properties help us decide which actions to choose and how to perform them: for example, we can drive slower if we see that we have bad traction, or grasp tighter if an item looks slippery. We believe that this ability also helps robots to enhance their understanding of the environment, and thus enables them to tailor their actions to the situation at hand. We therefore propose a model to estimate the degree of tactile properties from visual perception alone (e.g., the level of slipperiness or roughness). Our method extends an encoder-decoder network, in which the latent variables are visual and tactile features. In contrast to previous works, our method does not require manual labeling, but only RGB images and the corresponding tactile sensor data. All our data is collected with a webcam and a uSkin tactile sensor mounted on the end-effector of a Sawyer robot, which strokes the surfaces of 25 different materials. We show that our model generalizes to materials not included in the training data by evaluating the feature space, indicating that it has learned to associate important tactile properties with images.
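The following is a minimal sketch of the kind of visuo-tactile encoder-decoder described above, assuming a small convolutional encoder, a fully connected decoder and placeholder tensor sizes. It is not the authors' exact architecture; it only illustrates training on paired image/tactile data without manual labels.

# Minimal sketch (assumed architecture): an encoder maps an RGB patch to a
# latent vector and a decoder predicts the corresponding tactile reading, so
# training needs only paired image/tactile data and no manual labels.
import torch
import torch.nn as nn


class VisuoTactileAE(nn.Module):
    def __init__(self, latent_dim: int = 16, tactile_dim: int = 48):
        super().__init__()
        self.encoder = nn.Sequential(           # 3x64x64 RGB patch -> latent
            nn.Conv2d(3, 16, 4, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.Flatten(),
            nn.Linear(32 * 16 * 16, latent_dim),
        )
        self.decoder = nn.Sequential(           # latent -> tactile reading
            nn.Linear(latent_dim, 64), nn.ReLU(),
            nn.Linear(64, tactile_dim),
        )

    def forward(self, image: torch.Tensor) -> torch.Tensor:
        return self.decoder(self.encoder(image))


# Training-step sketch: minimise reconstruction error on paired data.
model = VisuoTactileAE()
optim = torch.optim.Adam(model.parameters(), lr=1e-3)
images = torch.randn(8, 3, 64, 64)   # placeholder batch of camera patches
tactile = torch.randn(8, 48)         # placeholder tactile force readings
loss = nn.functional.mse_loss(model(images), tactile)
optim.zero_grad()
loss.backward()
optim.step()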