
Vision Based Adaptation to Kernelized Synergies for Human Inspired Robotic Manipulation

Published by Sunny Katyara
Publication date: 2020
Research field: Informatics Engineering
Language: English





Humans, in contrast to robots, are excellent at performing fine manipulation tasks owing to their remarkable dexterity and sensorimotor organization. Enabling robots to acquire such capabilities necessitates a framework that not only replicates human behaviour but also integrates multi-sensory information for autonomous object interaction. To address these limitations, this research proposes to augment the previously developed kernelized synergies framework with visual perception so that it automatically adapts to unknown objects. The kernelized synergies, inspired by humans, retain the same reduced subspace for object grasping and manipulation. To detect objects in the scene, a simplified perception pipeline is used that leverages the RANSAC algorithm with Euclidean clustering and an SVM for object segmentation and recognition, respectively. Further, a comparative analysis of kernelized synergies against other state-of-the-art approaches is carried out to confirm their flexibility and effectiveness on robotic manipulation tasks. The experiments conducted on the robot hand confirm the robustness of the modified kernelized synergies framework against uncertainties related to the perception of the environment.
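As a concrete illustration of the kind of perception pipeline described in the abstract, the sketch below chains RANSAC plane removal, Euclidean clustering, and an SVM classifier over simple per-cluster features. The library choices (NumPy, scikit-learn), the feature descriptor, and all thresholds are assumptions made for the example, not the paper's own implementation.

```python
# Hypothetical perception pipeline: RANSAC plane removal, Euclidean clustering,
# and SVM classification of per-cluster features (all parameters illustrative).
import numpy as np
from sklearn.cluster import DBSCAN
from sklearn.svm import SVC


def ransac_plane(points, n_iters=200, dist_thresh=0.01, seed=0):
    """Fit the dominant plane with RANSAC; return a boolean inlier mask."""
    rng = np.random.default_rng(seed)
    best = np.zeros(len(points), dtype=bool)
    for _ in range(n_iters):
        p0, p1, p2 = points[rng.choice(len(points), 3, replace=False)]
        normal = np.cross(p1 - p0, p2 - p0)
        if np.linalg.norm(normal) < 1e-9:
            continue
        normal /= np.linalg.norm(normal)
        inliers = np.abs((points - p0) @ normal) < dist_thresh
        if inliers.sum() > best.sum():
            best = inliers
    return best


def cluster_objects(points, tolerance=0.03, min_points=30):
    """Euclidean clustering (DBSCAN with min_samples=1 behaves like a
    fixed-radius Euclidean cluster extraction)."""
    labels = DBSCAN(eps=tolerance, min_samples=1).fit_predict(points)
    clusters = [points[labels == k] for k in np.unique(labels)]
    return [c for c in clusters if len(c) >= min_points]


def features(cluster):
    """Very simple shape descriptor: bounding-box extents plus point count."""
    return np.r_[cluster.max(axis=0) - cluster.min(axis=0), len(cluster)]


# SVM trained offline on (made-up) labelled cluster descriptors.
clf = SVC(kernel="rbf").fit([[0.05, 0.05, 0.05, 80], [0.12, 0.06, 0.04, 150]],
                            ["ball", "box"])

# Synthetic table-top scene: a flat plane plus two small object blobs.
rng = np.random.default_rng(1)
table = np.c_[rng.uniform(0, 1, (500, 2)), 0.002 * rng.standard_normal(500)]
blobs = np.vstack([rng.normal([0.3, 0.3, 0.05], 0.01, (80, 3)),
                   rng.normal([0.7, 0.6, 0.08], 0.01, (80, 3))])
scene = np.vstack([table, blobs])

plane = ransac_plane(scene)
for cluster in cluster_objects(scene[~plane]):
    print(len(cluster), clf.predict([features(cluster)])[0])
```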


Read also

Manipulation, in contrast to grasping, is a trajectorial task that requires dexterous hands. Improving the dexterity of robot hands increases controller complexity and thus motivates the use of postural synergies. Inspired by postural synergies, this research proposes a new framework called kernelized synergies that focuses on the re-usability of the same subspace for precision grasping and dexterous manipulation. In this work, the computed subspace of postural synergies, parameterized by probabilistic movement primitives, is treated with a kernel to preserve its grasping and manipulation characteristics and to allow its reuse for new objects. The grasp stability of the proposed framework is assessed with a force-closure quality index. For performance evaluation, the proposed framework is tested on two different simulated robot hand models using the SynGrasp toolbox, and experimentally, four complex grasping and manipulation tasks are performed and reported. The results confirm the hand-agnostic nature of the proposed framework and its generalization to distinct objects irrespective of their shape and size.
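The following minimal sketch illustrates the underlying idea of a reusable synergy subspace: plain PCA over recorded hand postures stands in for the postural-synergy computation, and RBF-kernel ridge regression over the phase variable stands in for the probabilistic-movement-primitive/kernel treatment mentioned above. Dimensions and data are synthetic assumptions, not the authors' implementation.

```python
# Illustrative sketch: PCA synergy subspace + kernel-regressed synergy trajectory.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.kernel_ridge import KernelRidge

n_joints, n_postures, n_synergies = 20, 300, 2

# Synthetic "recorded grasps": joint angles generated from two latent synergies.
rng = np.random.default_rng(1)
mixing = rng.standard_normal((n_synergies, n_joints))
latent = rng.standard_normal((n_postures, n_synergies))
postures = latent @ mixing + 0.01 * rng.standard_normal((n_postures, n_joints))

# Reduced synergy subspace (first two principal components).
pca = PCA(n_components=n_synergies).fit(postures)

# A demonstrated grasp/manipulation trajectory expressed in synergy space.
t_demo = np.linspace(0.0, 1.0, 50)[:, None]
synergy_traj = np.c_[np.sin(np.pi * t_demo), 0.5 * (1 - np.cos(np.pi * t_demo))]

# Kernel treatment: RBF-kernel regression from phase to synergy activations,
# so the same subspace trajectory can be queried (and locally adapted) later.
kernelized = KernelRidge(kernel="rbf", gamma=20.0, alpha=1e-3).fit(t_demo, synergy_traj)

# Reproduce the primitive at a new phase resolution and map back to joint space.
t_query = np.linspace(0.0, 1.0, 200)[:, None]
joint_traj = pca.inverse_transform(kernelized.predict(t_query))
print(joint_traj.shape)  # (200, 20): full-hand joint trajectory from 2 synergies
```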
Handling non-rigid objects with robot hands necessitates a framework that incorporates not only human-level dexterity and cognition but also multi-sensory information and system dynamics for robust and fine interactions. In this research, our previously developed kernelized synergies framework, inspired by the human behaviour of reusing the same subspace for grasping and manipulation, is augmented with visuo-tactile perception for autonomous and flexible adaptation to unknown objects. To detect objects and estimate their poses, a simplified visual pipeline using the RANSAC algorithm with Euclidean clustering and an SVM classifier is exploited. To modulate interaction efforts while grasping and manipulating non-rigid objects, tactile feedback from a T40S shokac chip sensor, which provides 3D force information, is incorporated. Moreover, different kernel functions are examined within the kernelized synergies framework to evaluate its performance and potential in terms of task reproducibility, execution, generalization and synergistic re-usability. Experiments performed with a robot arm-hand system validate the capability and usability of the upgraded framework in stably grasping and dexterously manipulating non-rigid objects.
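To make the tactile part concrete, here is a small, hypothetical sketch of how 3D force feedback could modulate grasp effort on a non-rigid object: an integral-style regulator scales a single synergy activation until the measured normal force reaches a target. The toy contact model, gains, and one-synergy coupling are illustrative assumptions, not the sensor-specific scheme of the paper.

```python
# Illustrative tactile force regulation of a grasp synergy (all values assumed).
import numpy as np


def simulate_contact_force(synergy_activation, stiffness=4.0):
    """Toy contact model: normal force grows with the grasp synergy activation."""
    normal = stiffness * max(synergy_activation - 0.2, 0.0)    # N
    tangential = 0.05 * normal * np.random.standard_normal(2)  # small shear noise
    return np.array([tangential[0], tangential[1], normal])    # 3D force reading


def regulate_grasp(f_desired=1.5, gain=0.05, steps=100):
    """Integrate the force error into the synergy activation (gentle squeeze)."""
    activation = 0.0
    force = np.zeros(3)
    for _ in range(steps):
        force = simulate_contact_force(activation)
        activation += gain * (f_desired - force[2])  # act on the normal component
        activation = float(np.clip(activation, 0.0, 1.0))
    return activation, force


activation, force = regulate_grasp()
print(f"final synergy activation {activation:.2f}, normal force {force[2]:.2f} N")
```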
Despite the success of reinforcement learning methods, they have yet to have their breakthrough moment when applied to a broad range of robotic manipulation tasks. This is partly because reinforcement learning algorithms are notoriously difficult and time-consuming to train, which is exacerbated when training from images rather than full-state inputs. As humans perform manipulation tasks, our eyes closely monitor every step of the process, with our gaze focusing sequentially on the objects being manipulated. With this in mind, we present our Attention-driven Robotic Manipulation (ARM) algorithm, a general manipulation algorithm that can be applied to a range of sparse-rewarded tasks given only a small number of demonstrations. ARM splits the complex task of manipulation into a three-stage pipeline: (1) a Q-attention agent that extracts interesting pixel locations from RGB and point cloud inputs, (2) a next-best pose agent that accepts crops from the Q-attention agent and outputs poses, and (3) a control agent that takes the goal pose and outputs joint actions. We show that current learning algorithms fail on a range of RLBench tasks, whilst ARM is successful.
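The schematic sketch below mirrors the three-stage data flow described for ARM (Q-attention, next-best pose, control) with stand-in stubs over NumPy arrays; the learned components and the training from demonstrations are omitted, so this only illustrates how observations move through the pipeline.

```python
# Stand-in stubs for the three ARM stages (data flow only, nothing is learned).
import numpy as np


def q_attention(rgb, point_cloud):
    """Stage 1: pick the most 'interesting' pixel (here: simply the brightest)."""
    scores = rgb.mean(axis=-1)                        # stand-in for a learned Q-map
    v, u = np.unravel_index(scores.argmax(), scores.shape)
    return (v, u)


def next_best_pose(point_cloud, pixel, crop=16):
    """Stage 2: crop around the attended pixel and output a goal pose (xyz + quat)."""
    v, u = pixel
    patch = point_cloud[max(v - crop, 0): v + crop, max(u - crop, 0): u + crop]
    xyz = patch.reshape(-1, 3).mean(axis=0)           # stand-in for a learned pose head
    return np.concatenate([xyz, [0.0, 0.0, 0.0, 1.0]])


def control_agent(goal_pose, current_joints, gain=0.1):
    """Stage 3: map the goal pose to a joint action (stand-in for a learned policy)."""
    target_joints = np.zeros_like(current_joints)     # e.g. the output of an IK solver
    return gain * (target_joints - current_joints)


# One step through the pipeline on synthetic observations.
rgb = np.random.rand(128, 128, 3)
cloud = np.random.rand(128, 128, 3)
pixel = q_attention(rgb, cloud)
pose = next_best_pose(cloud, pixel)
action = control_agent(pose, current_joints=np.random.rand(7))
print(pixel, pose.shape, action.shape)
```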
Collecting and automatically obtaining reward signals from real robotic visual data for the purpose of training reinforcement learning algorithms can be quite challenging and time-consuming. Methods for utilizing unlabeled data have huge potential to further accelerate robotic learning. We consider here the problem of performing manipulation tasks from pixels. In such tasks, choosing an appropriate state representation is crucial for planning and control. This is even more relevant with real images, where noise, occlusions and resolution affect the accuracy and reliability of state estimation. In this work, we learn a latent state representation implicitly with deep reinforcement learning in simulation and then adapt it to the real domain using unlabeled real robot data. We propose to do so by optimizing sequence-based self-supervised objectives. These exploit the temporal nature of robot experience and can be shared between the simulated and real domains, without assuming any alignment of underlying states in simulated and unlabeled real images. We propose the Contrastive Forward Dynamics loss, which combines dynamics model learning with time-contrastive techniques. The learned state representation that results from our methods can be used to robustly solve a manipulation task in simulation and to successfully transfer the learned skill to a real system. We demonstrate the effectiveness of our approach by training a vision-based reinforcement learning agent for cube stacking. Agents trained with our method, using only 5 hours of unlabeled real robot data for adaptation, show a clear improvement over domain randomization and standard visual domain adaptation techniques for sim-to-real transfer.
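For intuition, here is a minimal sketch of a contrastive forward-dynamics objective in the spirit of the loss described above: a latent dynamics model predicts the next embedding from the current embedding and action, and an InfoNCE-style term asks that prediction to match the true next embedding against in-batch negatives. Flat feature vectors stand in for image encodings, and the network sizes and exact loss form are assumptions made for illustration.

```python
# Sketch of a contrastive forward-dynamics loss (not the authors' exact objective).
import torch
import torch.nn as nn
import torch.nn.functional as F


class ContrastiveForwardDynamics(nn.Module):
    def __init__(self, obs_dim, act_dim, latent_dim=32):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(obs_dim, 64), nn.ReLU(),
                                     nn.Linear(64, latent_dim))
        self.dynamics = nn.Sequential(nn.Linear(latent_dim + act_dim, 64), nn.ReLU(),
                                      nn.Linear(64, latent_dim))

    def loss(self, obs, act, next_obs, temperature=0.1):
        z = self.encoder(obs)                                 # (B, D)
        z_next = self.encoder(next_obs)                       # (B, D)
        z_pred = self.dynamics(torch.cat([z, act], dim=-1))   # (B, D)
        # Similarity of each prediction to every next-state embedding in the batch;
        # the matching (diagonal) pair is the positive, the rest act as negatives.
        logits = z_pred @ z_next.t() / temperature            # (B, B)
        labels = torch.arange(obs.shape[0])
        return F.cross_entropy(logits, labels)


# Example: one loss evaluation on random (obs, action, next_obs) transitions.
model = ContrastiveForwardDynamics(obs_dim=16, act_dim=4)
obs, act, next_obs = torch.randn(8, 16), torch.randn(8, 4), torch.randn(8, 16)
print(model.loss(obs, act, next_obs).item())
```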
The flapping-wing aerial vehicle (FWAV) is a new type of flying robot that mimics the flight mode of birds and insects. However, FWAVs have limited load capacity and short endurance time, so most existing ground-target localization systems are not suitable for them. In this paper, a vision-based target localization algorithm is proposed for FWAVs based on a generic camera model. Since the sensors suffer from measurement error and the camera experiences jitter and motion blur during flight, Gaussian noise is introduced in the simulation experiments, and a first-order low-pass filter is used to stabilize the localization values. Moreover, to verify the feasibility and accuracy of the target localization algorithm, we design a set of simulation experiments in which various noises are added. The simulation results show that the target localization algorithm performs well.
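Below is a small sketch of the first-order low-pass filtering step mentioned above, applied to noisy target-localization estimates; the smoothing factor and the synthetic measurement model are illustrative assumptions.

```python
# First-order low-pass filter smoothing noisy target-localization estimates.
import numpy as np


def low_pass(measurements, alpha=0.2):
    """First-order IIR filter: y[k] = alpha * x[k] + (1 - alpha) * y[k-1]."""
    filtered = np.empty_like(measurements)
    filtered[0] = measurements[0]
    for k in range(1, len(measurements)):
        filtered[k] = alpha * measurements[k] + (1 - alpha) * filtered[k - 1]
    return filtered


# Noisy localization of a stationary ground target at (10 m, -4 m).
rng = np.random.default_rng(0)
true_position = np.array([10.0, -4.0])
raw = true_position + rng.normal(0.0, 0.5, size=(200, 2))   # Gaussian sensor noise
smoothed = low_pass(raw)
print("raw std:", raw.std(axis=0).round(3),
      "filtered std:", smoothed[50:].std(axis=0).round(3))
```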