
A Mechanical Screwing Tool for 2-Finger Parallel Grippers -- Design, Optimization, and Manipulation Policies

 Added by Weiwei Wan
Publication date: 2020
Language: English





This paper develops a mechanical tool, together with manipulation policies for using it, for 2-finger parallel robotic grippers. It primarily focuses on a mechanism that converts the gripping motion of 2-finger parallel grippers into continuous rotation to perform tasks such as fastening screws. The essential structure of the tool comprises a Scissor-Like Element (SLE) mechanism and a double-ratchet mechanism, which together convert repeated linear motion into continuous rotating motion. Elastic elements attached at the joints of the SLE mechanism provide the resisting force needed to hold the tool and produce torque output when the gripper releases it. The tool is entirely mechanical, allowing robots to use it without any peripherals or power supply. The paper presents the details of the tool design, optimizes its dimensions and effective stroke lengths, and studies the contacts and forces required for stable grasping and screwing. Beyond the design, the paper develops manipulation policies for the tool, including visual recognition, picking up and manipulating the tool, and exchanging tooltips. The developed tool produces clockwise rotation at the front end and counter-clockwise rotation at the back end, and various tooltips can be installed at both ends. Robots may employ the developed manipulation policies to exchange tooltips and rotating directions according to the needs of specific fastening or loosening tasks. Robots can also reorient the tool using pick-and-place or handover, and move it to work poses using the policies. The designed tool, together with the developed manipulation policies, is analyzed and verified in several real-world applications. The tool is small, cordless, convenient, and exhibits good robustness and adaptability.
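The abstract describes the stroke-to-rotation conversion only qualitatively. The sketch below is a minimal toy model, not the paper's kinematics: it assumes placeholder values for the SLE half-link length, the effective lever radius at the ratchet wheel, and the gripper stroke, and it treats the scissor cell as two equal crossed links pinned at their midpoints. It only illustrates how repeated squeeze strokes could accumulate into one-way output rotation when the return stroke freewheels through the ratchet.

```python
import math

# Illustrative toy model only (assumed dimensions, not the paper's optimized ones).
LINK_LENGTH = 0.030    # assumed SLE half-link length [m]
LEVER_RADIUS = 0.008   # assumed effective radius at which the SLE drives the ratchet [m]


def sle_extension(grip_width: float) -> float:
    """Axial extension of one scissor cell for a given gripper opening.

    Treats the cell as two equal crossed links pinned at their midpoints:
    closing the gripper (smaller width) lengthens the cell along the tool axis.
    """
    half_width = grip_width / 2.0
    return 2.0 * math.sqrt(max(LINK_LENGTH ** 2 - half_width ** 2, 0.0))


def rotation_per_stroke(open_width: float, closed_width: float) -> float:
    """Net output rotation (radians) produced by one squeeze stroke.

    The change in extension is assumed to drive the ratchet wheel at an
    effective lever radius; the elastic return stroke freewheels through
    the ratchet, so only the squeeze adds to the output rotation.
    """
    delta = sle_extension(closed_width) - sle_extension(open_width)
    return delta / LEVER_RADIUS


if __name__ == "__main__":
    # Ten squeeze/release cycles between an assumed 50 mm opening and 20 mm closure.
    per_stroke = rotation_per_stroke(0.050, 0.020)
    total = 10 * per_stroke
    print(f"rotation per stroke: {math.degrees(per_stroke):.1f} deg, "
          f"after 10 strokes: {math.degrees(total):.1f} deg")
```

Because the ratchet blocks back-rotation, the per-stroke increments add up across cycles, which is why a purely mechanical tool can sustain continuous screwing from a repeated open/close motion.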


Related research

Manipulation in cluttered environments like homes requires stable grasps, precise placement and robustness against external contact. We present the Soft-Bubble gripper system with a highly compliant gripping surface and dense-geometry visuotactile sensing, capable of multiple kinds of tactile perception. We first present various mechanical design advances and a fabrication technique to deposit custom patterns onto the internal surface of the sensor that enable tracking of shear-induced displacement of the manipuland. The depth maps output by the internal imaging sensor are used in an in-hand proximity pose estimation framework -- the method better captures distances to corners or edges on the manipuland geometry. We also extend our previous work on tactile classification and integrate the system within a robust manipulation pipeline for cluttered home environments. The capabilities of the proposed system are demonstrated through robust execution of multiple real-world manipulation tasks. A video of the system in action can be found here: [https://youtu.be/G_wBsbQyBfc].
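As a small illustration of the kind of preprocessing such an in-hand pose-estimation step consumes, the sketch below back-projects a depth map into a point cloud in the sensor frame. It is not the Soft-Bubble authors' code; the pinhole intrinsics (FX, FY, CX, CY) and the image size are placeholder assumptions.

```python
import numpy as np

FX, FY = 380.0, 380.0     # assumed focal lengths [px]
CX, CY = 320.0, 240.0     # assumed principal point [px]


def depth_to_points(depth: np.ndarray) -> np.ndarray:
    """Convert an (H, W) depth image in metres to an (M, 3) point cloud."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))  # pixel coordinates
    valid = depth > 0                               # drop missing-depth pixels
    x = (u - CX) * depth / FX
    y = (v - CY) * depth / FY
    return np.stack([x[valid], y[valid], depth[valid]], axis=1)


if __name__ == "__main__":
    fake_depth = np.full((480, 640), 0.12)   # a flat membrane 12 cm away
    cloud = depth_to_points(fake_depth)
    print(cloud.shape)                        # (307200, 3)
```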
Dexterous manipulation is a challenging and important problem in robotics. While data-driven methods are a promising approach, current benchmarks require simulation or extensive engineering support due to the sample inefficiency of popular methods. We present benchmarks for the TriFinger system, an open-source robotic platform for dexterous manipulation and the focus of the 2020 Real Robot Challenge. The benchmarked methods, which were successful in the challenge, can be generally described as structured policies, as they combine elements of classical robotics and modern policy optimization. This inclusion of inductive biases facilitates sample efficiency, interpretability, reliability and high performance. The key aspects of this benchmarking are the validation of the baselines across both simulation and the real system, a thorough ablation study over the core features of each solution, and a retrospective analysis of the challenge as a manipulation benchmark. The code and demo videos for this work can be found on our website (https://sites.google.com/view/benchmark-rrc).
This paper explores the problem of autonomous, in-hand regrasping -- the problem of moving from an initial grasp on an object to a desired grasp using the dexterity of a robot's fingers. We propose a planner for this problem which alternates between finger gaiting and in-grasp manipulation. Finger gaiting enables the robot to move a single finger to a new contact location on the object, while the remaining fingers stably hold the object. In-grasp manipulation moves the object to a new pose relative to the robot's palm, while maintaining the contact locations between the hand and object. Given the object's geometry (as a mesh), the hand's kinematic structure, and the initial and desired grasps, we plan a sequence of finger gaits and object reposing actions to reach the desired grasp without dropping the object. We propose an optimization-based approach and report in-hand regrasping plans for 5 objects over 5 in-hand regrasp goals each. The plans generated by our planner are collision free and guarantee kinematic feasibility.
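The alternation the abstract describes can be summarized as a simple planning loop. The sketch below is schematic only, not the authors' planner: the Grasp container, the goal test, and the two sub-planners (in_grasp_step, gait_step) are placeholders a real system would supply.

```python
from dataclasses import dataclass
from typing import Callable, List, Optional


@dataclass
class Grasp:
    contacts: tuple       # finger contact locations on the object mesh
    object_pose: tuple    # object pose relative to the palm


def plan_regrasp(start: Grasp,
                 goal: Grasp,
                 in_grasp_step: Callable[[Grasp, Grasp], Optional[Grasp]],
                 gait_step: Callable[[Grasp, Grasp], Optional[Grasp]],
                 is_goal: Callable[[Grasp, Grasp], bool],
                 max_iters: int = 50) -> Optional[List[Grasp]]:
    """Alternate in-grasp manipulation and finger gaiting toward the goal grasp."""
    plan = [start]
    current = start
    for _ in range(max_iters):
        if is_goal(current, goal):
            return plan
        # 1) Re-pose the object relative to the palm without breaking contacts.
        reposed = in_grasp_step(current, goal)
        if reposed is not None:
            current = reposed
            plan.append(current)
        # 2) Relocate a single finger to a new contact while the others hold.
        gaited = gait_step(current, goal)
        if gaited is not None:
            current = gaited
            plan.append(current)
        if reposed is None and gaited is None:
            return None  # neither sub-planner made progress
    return plan if is_goal(current, goal) else None
```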
We present a generalized grasping algorithm that uses point clouds (i.e. a group of points and their respective surface normals) to discover grasp pose solutions for multiple grasp types, executed by a mechanical gripper, in near real-time. The algorithm introduces two ideas: 1) a histogram of finger contact normals is used to represent a grasp shape to guide a gripper orientation search in a histogram of object(s) surface normals, and 2) voxel grid representations of gripper and object(s) are cross-correlated to match finger contact points, i.e. grasp size, to discover a grasp pose. Constraints, such as collisions with neighbouring objects, are optionally incorporated in the cross-correlation computation. We show via simulations and experiments that 1) grasp poses for three grasp types can be found in near real-time, 2) grasp pose solutions are consistent with respect to voxel resolution changes for both partial and complete point cloud scans, and 3) a planned grasp is executed with a mechanical gripper.
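The second idea above, cross-correlating voxel grids of the gripper and the object, can be illustrated with a short sketch. This is not the paper's implementation: the voxel size, grid shape, synthetic point cloud, and the two-patch finger-contact template are all assumptions, and the correlation simply scores how well the template's contact voxels overlap occupied object voxels at each offset.

```python
import numpy as np
from scipy.signal import correlate

VOXEL = 0.005  # assumed voxel edge length [m]


def voxelize(points: np.ndarray, origin: np.ndarray, shape: tuple) -> np.ndarray:
    """Binary occupancy grid of an (N, 3) point cloud on a fixed grid."""
    idx = np.floor((points - origin) / VOXEL).astype(int)
    keep = np.all((idx >= 0) & (idx < np.array(shape)), axis=1)
    grid = np.zeros(shape, dtype=np.float32)
    grid[tuple(idx[keep].T)] = 1.0
    return grid


def best_grasp_offset(object_grid: np.ndarray, gripper_template: np.ndarray):
    """Return the voxel offset where the gripper template overlaps the object most."""
    score = correlate(object_grid, gripper_template, mode="valid")
    return np.unravel_index(np.argmax(score), score.shape), score.max()


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # A fake 5 cm cube of points standing in for an object scan.
    cube = rng.uniform(0.02, 0.07, size=(2000, 3))
    obj = voxelize(cube, origin=np.zeros(3), shape=(32, 32, 32))
    # Assumed two-finger contact template: two parallel contact patches 4 voxels apart.
    template = np.zeros((5, 5, 5), dtype=np.float32)
    template[0, :, :] = 1.0
    template[4, :, :] = 1.0
    offset, score = best_grasp_offset(obj, template)
    print("best offset (voxels):", offset, "overlap score:", score)
```

Constraints such as collisions with neighbouring objects could, as the abstract notes, be folded into the same correlation by adding penalty voxels to the grids.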
Zengyi Qin, Kuan Fang, Yuke Zhu (2019)
We aim to develop an algorithm for robots to manipulate novel objects as tools for completing different task goals. An efficient and informative representation would facilitate the effectiveness and generalization of such algorithms. For this purpose, we present KETO, a framework of learning keypoint representations of tool-based manipulation. For each task, a set of task-specific keypoints is jointly predicted from 3D point clouds of the tool object by a deep neural network. These keypoints offer a concise and informative description of the object to determine grasps and subsequent manipulation actions. The model is learned from self-supervised robot interactions in the task environment without the need for explicit human annotations. We evaluate our framework in three manipulation tasks with tool use. Our model consistently outperforms state-of-the-art methods in terms of task success rates. Qualitative results of keypoint prediction and tool generation are shown to visualize the learned representations.
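For a sense of what "predicting task-specific keypoints from 3D point clouds with a deep neural network" can look like, the sketch below shows a PointNet-style regressor. It is an assumed architecture, not the KETO authors' network; the layer widths and the number of keypoints are placeholder choices.

```python
import torch
import torch.nn as nn


class KeypointPredictor(nn.Module):
    """Minimal PointNet-style sketch: per-point MLP, global max-pool, keypoint head."""

    def __init__(self, num_keypoints: int = 3):
        super().__init__()
        self.point_mlp = nn.Sequential(
            nn.Linear(3, 64), nn.ReLU(),
            nn.Linear(64, 128), nn.ReLU(),
            nn.Linear(128, 256), nn.ReLU(),
        )
        self.head = nn.Sequential(
            nn.Linear(256, 128), nn.ReLU(),
            nn.Linear(128, num_keypoints * 3),
        )
        self.num_keypoints = num_keypoints

    def forward(self, points: torch.Tensor) -> torch.Tensor:
        # points: (batch, N, 3) -> keypoints: (batch, K, 3)
        features = self.point_mlp(points)          # per-point features (batch, N, 256)
        global_feat = features.max(dim=1).values   # order-invariant pooling (batch, 256)
        return self.head(global_feat).view(-1, self.num_keypoints, 3)


if __name__ == "__main__":
    model = KeypointPredictor(num_keypoints=3)
    cloud = torch.rand(2, 1024, 3)   # two dummy tool point clouds
    print(model(cloud).shape)        # torch.Size([2, 3, 3])
```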
