
Design of a Multi-Modal End-Effector and Grasping System: How Integrated Design helped win the Amazon Robotics Challenge

Submitted by: Norton Kelly-Boxall
Publication date: 2017
Research field: Informatics Engineering
Paper language: English





We present the grasping system and design approach behind Cartman, the winning entrant in the 2017 Amazon Robotics Challenge. We investigate the design processes leading up to the final iteration of the system and describe the emergent solution by comparing it with key robotics design aspects. The system makes strong use of redundancy in design by implementing complementary tools: a suction gripper and a parallel gripper. This multi-modal end-effector is combined with three grasp synthesis algorithms to accommodate the range of objects provided by Amazon during the challenge. We provide a detailed system description and an evaluation of its performance before discussing the broader nature of the system with respect to the key aspects of robotic design initially proposed by the winners of the first Amazon Picking Challenge: modularity vs. integration, generality vs. assumptions, computation vs. embodiment, and planning vs. feedback. To capture the principal nature of our grasping system and the reason for its success, we propose an additional design aspect, 'precision vs. redundancy', to be considered alongside these. The full design of our robotic system, including the end-effector, is open sourced and available at http://juxi.net/projects/AmazonRoboticsChallenge/
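To make the idea of tool redundancy concrete, the following is a minimal Python sketch (not the authors' released code) of how scored grasp candidates from multiple synthesis algorithms might be arbitrated between a suction cup and a parallel gripper; the class names, scores, and threshold are illustrative assumptions only.

# Illustrative sketch: selecting between a suction cup and a parallel gripper
# from scored grasp candidates. Names and thresholds are assumptions.
from dataclasses import dataclass

@dataclass
class GraspCandidate:
    tool: str          # "suction" or "parallel"
    score: float       # confidence from one of the grasp synthesis algorithms
    pose: tuple        # (x, y, z, yaw) grasp pose in the robot frame

def select_grasp(candidates, min_score=0.5):
    """Pick the highest-scoring viable candidate; if one tool has no viable
    grasp, the other tool's candidates remain available (redundancy)."""
    viable = [c for c in candidates if c.score >= min_score]
    if not viable:
        return None  # e.g. trigger re-perception or defer the item
    return max(viable, key=lambda c: c.score)

# Example: two synthesis algorithms propose grasps for the same item.
candidates = [
    GraspCandidate("suction", 0.82, (0.40, 0.10, 0.05, 0.0)),
    GraspCandidate("parallel", 0.67, (0.41, 0.11, 0.04, 1.2)),
]
best = select_grasp(candidates)
print(best.tool if best else "no viable grasp")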




Read also

Objective: In this work we address limitations in state-of-the-art ultrasound robots by designing and integrating a novel soft robotic system for ultrasound imaging. It employs the inherent qualities of soft fluidic actuators to establish safe, adaptable interaction between ultrasound probe and patient. Methods: We acquire clinical data to determine the movement ranges and force levels required in prenatal foetal ultrasound imaging and design the soft robotic end-effector accordingly. We verify its mechanical characteristics, derive and validate a kinetostatic model and demonstrate controllability and imaging capabilities on an ultrasound phantom. Results: The soft robot exhibits the desired stiffness characteristics and is able to reach 100% of the required workspace when no external force is present, and 95% of the workspace when considering its compliance. The model can accurately predict the end-effector pose with a mean error of 1.18 +/- 0.29 mm in position and 0.92 +/- 0.47 deg in orientation. The derived controller is, with an average position error of 0.39 mm, able to track a target pose efficiently without and with externally applied loads. Ultrasound images acquired with the system are of equally good quality compared to a manual sonographer scan. Conclusion: The system is able to withstand loads commonly applied during foetal ultrasound scans and remains controllable with a motion range similar to manual scanning. Significance: The proposed soft robot presents a safe, cost-effective solution to offloading sonographers in day-to-day scanning routines. The design and modelling paradigms are greatly generalizable and particularly suitable for designing soft robots for physical interaction tasks.
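The reported accuracy figures (e.g. 1.18 +/- 0.29 mm position error) imply a per-sample comparison of predicted and measured end-effector poses; below is a minimal sketch, assuming only that position errors are summarised as mean and standard deviation of the Euclidean distance. The data values and function name are illustrative, not from the paper.

# Hedged sketch: mean +/- std of Euclidean position error between predicted
# and measured tip positions, as one might compute for model validation.
import numpy as np

def pose_error_stats(predicted_xyz, measured_xyz):
    """Return (mean, std) of per-sample Euclidean position error in mm."""
    predicted_xyz = np.asarray(predicted_xyz, dtype=float)
    measured_xyz = np.asarray(measured_xyz, dtype=float)
    errors = np.linalg.norm(predicted_xyz - measured_xyz, axis=1)
    return errors.mean(), errors.std()

# Synthetic example data in millimetres (illustrative only).
pred = [[10.0, 0.0, 50.0], [12.0, 1.0, 48.0]]
meas = [[10.8, 0.3, 50.5], [11.2, 1.9, 47.1]]
mean_err, std_err = pose_error_stats(pred, meas)
print(f"position error: {mean_err:.2f} +/- {std_err:.2f} mm")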
The Amazon Robotics Challenge enlisted sixteen teams to each design a pick-and-place robot for autonomous warehousing, addressing development in robotic vision and manipulation. This paper presents the design of our custom-built, cost-effective, Cartesian robot system Cartman, which won first place in the competition finals by stowing 14 (out of 16) and picking all 9 items in 27 minutes, scoring a total of 272 points. We highlight our experience-centred design methodology and key aspects of our system that contributed to our competitiveness. We believe these aspects are crucial to building robust and effective robotic systems.
This work provides an architecture that incorporates depth and tactile information to create rich and accurate 3D models useful for robotic manipulation tasks. This is accomplished through the use of a 3D convolutional neural network (CNN). Offline, the network is provided with both depth and tactile information and trained to predict the object's geometry, thus filling in regions of occlusion. At runtime, the network is provided a partial view of an object. Tactile information is acquired to augment the captured depth information. The network can then reason about the object's geometry by utilizing both the collected tactile and depth information. We demonstrate that even small amounts of additional tactile information can be incredibly helpful in reasoning about object geometry. This is particularly true when information from depth alone fails to produce an accurate geometric prediction. Our method is benchmarked against and outperforms other visual-tactile approaches to general geometric reasoning. We also provide experimental results comparing grasping success with our method.
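As a rough illustration of fusing depth and tactile data in a 3D CNN, the following is a hedged sketch; the layer sizes, grid resolution, and class name are assumptions for illustration, not the paper's actual network.

# Hedged sketch: a small 3D CNN that takes a two-channel voxel grid
# (depth occupancy, tactile contacts) and predicts completed occupancy.
import torch
import torch.nn as nn

class DepthTactileCompletion(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv3d(2, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv3d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv3d(32, 1, kernel_size=3, padding=1), nn.Sigmoid(),
        )

    def forward(self, voxels):
        # voxels: (batch, 2, D, H, W) -- channel 0 from depth, channel 1 from touch
        return self.net(voxels)

model = DepthTactileCompletion()
dummy = torch.zeros(1, 2, 40, 40, 40)   # empty grid, illustrative only
completed = model(dummy)                # (1, 1, 40, 40, 40) occupancy probabilities
print(completed.shape)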
This work reports on developing a deep learning-based contact estimator for legged robots that bypasses the need for physical contact sensors and takes multi-modal proprioceptive sensory data from joint encoders, kinematics, and an inertial measurement unit as input. Unlike vision-based state estimators, proprioceptive state estimators are agnostic to perceptually degraded situations such as dark or foggy scenes. For legged robots, reliable kinematics and contact data are necessary to develop a proprioceptive state estimator. While some robots are equipped with dedicated contact sensors or springs to detect contact, others are not, and the addition of such sensors is non-trivial without redesigning the hardware. The trained deep network can accurately estimate contacts on different terrains and robot gaits and is deployed along a contact-aided invariant extended Kalman filter to generate odometry trajectories. The filter performs comparably to a state-of-the-art visual SLAM system.
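For intuition, here is a minimal sketch of a learned contact estimator mapping proprioceptive features to per-leg contact probabilities; the MLP architecture, feature dimension, and threshold are assumptions for illustration and not the authors' released network.

# Hedged sketch: classify per-leg contact from a window of proprioceptive
# features (joint encoders, kinematics, IMU); outputs feed a contact-aided InEKF.
import torch
import torch.nn as nn

class ContactEstimator(nn.Module):
    def __init__(self, feature_dim=54, num_legs=4):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(feature_dim, 128), nn.ReLU(),
            nn.Linear(128, 64), nn.ReLU(),
            nn.Linear(64, num_legs),   # one logit per leg
        )

    def forward(self, features):
        return torch.sigmoid(self.net(features))

estimator = ContactEstimator()
features = torch.zeros(1, 54)          # placeholder sensor window (illustrative)
contact_probs = estimator(features)    # shape (1, 4), one probability per leg
in_contact = contact_probs > 0.5       # boolean contacts passed to the filter
print(in_contact)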
The Amazon Picking Challenge (APC), held alongside the International Conference on Robotics and Automation in May 2015 in Seattle, challenged roboticists from academia and industry to demonstrate fully automated solutions to the problem of picking objects from shelves in a warehouse fulfillment scenario. Packing density, object variability, speed, and reliability are the main complexities of the task. The picking challenge serves both as a motivation and an instrument to focus research efforts on a specific manipulation problem. In this document, we describe Team MIT's approach to the competition, including design considerations, contributions, and performance, and we compile the lessons learned. We also describe what we think are the main remaining challenges.