
Mobile Manipulator for Autonomous Localization, Grasping and Precise Placement of Construction Material in a Semi-structured Environment

Added by Petr Štibinger
Publication date: 2020
Language: English





Mobile manipulators have the potential to revolutionize modern agriculture, logistics and manufacturing. In this work, we present the design of a ground-based mobile manipulator for automated structure assembly. The proposed system is capable of autonomous localization, grasping, transportation and deployment of construction material in a semi-structured environment. Special effort was put into making the system invariant to lighting changes and not reliant on external positioning systems. The presented system is therefore self-contained and capable of operating in outdoor and indoor conditions alike. Finally, we present means to extend the perceptive radius of the vehicle by using it in cooperation with an autonomous drone, which provides aerial reconnaissance. The performance of the proposed system has been evaluated in a series of experiments conducted in real-world conditions.
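
To make the described pipeline concrete, the sketch below structures one pick-and-place cycle as a small state machine. It is a minimal illustration, not the authors' implementation: `robot`, `perception` and every method on them (`detect_brick`, `grasp`, `navigate_to`, `place`) are hypothetical interfaces.

```python
from enum import Enum, auto

class State(Enum):
    SEARCH = auto()
    GRASP = auto()
    TRANSPORT = auto()
    PLACE = auto()

def assembly_cycle(robot, perception, goal_pose):
    """One pick-and-place cycle; all interfaces are hypothetical."""
    state = State.SEARCH
    while True:
        if state == State.SEARCH:
            # Lighting-invariant detection, e.g. from depth/geometry
            # rather than color, so it works indoors and outdoors.
            brick = perception.detect_brick()
            if brick is not None:
                state = State.GRASP
        elif state == State.GRASP:
            if robot.grasp(brick.pose):
                state = State.TRANSPORT
            else:
                state = State.SEARCH   # retry after a failed grasp
        elif state == State.TRANSPORT:
            # Onboard localization only; no external positioning system.
            robot.navigate_to(goal_pose)
            state = State.PLACE
        elif state == State.PLACE:
            robot.place(goal_pose)
            return
```
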

Related Research

Path planning algorithms for unmanned aerial or ground vehicles, in many surveillance applications, rely on Global Positioning System (GPS) information for localization. However, disruption of GPS signals, whether intentional or otherwise, can render these plans and algorithms ineffective. This article addresses this issue by utilizing stationary landmarks to aid localization in such GPS-disrupted or GPS-denied environments. In particular, given the vehicle's path, we formulate a landmark-placement problem and present algorithms that place the minimum number of landmarks while satisfying the localization, sensing, and collision-avoidance constraints. The performance of such a placement is also evaluated via extensive simulations on ground robots.
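
Since minimum-landmark placement under coverage constraints is closely related to set cover, a natural baseline is a greedy heuristic. The sketch below is exactly that, a greedy approximation under assumed inputs, and not necessarily the article's algorithm; `covers` is a hypothetical predicate encoding the localization, sensing and line-of-sight constraints.

```python
def greedy_landmark_placement(path_points, candidate_sites, covers):
    """Greedy set-cover heuristic for landmark placement (a sketch).

    path_points     -- waypoints that must remain localizable
    candidate_sites -- possible landmark locations
    covers(site, p) -- True if a landmark at `site` satisfies the
                       localization/sensing constraints at waypoint p
    """
    uncovered = set(path_points)
    chosen = []
    while uncovered:
        # Pick the site covering the most still-uncovered waypoints.
        best = max(candidate_sites,
                   key=lambda s: sum(covers(s, p) for p in uncovered))
        newly = {p for p in uncovered if covers(best, p)}
        if not newly:
            raise ValueError("remaining waypoints cannot be covered")
        chosen.append(best)
        uncovered -= newly
    return chosen
```

Greedy set cover carries a logarithmic approximation guarantee, which is often an acceptable trade-off when exact integer-programming formulations are too slow to solve online.
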
In this paper, we present a planner that plans a sequence of base positions for a mobile manipulator to efficiently and robustly collect objects stored in distinct trays. We achieve high efficiency by exploring the common areas where a mobile manipulator can grasp objects stored in multiple trays simultaneously, and by moving the mobile manipulator to these common areas to reduce the time spent moving the mobile base. We ensure robustness by optimizing the base position for the best clearance against positioning uncertainty, so that the mobile manipulator can complete the task even if there is some deviation from the planned base positions. In addition, considering different styles of object placement in the trays, we analyze feasible schemes for dynamically updating the base positions based on either the remaining objects or the target objects to be picked in one round of the task. In the experiments, we evaluate our planner in various scenarios, including different object placements: (1) regularly placed toy objects; (2) randomly placed industrial parts; and different schemes for online execution: (1) globally static base positions; (2) dynamically updated base positions. The results demonstrate the efficiency, robustness and feasibility of the proposed method.
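
A minimal way to realize the "common areas" idea is to discretize candidate base cells and greedily pick the cell that serves the most remaining trays, breaking ties by clearance to the region boundary. The sketch below assumes that discretization; `tray_regions` and `clearance` are hypothetical inputs, and the paper's actual planner may differ.

```python
def select_base_positions(tray_regions, clearance):
    """Greedy base-position sequencing (a sketch, not the paper's planner).

    tray_regions    -- dict tray_id -> set of discretized base cells from
                       which all objects in that tray are graspable
    clearance(cell) -- distance from `cell` to its region boundary; larger
                       values tolerate more base-positioning error
    """
    remaining = set(tray_regions)
    plan = []
    while remaining:
        # All cells that serve at least one remaining tray.
        candidates = set().union(*(tray_regions[t] for t in remaining))
        # Prefer cells covering many trays; break ties by clearance.
        best = max(candidates,
                   key=lambda c: (sum(c in tray_regions[t] for t in remaining),
                                  clearance(c)))
        served = {t for t in remaining if best in tray_regions[t]}
        plan.append((best, served))
        remaining -= served
    return plan
```
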
Autonomous robotic grasping plays an important role in intelligent robotics. However, helping a robot grasp specific objects in object-stacking scenes remains an open problem, because autonomous robots face two main challenges: (1) knowing what and how to grasp is a comprehensive task; (2) it is hard to handle situations in which the target is hidden or covered by other objects. In this paper, we propose a multi-task convolutional neural network for autonomous robotic grasping, which helps the robot find the target, plan the grasp and finally grasp the target step by step in object-stacking scenes. We integrate vision-based robotic grasp detection and visual manipulation relationship reasoning in a single deep network and build an autonomous robotic grasping system around it. Experimental results demonstrate that, with our model, the Baxter robot can autonomously grasp the target with a success rate of 90.6%, 71.9% and 59.4% in object-cluttered scenes, familiar stacking scenes and complex stacking scenes, respectively.
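
One way to act on a predicted manipulation-relationship graph is an uncover-then-grasp loop: if the graph says something lies on top of the target, grasp an occluder first and re-perceive. The sketch below illustrates that loop with hypothetical `net`, `robot` and `camera` interfaces; it is not the paper's exact system.

```python
def grasp_target(robot, net, camera, target_id, max_steps=10):
    """Iteratively uncover and grasp a target in a stacking scene.
    `net.predict` is a hypothetical multi-task head returning grasp
    candidates plus a relation map: relations[a] = objects on top of a."""
    for _ in range(max_steps):
        image = camera.read()
        grasps, relations = net.predict(image)
        blockers = relations.get(target_id, [])
        if not blockers:
            # Target is exposed; execute its grasp directly.
            return robot.execute(grasps[target_id])
        # Remove one occluding object, set it aside, then re-perceive.
        robot.execute(grasps[blockers[0]])
        robot.drop_aside()
    return False
```
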
In this paper, we address the problem of efficiently and robustly collecting objects stored in different trays with a mobile manipulator. A resolution-complete method, based on a precomputed reachability database, is proposed to explore collision-free inverse kinematics (IK) solutions, from which a resolution-complete set of feasible base positions can be determined. This method approximates a set of representative IK solutions that are especially helpful when IK solving and collision checking are treated separately. For real-world applications, we take the base-positioning uncertainty into account and plan a sequence of base positions that reduces the number of base movements needed to collect the target objects. The base sequence is robust in that the mobile manipulator is able to complete the part-supply task even if there is some deviation from the planned base positions. Our experiments demonstrate both the efficiency of the method compared to a regular base sequence and its feasibility in real-world applications.
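
A reachability database of this kind can be as simple as a boolean grid over object positions expressed in the robot's base frame, filled offline by sampling collision-free IK solutions. The sketch below shows such a lookup, ignoring base orientation for brevity; the class layout and names are illustrative, not the paper's data structure.

```python
import numpy as np

class ReachabilityDB:
    """Precomputed reachability sketch: a boolean grid over base-frame
    object positions marking where collision-free IK was found offline."""
    def __init__(self, grid, origin, resolution):
        self.grid = grid                        # bool array, indexed [ix, iy]
        self.origin = np.asarray(origin, float) # grid origin in base frame
        self.res = float(resolution)            # cell size in meters

    def reachable(self, obj_xy, base_xy):
        # Express the object in the base frame, then discretize.
        rel = np.asarray(obj_xy, float) - np.asarray(base_xy, float)
        idx = np.floor((rel - self.origin) / self.res).astype(int)
        if np.any(idx < 0) or np.any(idx >= self.grid.shape):
            return False
        return bool(self.grid[tuple(idx)])

def feasible_base_cells(db, objects, candidate_cells):
    """Base cells from which every listed object is reachable."""
    return [c for c in candidate_cells
            if all(db.reachable(o, c) for o in objects)]
```
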
Developing personal robots that can perform a diverse range of manipulation tasks in unstructured environments necessitates solving several challenges for robotic grasping systems. We take a step towards this broader goal by presenting the first RL-based system, to our knowledge, for a mobile manipulator that can (a) achieve targeted grasping generalizing to unseen target objects, (b) learn complex grasping strategies for cluttered scenes with occluded objects, and (c) perform active vision through its movable wrist camera to better locate objects. The system is informed of the desired target object in the form of a single, arbitrary-pose RGB image of that object, enabling the system to generalize to unseen objects without retraining. To achieve such a system, we combine several advances in deep reinforcement learning and present a large-scale distributed training system using synchronous SGD that seamlessly scales to multi-node, multi-GPU infrastructure to make rapid prototyping easier. We train and evaluate our system in a simulated environment, identify key components for improving performance, analyze its behaviors, and transfer to a real-world setup.
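
Synchronous multi-GPU SGD of the kind described here can be expressed with PyTorch's DistributedDataParallel, which averages gradients across workers during the backward pass. The sketch below is a generic stand-in, not the authors' training system: `make_model` and `make_batches` are hypothetical, and the MSE loss is a placeholder for the actual RL objective.

```python
import torch
import torch.distributed as dist
import torch.nn.functional as F
from torch.nn.parallel import DistributedDataParallel as DDP

def train(rank, world_size, make_model, make_batches, steps=1000):
    """Synchronous data-parallel SGD: each worker computes gradients on
    its own batch, and DDP all-reduces (averages) them in backward(),
    so all replicas take the same update every step."""
    # Assumes MASTER_ADDR / MASTER_PORT are set in the environment.
    dist.init_process_group("nccl", rank=rank, world_size=world_size)
    device = torch.device(f"cuda:{rank}")
    model = DDP(make_model().to(device), device_ids=[rank])
    opt = torch.optim.SGD(model.parameters(), lr=1e-3, momentum=0.9)
    batches = make_batches(rank)   # e.g. sampled from a replay buffer
    for _ in range(steps):
        obs, target_img, q_target = next(batches)
        # The network is conditioned on an RGB image of the desired
        # target object, as the abstract above describes.
        q_pred = model(obs.to(device), target_img.to(device))
        loss = F.mse_loss(q_pred, q_target.to(device))  # stand-in RL loss
        opt.zero_grad()
        loss.backward()            # gradients synchronized here
        opt.step()
    dist.destroy_process_group()
```
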
