
NeRP: Neural Rearrangement Planning for Unknown Objects

Added by Ahmed Qureshi
Publication date: 2021
Language: English





Robots will be expected to manipulate a wide variety of objects in complex and arbitrary ways as they become more widely used in human environments. As such, the rearrangement of objects has been noted as an important benchmark for AI capabilities in recent years. We propose NeRP (Neural Rearrangement Planning), a deep-learning-based approach for multi-step object rearrangement planning that works with never-before-seen objects, is trained on simulation data, and generalizes to the real world. We compare NeRP to several naive and model-based baselines, demonstrating that our approach is measurably better and can efficiently arrange unseen objects in fewer steps and with less planning time. Finally, we demonstrate it on several challenging rearrangement problems in the real world.
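As a rough illustration of the multi-step planning loop described above, the sketch below greedily applies predicted pick-and-place actions until every object reaches its goal pose. The `propose_action` model, the array-based scene representation, and the step budget are assumptions made for illustration, not NeRP's actual architecture.

```python
# Minimal sketch of a multi-step rearrangement planning loop, assuming a
# hypothetical learned model `propose_action(scene, goal)` that returns
# (object_id, target_pose, score). Not the authors' implementation.
import numpy as np

def plan_rearrangement(scene, goal, propose_action, max_steps=20, tol=0.01):
    """Greedily apply predicted pick-and-place actions until the scene
    matches the goal arrangement or the step budget is exhausted.
    `scene` and `goal` are (N, 3) arrays of object positions."""
    plan = []
    for _ in range(max_steps):
        error = np.linalg.norm(scene - goal, axis=1)
        if np.all(error < tol):
            break  # every object is within tolerance of its goal pose
        obj_id, target_pose, _ = propose_action(scene, goal)
        plan.append((obj_id, target_pose))
        scene = scene.copy()
        scene[obj_id] = target_pose  # simulate executing the pick-and-place
    return plan
```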

Related research

We present a strategy for designing and building very general robot manipulation systems involving the integration of a general-purpose task-and-motion planner with engineered and learned perception modules that estimate properties and affordances of unknown objects. Such systems are closed-loop policies that map from RGB images, depth images, and robot joint encoder measurements to robot joint position commands. We show that, following this strategy, a task-and-motion planner can be used to plan intelligent behaviors even in the absence of a priori knowledge regarding the set of manipulable objects, their geometries, and their affordances. We explore several different ways of implementing such perceptual modules for segmentation, property detection, shape estimation, and grasp generation. We show how these modules are integrated within the PDDLStream task-and-motion planning framework. Finally, we demonstrate that this strategy can enable a single system to perform a wide variety of real-world multi-step manipulation tasks, generalizing over a broad class of objects, object arrangements, and goals, without any prior knowledge of the environment and without re-training.
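A minimal sketch of how the perception modules described above could be chained ahead of the planner; the `segment`, `estimate_shape`, and `generate_grasps` callables are hypothetical stand-ins, and the PDDLStream integration itself is not shown.

```python
# Hedged sketch of a perception pipeline feeding a task-and-motion planner.
# The module interfaces here are assumptions, not the PDDLStream API.
from dataclasses import dataclass

@dataclass
class ObjectEstimate:
    mask: object    # instance segmentation mask
    shape: object   # estimated shape (e.g. a completed point cloud or mesh)
    grasps: list    # candidate grasp poses for this object

def perceive(rgb, depth, segment, estimate_shape, generate_grasps):
    """Chain the perception modules the abstract names (segmentation, shape
    estimation, grasp generation), producing one estimate per detected object
    for the planner to consume."""
    estimates = []
    for mask in segment(rgb, depth):
        shape = estimate_shape(depth, mask)
        estimates.append(ObjectEstimate(mask, shape, generate_grasps(shape)))
    return estimates
```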
Robotic manipulation of unknown objects is an important field of research, with practical applications in many real-world settings where robots need to interact with an unknown environment. We tackle the problem of reactive grasping by proposing a method for unknown object tracking, grasp point sampling, and dynamic trajectory planning. Our object tracking method combines Siamese networks with an Iterative Closest Point (ICP) approach for point cloud registration into a method for 6-DoF unknown object tracking; it requires no further training and is robust to noise and occlusion. We propose a robotic manipulation system that is able to grasp a wide variety of previously unseen objects and is robust against object perturbations and inferior grasping points.
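The ICP half of such a tracking pipeline can be illustrated with standard point-cloud registration. The sketch below uses Open3D's point-to-point ICP to update a 6-DoF pose between frames; the Siamese re-detection stage that the method combines with ICP is omitted here, and the frame-to-frame tracking logic is an assumption for illustration.

```python
# Sketch: update an object's 6-DoF pose by registering its point cloud from
# the previous frame to the current frame with point-to-point ICP (Open3D).
import numpy as np
import open3d as o3d

def track_pose(prev_points, curr_points, prev_pose, max_dist=0.02):
    """Return an updated 4x4 object pose from (N, 3) point arrays of the
    previous and current frames, composing the ICP increment with prev_pose."""
    source = o3d.geometry.PointCloud(o3d.utility.Vector3dVector(prev_points))
    target = o3d.geometry.PointCloud(o3d.utility.Vector3dVector(curr_points))
    result = o3d.pipelines.registration.registration_icp(
        source, target, max_dist, np.eye(4),
        o3d.pipelines.registration.TransformationEstimationPointToPoint())
    return result.transformation @ prev_pose  # compose the incremental motion
```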
Robotic assembly planning has the potential to profoundly change how buildings are designed and created. It enables architects to explicitly account for the assembly process during the design phase, and it enables efficient building methods that profit from the robots' different capabilities. Previous work has addressed planning of robot assembly sequences and identifying the feasibility of architectural designs. This paper extends previous work by enabling assembly planning with large, heterogeneous teams of robots. We present a scalable planning system that parallelizes complex task and motion planning problems by iteratively solving smaller sub-problems. Combining optimization methods for manipulation constraints with a sampling-based bi-directional space-time path planner enables us to plan cooperative multi-robot manipulation with unknown arrival times. Our solver can thus complete sub-problems and tasks with differing timescales and synchronize them effectively. We demonstrate the approach on multiple case studies and on two long-horizon building assembly scenarios, showing the robustness and scalability of our algorithm.
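The iterative decomposition idea can be sketched as follows: split the assembly sequence into small sub-problems, hand each to a solver, and track per-robot availability so sub-plans with different durations stay synchronized. Here `solve_subproblem` is a hypothetical placeholder for the task-and-motion solver, and the fixed chunk size is an illustrative simplification.

```python
# Toy sketch of decomposing a long assembly sequence into sub-problems and
# keeping a shared clock so sub-plans of different lengths stay synchronized.
def plan_assembly(tasks, robots, solve_subproblem, chunk=5):
    schedule = []
    robot_free_at = {r: 0.0 for r in robots}        # per-robot availability
    for i in range(0, len(tasks), chunk):
        sub = tasks[i:i + chunk]
        # Assign the sub-problem to the robot that becomes free earliest.
        robot = min(robot_free_at, key=robot_free_at.get)
        start = robot_free_at[robot]
        trajectory, duration = solve_subproblem(sub, robot, start)
        schedule.append((robot, start, trajectory))
        robot_free_at[robot] = start + duration     # synchronize arrival times
    return schedule
```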
Motion planning and obstacle avoidance are key challenges in robotics applications. While previous work provides excellent solutions for known environments, sensor-based motion planning in new and dynamic environments remains difficult. In this work we address sensor-based motion planning from a learning perspective. Motivated by recent advances in visual recognition, we argue for the importance of learning appropriate representations for motion planning. We propose a new obstacle representation based on the PointNet architecture and train it jointly with policies for obstacle avoidance. We experimentally evaluate our approach for rigid-body motion planning in challenging environments and demonstrate significant improvements over the state of the art in terms of accuracy and efficiency.
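A compact sketch of the core idea, a PointNet-style obstacle encoder trained jointly with a policy head, is given below in PyTorch; the layer sizes and state/action dimensions are illustrative, not the paper's.

```python
# Sketch: permutation-invariant point-cloud encoder feeding a policy head.
import torch
import torch.nn as nn

class PointNetEncoder(nn.Module):
    def __init__(self, out_dim=256):
        super().__init__()
        self.mlp = nn.Sequential(nn.Linear(3, 64), nn.ReLU(),
                                 nn.Linear(64, 128), nn.ReLU(),
                                 nn.Linear(128, out_dim))

    def forward(self, points):               # points: (B, N, 3)
        feats = self.mlp(points)             # per-point features
        return feats.max(dim=1).values       # symmetric (max) pooling

class ObstacleAwarePolicy(nn.Module):
    def __init__(self, state_dim=7, action_dim=7):
        super().__init__()
        self.encoder = PointNetEncoder()
        self.policy = nn.Sequential(nn.Linear(256 + state_dim, 128), nn.ReLU(),
                                    nn.Linear(128, action_dim))

    def forward(self, points, robot_state):  # robot_state: (B, state_dim)
        obs = self.encoder(points)           # obstacle embedding
        return self.policy(torch.cat([obs, robot_state], dim=-1))
```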
This letter addresses the 3D coverage path planning (CPP) problem for terrain reconstruction of unknown, obstacle-rich environments. Due to sensing limitations, the proposed method, called CT-CPP, performs layered scanning of the 3D region to collect terrain data, with the traveling sequence optimized using the concept of a coverage tree (CT); a modified TSP-based tree traversal strategy is proposed for this purpose. The CT-CPP method is validated on a high-fidelity underwater simulator, and the results are compared against an existing terrain-following CPP method (TF-CPP). CT-CPP with the TSP optimizer yields significant improvements in trajectory length, energy consumption, and reconstruction error.
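To illustrate the traversal-ordering step, the toy sketch below orders coverage-tree nodes with a greedy nearest-neighbour heuristic standing in for the paper's TSP optimizer; the node representation is an assumption for illustration.

```python
# Toy sketch: order coverage-tree node centroids for traversal with a greedy
# nearest-neighbour tour (a stand-in for the TSP-based optimizer).
import numpy as np

def order_nodes(node_positions, start):
    """node_positions: (N, 3) array of node centroids; start: (3,) position.
    Returns a visiting order as a list of node indices."""
    remaining = list(range(len(node_positions)))
    tour, current = [], np.asarray(start, dtype=float)
    while remaining:
        dists = [np.linalg.norm(node_positions[i] - current) for i in remaining]
        nxt = remaining.pop(int(np.argmin(dists)))
        tour.append(nxt)
        current = node_positions[nxt]
    return tour
```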
