
A Task-Motion Planning Framework Using Iteratively Deepened AND/OR Graph Networks

Added by Antony Thomas
Publication date: 2021
Language: English





We present an approach for Task-Motion Planning (TMP) using Iteratively Deepened AND/OR Graph Networks (TMP-IDAN), which uses a novel AND/OR graph network-based abstraction for compactly representing task-level states and actions. While retrieving a target object from clutter, the number of object re-arrangements required to grasp the target is not known ahead of time. To address this challenge, in contrast to traditional AND/OR graph-based planners, we grow the AND/OR graph online until the target grasp is feasible, thereby obtaining a network of AND/OR graphs. The AND/OR graph network allows faster computations than traditional task planners. We validate our approach and evaluate its capabilities using a Baxter robot and a state-of-the-art robotics simulator in several challenging, non-trivial cluttered table-top scenarios. The experiments show that our approach readily scales to an increasing number of objects and different degrees of clutter.
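The online graph-growing loop the abstract describes can be sketched as a self-contained toy. The scene model, feasibility test, and re-arrangement policy below are illustrative assumptions, not the paper's actual TMP-IDAN abstraction:

```python
# Toy sketch of the iterative-deepening loop: grow a network of AND/OR
# graphs online until the target grasp becomes feasible. The scene model,
# feasibility test, and re-arrangement policy are placeholders.

from dataclasses import dataclass, field

@dataclass
class Scene:
    target: str
    blockers: list = field(default_factory=list)  # objects occluding the target

def grasp_feasible(scene):
    # Placeholder motion-level check: feasible once nothing occludes the target.
    return not scene.blockers

def plan_target_grasp(scene, max_depth=20):
    """Each iteration appends one AND/OR graph to the network; if the target
    grasp is infeasible, one obstructing object is re-arranged and the
    network is deepened on the next iteration."""
    network = []
    for depth in range(max_depth):
        graph = {"depth": depth, "actions": list(scene.blockers)}  # stand-in graph
        network.append(graph)
        if grasp_feasible(scene):
            return network, f"grasp({scene.target})"
        moved = scene.blockers.pop(0)      # re-arrange one occluding object
        print(f"depth {depth}: relocating {moved}")
    raise RuntimeError("no feasible grasp within the depth bound")

network, action = plan_target_grasp(Scene("mug", ["box", "can"]))
print(len(network), action)   # 3 grasp(mug)
```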



Related Research

Robotic planning problems in hybrid state and action spaces can be solved by integrated task and motion planners (TAMP) that handle the complex interaction between motion-level decisions and task-level plan feasibility. TAMP approaches rely on domain-specific symbolic operators to guide the task-level search, which makes planning efficient. In this work, we formalize and study the problem of operator learning for TAMP. Central to this study is the view that operators define a lossy abstraction of the transition model of a domain. We then propose a bottom-up relational learning method for operator learning and show how the learned operators can be used for planning in a TAMP system. Experimentally, we provide results in three domains, including long-horizon robotic planning tasks. We find that our approach substantially outperforms several baselines, including three graph neural network-based model-free approaches from the recent literature. Video: https://youtu.be/iVfpX9BpBRo Code: https://git.io/JCT0g
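As a rough illustration of the bottom-up idea, the toy below learns ground (non-lifted) operators from observed transitions by intersecting pre-states for preconditions and common add/delete sets for effects. The paper's relational method lifts these to parameterized operators, which this sketch does not do:

```python
# Toy bottom-up operator learning over ground atoms: per action, the
# precondition is the intersection of observed pre-states; the effects
# are the add/delete sets common to all observed transitions.

from collections import defaultdict

def learn_operators(transitions):
    """transitions: iterable of (pre_state, action, post_state),
    where states are frozensets of ground atoms."""
    data = defaultdict(list)
    for pre, act, post in transitions:
        data[act].append((pre, post))
    operators = {}
    for act, samples in data.items():
        pre_common = frozenset.intersection(*(pre for pre, _ in samples))
        adds = frozenset.intersection(*(post - pre for pre, post in samples))
        dels = frozenset.intersection(*(pre - post for pre, post in samples))
        operators[act] = {"pre": pre_common, "add": adds, "del": dels}
    return operators

demo = [
    (frozenset({"clear(b)", "handempty"}), "pick(b)",
     frozenset({"clear(b)", "holding(b)"})),
    (frozenset({"clear(b)", "handempty", "on(b,t)"}), "pick(b)",
     frozenset({"clear(b)", "holding(b)", "on(b,t)"})),
]
print(learn_operators(demo)["pick(b)"])
# pre: {clear(b), handempty}  add: {holding(b)}  del: {handempty}
```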
We present an integrated Task-Motion Planning (TMP) framework for navigation in large-scale environments. Of late, TMP for manipulation has attracted significant interest, resulting in a proliferation of different approaches. In contrast, TMP for navigation has received considerably less attention. Autonomous robots operating in real-world complex scenarios require planning in the discrete (task) space and the continuous (motion) space. In knowledge-intensive domains, on the one hand, a robot has to reason at the highest level, for example about the objects to procure and the regions to navigate to in order to acquire them; on the other hand, the feasibility of the respective navigation tasks has to be checked at the execution level. This presents a need for motion-planning-aware task planners. In this paper, we discuss a probabilistically complete approach that leverages this task-motion interaction for navigating in large knowledge-intensive domains, returning a plan that is optimal at the task level. The framework is intended for motion planning under motion and sensing uncertainty, which is formally known as belief space planning. The underlying methodology is validated in simulation in an office environment, and its scalability is tested in the larger Willow Garage world. A comparison with the work closest to our approach is also provided. We also demonstrate the adaptability of our approach by considering a building-floor navigation domain. Finally, we discuss the limitations of our approach and put forward suggestions for improvements and future work.
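A minimal sketch of the task-motion interaction this calls for, under assumed interfaces (candidate task plans ranked by cost, and a belief-space feasibility oracle); it shows only the feedback loop the abstract motivates, not the paper's algorithm:

```python
# Schematic task-motion interaction loop for navigation: candidate task
# plans are tried in cost order; each plan's motion-level (belief-space)
# feasibility is checked before it is accepted. Names are illustrative.

import heapq

def tmp_navigate(candidate_plans, belief_cost):
    """candidate_plans: iterable of (task_cost, plan) pairs;
    belief_cost(plan) -> motion cost under uncertainty, or None if infeasible.
    Returns the cheapest task plan whose motion plan is feasible."""
    queue = list(candidate_plans)
    heapq.heapify(queue)                      # cheapest task plan first
    while queue:
        task_cost, plan = heapq.heappop(queue)
        motion = belief_cost(plan)            # belief-space feasibility check
        if motion is not None:
            return plan, task_cost + motion
    return None, float("inf")

# Toy usage: the cheapest task plan is motion-infeasible, so the next is taken.
plans = [(3.0, ("goto A", "pick")), (5.0, ("goto B", "pick"))]
costs = {("goto A", "pick"): None, ("goto B", "pick"): 1.5}
print(tmp_navigate(plans, lambda p: costs[p]))   # (('goto B', 'pick'), 6.5)
```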
We present an integrated Task-Motion Planning (TMP) framework for navigation in large-scale environments. Autonomous robots operating in real-world complex scenarios require planning in the discrete (task) space and the continuous (motion) space. In knowledge-intensive domains, on the one hand, a robot has to reason at the highest level, for example about the regions to navigate to; on the other hand, the feasibility of the respective navigation tasks has to be checked at the execution level. This presents a need for motion-planning-aware task planners. We discuss a probabilistically complete approach that leverages this task-motion interaction for navigating in indoor domains, returning a plan that is optimal at the task level. Furthermore, our framework is intended for motion planning under motion and sensing uncertainty, which is formally known as belief space planning. The underlying methodology is validated with a simulated office environment in Gazebo. In addition, we discuss the limitations and provide suggestions for improvements and future work.
We present an integrated Task-Motion Planning framework for robot navigation in belief space. Autonomous robots operating in real-world complex scenarios require planning in the discrete (task) space and the continuous (motion) space. To this end, we propose a framework for integrating belief space reasoning within a hybrid task planner. The expressive power of PDDL+, combined with heuristic-driven semantic attachments, computes the propagated and posterior belief estimates while planning. The underlying methodology for the development of the combined hybrid planner is discussed, providing suggestions for improvements and future work. Furthermore, we validate key aspects of our approach using a realistic scenario in simulation.
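To illustrate what a semantic attachment might compute here, the toy below propagates a 1-D variance for a "move" action and applies a Kalman-style posterior update when a landmark is observed. The noise models and numbers are assumptions for illustration, not the paper's:

```python
# Sketch of the semantic-attachment idea: a "move" action's numeric effect
# on localization uncertainty is computed by an external belief-update
# function rather than a fixed PDDL+ effect. 1-D Gaussian belief, toy noise.

def propagate_belief(variance, distance, q=0.01):
    # Prediction: motion noise grows with distance travelled (assumed model).
    return variance + q * distance

def measurement_update(variance, r=0.05):
    # Posterior variance after fusing one landmark observation (assumed model).
    return (variance * r) / (variance + r)

def move_with_attachment(variance, distance, sees_landmark):
    """Attachment evaluated by the planner when expanding 'move': returns
    the successor belief (here, just a variance) for that edge."""
    variance = propagate_belief(variance, distance)
    if sees_landmark:
        variance = measurement_update(variance)
    return variance

# Toy usage: a long blind move inflates uncertainty; a landmark bounds it.
v = 0.02
v = move_with_attachment(v, 5.0, sees_landmark=False)
print(round(v, 4))   # 0.07
v = move_with_attachment(v, 2.0, sees_landmark=True)
print(round(v, 4))   # posterior shrinks after the landmark observation
```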
Autonomous robots operating in large knowledge-intensive domains require planning in the discrete (task) space and the continuous (motion) space. In knowledge-intensive domains, on the one hand, robots have to reason at the highest level, for example about the regions to navigate to or the objects to be picked up and their properties; on the other hand, the feasibility of the respective navigation tasks has to be checked at the controller execution level. Moreover, employing multiple robots offers enhanced performance capabilities over a single robot performing the same task. To this end, we present an integrated multi-robot task-motion planning framework for navigation in knowledge-intensive domains. In particular, we consider a distributed multi-robot setting incorporating mutual observations between the robots. The framework is intended for motion planning under motion and sensing uncertainty, which is formally known as belief space planning. The underlying methodology and its limitations are discussed, providing suggestions for improvements and future work. We validate key aspects of our approach in simulation.
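The coupling introduced by mutual observations can be illustrated with a toy joint Gaussian belief over two robots' 1-D positions, updated by a relative measurement via a standard Kalman step; the measurement model and numbers are illustrative assumptions:

```python
# Toy mutual-observation fusion: a relative measurement z = x2 - x1
# updates the joint Gaussian belief over both robots, correlating them
# and shrinking the total uncertainty.

import numpy as np

def fuse_mutual_observation(mean, cov, z, r=0.05):
    """mean: joint state [x1, x2]; cov: 2x2 covariance;
    z: measured relative position x2 - x1 with noise variance r."""
    H = np.array([[-1.0, 1.0]])                   # z = x2 - x1
    S = H @ cov @ H.T + r                         # innovation covariance
    K = cov @ H.T / S                             # Kalman gain (2x1)
    mean = mean + (K * (z - H @ mean)).ravel()
    cov = (np.eye(2) - K @ H) @ cov
    return mean, cov

mean = np.array([0.0, 5.0])
cov = np.diag([0.5, 0.5])
mean, cov = fuse_mutual_observation(mean, cov, z=4.6)
print(mean.round(3), np.trace(cov).round(3))      # total uncertainty shrinks
```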
