
Learning Implicit Sampling Distributions for Motion Planning

Added by Clark Zhang
Publication date: 2018
Language: English





Sampling-based motion planners have experienced much success due to their ability to efficiently and evenly explore the state space. However, for many tasks it may be more efficient not to explore the state space uniformly, especially when prior information about its structure is available. Previous methods have attempted to modify the sampling distribution using hand-selected heuristics that work well for specific environments but not universally. In this paper, a policy-search based method is presented as an adaptive way to learn implicit sampling distributions for different environments. It utilizes information from past searches in similar environments to generate better distributions in novel environments, thus reducing overall computational cost. Our method can be incorporated with a variety of sampling-based planners to improve performance. Our approach is validated on a number of tasks, including a 7-DOF robot arm, showing a marked reduction in the number of collision checks and the number of nodes expanded compared with baseline methods.
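The abstract does not spell out how a learned distribution is wired into a planner, so the following is only a rough sketch under assumed names: a hypothetical learned_sampler stands in for whatever model the policy-search procedure produces, and the planner's uniform sample() call is replaced by a mixture of learned and uniform draws so that coverage of the state space is preserved.

```python
import numpy as np

# Illustrative sketch only: bias a sampling-based planner by mixing a learned
# proposal with uniform sampling. "learned_sampler" is a placeholder, not the
# authors' implementation.

rng = np.random.default_rng(0)
LOWER, UPPER = np.zeros(7), np.ones(7)   # assumed normalized bounds for a 7-DOF arm

def uniform_sample():
    return rng.uniform(LOWER, UPPER)

def learned_sampler():
    # Placeholder model: a Gaussian centered on a region assumed to be promising.
    mean = np.full(7, 0.6)
    return np.clip(rng.normal(mean, 0.05), LOWER, UPPER)

def biased_sample(p_learned=0.7):
    """Draw from the learned proposal with probability p_learned,
    otherwise fall back to uniform sampling so exploration is preserved."""
    if rng.random() < p_learned:
        return learned_sampler()
    return uniform_sample()

# A planner such as RRT would call biased_sample() wherever it previously
# drew uniform samples.
samples = np.array([biased_sample() for _ in range(5)])
print(samples.round(3))
```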

Related research

A defining feature of sampling-based motion planning is the reliance on an implicit representation of the state space, which is enabled by a set of probing samples. Traditionally, these samples are drawn either probabilistically or deterministically to uniformly cover the state space. Yet, the motion of many robotic systems is often restricted to small regions of the state space, due to, for example, differential constraints or collision-avoidance constraints. To accelerate the planning process, it is thus desirable to devise non-uniform sampling strategies that favor sampling in those regions where an optimal solution might lie. This paper proposes a methodology for non-uniform sampling, whereby a sampling distribution is learned from demonstrations, and then used to bias sampling. The sampling distribution is computed through a conditional variational autoencoder, allowing sample generation from the latent space conditioned on the specific planning problem. This methodology is general, can be used in combination with any sampling-based planner, and can effectively exploit the underlying structure of a planning problem while maintaining the theoretical guarantees of sampling-based approaches. Specifically, on several planning problems, the proposed methodology is shown to effectively learn representations for the relevant regions of the state space, resulting in an order of magnitude improvement in terms of success rate and convergence to the optimal cost.
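As a minimal sketch of the sample-generation step described above (assuming PyTorch, made-up layer sizes, and a generic problem encoding rather than the paper's actual conditioning features), a conditional decoder can map latent vectors z ~ N(0, I) plus the problem encoding to configuration-space samples:

```python
import torch
import torch.nn as nn

# Rough sketch of CVAE-based sample generation; sizes and layers are assumptions.
LATENT_DIM, COND_DIM, STATE_DIM = 4, 16, 7

decoder = nn.Sequential(
    nn.Linear(LATENT_DIM + COND_DIM, 128),
    nn.ReLU(),
    nn.Linear(128, 128),
    nn.ReLU(),
    nn.Linear(128, STATE_DIM),   # outputs a configuration-space sample
)

def sample_states(condition: torch.Tensor, n: int) -> torch.Tensor:
    """Draw n biased samples for one planning problem by decoding
    latent vectors z ~ N(0, I) conditioned on the problem encoding."""
    z = torch.randn(n, LATENT_DIM)
    cond = condition.expand(n, COND_DIM)
    with torch.no_grad():
        return decoder(torch.cat([z, cond], dim=1))

# Example: even an untrained decoder shows the data flow end to end.
problem_encoding = torch.zeros(1, COND_DIM)
print(sample_states(problem_encoding, 3))
```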
Earlier work has shown that reusing experience from prior motion planning problems can improve the efficiency of similar, future motion planning queries. However, for robots with many degrees of freedom, these methods exhibit poor generalization across different environments and often require large datasets that are impractical to gather. We present SPARK and FLAME, two experience-based frameworks for sampling-based planning applicable to complex manipulators in 3D environments. Both combine samplers associated with features from a workspace decomposition into a global biased sampling distribution. SPARK decomposes the environment based on exact geometry, while FLAME is more general and uses an octree-based decomposition obtained from sensor data. We demonstrate the effectiveness of SPARK and FLAME on a Fetch robot tasked with challenging pick-and-place manipulation problems. Our approaches can be trained incrementally and significantly improve performance with only a handful of examples, generalizing better over diverse tasks and environments compared to prior approaches.
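A rough illustration of the "global biased sampling distribution" idea, with made-up region names, weights, and Gaussian local samplers rather than SPARK's or FLAME's actual decompositions: each workspace region contributes a local sampler, and regions relevant to the query are weighted more heavily in a mixture.

```python
import numpy as np

# Hypothetical sketch: per-region samplers combined into one biased distribution.
rng = np.random.default_rng(1)

regions = {
    "shelf":     {"weight": 3.0, "mean": np.array([0.2, 0.8, 0.5, 0.0, 0.3, 0.1, 0.0])},
    "table":     {"weight": 1.0, "mean": np.array([0.5, 0.4, 0.2, 0.1, 0.0, 0.2, 0.1])},
    "freespace": {"weight": 0.5, "mean": np.full(7, 0.5)},
}

def global_biased_sample():
    names = list(regions)
    weights = np.array([regions[n]["weight"] for n in names])
    probs = weights / weights.sum()
    chosen = regions[rng.choice(names, p=probs)]
    # Local sampler: a Gaussian around the region's representative configuration.
    return rng.normal(chosen["mean"], 0.05)

print(np.array([global_biased_sample() for _ in range(3)]).round(3))
```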
This paper studies the problem of control strategy synthesis for dynamical systems with differential constraints to fulfill a given reachability goal while satisfying a set of safety rules. Particular attention is devoted to goals that become feasible only if a subset of the safety rules is violated. The proposed algorithm computes a control law that minimizes the level of unsafety while guaranteeing that the desired goal is reached. This problem is motivated by an autonomous car navigating an urban environment while following rules of the road, such as "always travel in the right lane" and "do not change lanes frequently". Ideas behind sampling-based motion planning algorithms, such as Probabilistic Road Maps (PRMs) and Rapidly-exploring Random Trees (RRTs), are employed to incrementally construct a finite concretization of the dynamics as a durational Kripke structure. In conjunction with this, a weighted finite automaton that captures the safety rules is used to find an optimal trajectory that minimizes the violation of safety rules. We prove that the proposed algorithm guarantees asymptotic optimality, i.e., almost-sure convergence to optimal solutions. We present results of simulation experiments and an implementation on an autonomous urban mobility-on-demand system.
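A minimal sketch of the minimum-violation objective, with invented rule names and weights: each safety rule carries a weight, a trajectory is summarized by which rules it violates and for how long, and candidates are compared first on weighted violation and then on path cost.

```python
# Illustrative only; rule names, weights, and trajectory format are assumptions.
RULE_WEIGHTS = {"wrong_lane": 10.0, "lane_change": 1.0}

def unsafety(violations):
    """violations: list of (rule_name, duration) pairs along the trajectory."""
    return sum(RULE_WEIGHTS[rule] * dur for rule, dur in violations)

def better(traj_a, traj_b):
    """Prefer lower weighted violation; break ties by path cost."""
    key = lambda t: (unsafety(t["violations"]), t["cost"])
    return traj_a if key(traj_a) <= key(traj_b) else traj_b

legal_detour = {"cost": 42.0, "violations": []}
short_but_unsafe = {"cost": 20.0, "violations": [("wrong_lane", 3.0)]}
print(better(legal_detour, short_but_unsafe)["cost"])  # 42.0: safety dominates cost
```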
We address the problem of planning robot motions in constrained configuration spaces where the constraints change throughout the motion. The problem is formulated as a fixed sequence of intersecting manifolds, which the robot needs to traverse in order to solve the task. We specify a class of sequential motion planning problems that fulfill a particular property of how the free configuration space changes when transitioning between manifolds. For this problem class, we develop the algorithm Planning on Sequenced Manifolds (PSM*), which searches for optimal intersection points between manifolds by using RRT* in an inner loop with a novel steering strategy. We provide a theoretical analysis of PSM*'s probabilistic completeness and asymptotic optimality. Further, we evaluate its planning performance on multi-robot object transportation tasks. Video: https://youtu.be/Q8kbILTRxfU Code: https://github.com/etpr/sequential-manifold-planning
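The outer loop implied by this formulation can be sketched as follows (a toy skeleton, not the authors' implementation: plan_on_manifold is a stub standing in for the RRT*-based inner loop with its steering strategy, and the manifolds are described by hand-specified intersection points).

```python
# Toy skeleton: traverse a fixed sequence of manifolds, where each stage plans
# from the previous intersection point to a point on the next intersection.
def plan_on_manifold(start, manifold, next_manifold):
    """Stub inner planner: returns (path, intersection_point) or None.
    A real implementation would grow an RRT* restricted to `manifold` and
    steer toward configurations that also satisfy `next_manifold`."""
    goal = (manifold["intersection_with"].get(next_manifold["name"])
            if next_manifold else manifold.get("goal"))
    if goal is None:
        return None
    return [start, goal], goal

def plan_sequence(start, manifolds):
    path, current = [], start
    for i, m in enumerate(manifolds):
        nxt = manifolds[i + 1] if i + 1 < len(manifolds) else None
        result = plan_on_manifold(current, m, nxt)
        if result is None:
            return None
        segment, current = result
        path += segment if not path else segment[1:]
    return path

manifolds = [
    {"name": "grasp", "intersection_with": {"carry": (0.3, 0.3)}},
    {"name": "carry", "intersection_with": {}, "goal": (0.9, 0.1)},
]
print(plan_sequence((0.0, 0.0), manifolds))
```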
Motion planning with constraints is an important part of many real-world robotic systems. In this work, we study manifold learning methods that learn such constraints from data. We explore two methods for learning implicit constraint manifolds: Variational Autoencoders (VAE) and a new method, the Equality Constraint Manifold Neural Network (ECoMaNN). With the aim of incorporating learned constraints into a sampling-based motion planning framework, we evaluate the approaches on their ability to learn representations of constraints from various datasets and on the quality of paths produced during planning.
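As a hedged sketch of how a learned equality-constraint model might be used by a sampling-based planner (the network architecture, training, and projection schedule here are assumptions, not ECoMaNN's actual design): an MLP h(q) is trained so that h(q) ≈ 0 on the constraint manifold, and random samples are projected onto it by gradient descent on ||h(q)||².

```python
import torch
import torch.nn as nn

# Assumed architecture and projection schedule, for illustration only.
STATE_DIM = 7
h = nn.Sequential(nn.Linear(STATE_DIM, 64), nn.Tanh(), nn.Linear(64, 1))

def project_to_manifold(q: torch.Tensor, steps: int = 50, lr: float = 0.1) -> torch.Tensor:
    """Push a random configuration toward the learned manifold by minimizing ||h(q)||^2."""
    q = q.clone().requires_grad_(True)
    for _ in range(steps):
        loss = h(q).pow(2).sum()
        grad, = torch.autograd.grad(loss, q)
        q = (q - lr * grad).detach().requires_grad_(True)
    return q.detach()

q_random = torch.rand(STATE_DIM)
q_proj = project_to_manifold(q_random)
print(float(h(q_proj)))   # constraint value after projection (near a local minimum)
```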
