For a nonlinear system (e.g. a robot) whose continuous state space trajectories are constrained by a linear temporal logic specification, the synthesis of a low-level controller for mission execution often results in a non-convex optimization problem. We devise a new algorithm to solve this type of non-convex problem by formulating a rapidly-exploring random tree of barrier pairs, with each barrier pair composed of a quadratic barrier function and a full state feedback controller. The proposed method employs a rapidly-exploring random tree to deal with the non-convex constraints and uses barrier pairs to fulfill the local convex constraints. As such, the method solves control problems fulfilling the required transitions of an automaton in order to satisfy given linear temporal logic constraints. At the same time, it synthesizes locally optimal controllers to transition between the regions corresponding to the alphabet of the automaton. We demonstrate this new algorithm on a simulation of a two-link manipulator robot.
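As an illustration only (not the paper's construction), the following minimal sketch builds one such barrier pair for a linearized system: a quadratic barrier b(x) = level - x'Px whose super-level set {b(x) >= 0} is an invariant ellipsoid under the full state feedback u = -Kx obtained from an LQR design. The function name barrier_pair, the level parameter, and the double-integrator example are assumptions of the sketch.

```python
import numpy as np
from scipy.linalg import solve_continuous_are

def barrier_pair(A, B, Q, R, level=1.0):
    """Build a (barrier, controller) pair for the linear system dx/dt = A x + B u.

    The barrier is the quadratic function b(x) = level - x' P x; its
    zero super-level set is an invariant ellipsoid under u = -K x.
    """
    P = solve_continuous_are(A, B, Q, R)    # Riccati solution
    K = np.linalg.solve(R, B.T @ P)         # LQR gain
    barrier = lambda x: level - x @ P @ x   # b(x) >= 0 inside the ellipsoid
    controller = lambda x: -K @ x           # full state feedback
    return barrier, controller

# Example: double integrator (position, velocity)
A = np.array([[0.0, 1.0], [0.0, 0.0]])
B = np.array([[0.0], [1.0]])
barrier, controller = barrier_pair(A, B, Q=np.eye(2), R=np.eye(1))
x = np.array([0.2, -0.1])
print(barrier(x), controller(x))
```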
We present Kinodynamic RRT*, an incremental sampling-based approach for asymptotically optimal motion planning for robots with linear differential constraints. Our approach extends RRT*, which was introduced for holonomic robots (Karaman et al. 2011), by using a fixed-final-state-free-final-time controller that exactly and optimally connects any pair of states, where the cost function is expressed as a trade-off between the duration of a trajectory and the expended control effort. Our approach generalizes earlier work on extending RRT* to kinodynamic systems, as it guarantees asymptotic optimality for any system with controllable linear dynamics, in state spaces of any dimension. Our approach can be applied to non-linear dynamics as well by using their first-order Taylor approximations. In addition, we show that for the rich subclass of systems with a nilpotent dynamics matrix, closed-form solutions for optimal trajectories can be derived, which keeps the computational overhead of our algorithm compared to traditional RRT* at a minimum. We demonstrate the potential of our approach by computing asymptotically optimal trajectories in three challenging motion planning scenarios: (i) a planar robot with a 4-D state space and double integrator dynamics, (ii) an aerial vehicle with a 10-D state space and linearized quadrotor dynamics, and (iii) a car-like robot with a 5-D state space and non-linear dynamics.
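As a hedged illustration of the cost structure described above (a sketch, not the authors' code), the snippet below evaluates, for a double integrator with scalar control, a fixed-final-state cost of the form c(T) = T + (x1 - xbar(T))' G(T)^{-1} (x1 - xbar(T)), where G(T) is the weighted controllability Gramian (closed form here because the dynamics matrix is nilpotent) and xbar(T) = e^{AT} x0, and then minimizes it over the free final time T. The bounded scalar search and the function names are assumptions of the sketch.

```python
import numpy as np
from scipy.optimize import minimize_scalar

def connection_cost(x0, x1, r=1.0, T_max=10.0):
    """Optimal arrival time and cost to steer a double integrator from x0 to x1."""
    def cost(T):
        xbar = np.array([x0[0] + T * x0[1], x0[1]])        # e^{AT} x0
        G = (1.0 / r) * np.array([[T**3 / 3, T**2 / 2],
                                  [T**2 / 2, T]])           # weighted Gramian
        d = x1 - xbar
        return T + d @ np.linalg.solve(G, d)                # duration + control effort
    res = minimize_scalar(cost, bounds=(1e-3, T_max), method="bounded")
    return res.x, res.fun

T_star, c_star = connection_cost(np.array([0.0, 0.0]), np.array([1.0, 0.0]))
print(T_star, c_star)                                       # optimal time, optimal cost
```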
This paper presents a sampling-based method for optimal motion planning for non-holonomic systems in the absence of known cost functions. It uses the principle of learning through experience to deduce the cost-to-go of regions within the workspace. This cost information is used to bias an incremental graph-based search algorithm that produces solution trajectories. Iterative improvement of the cost information and the search bias produces solutions that are proven to be asymptotically optimal. The proposed framework builds on incremental Rapidly-exploring Random Trees (RRT) for random sampling-based search and Reinforcement Learning (RL) to learn workspace costs. A series of experiments was performed to evaluate and demonstrate the performance of the proposed method.
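A minimal sketch (not the paper's implementation) of the general idea: keep a per-cell cost-to-go estimate that is updated from experienced costs, and bias where the tree samples next through a softmax over the negated estimates. The grid resolution, the running-average update, and the temperature are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
GRID = 10                                   # 10 x 10 workspace cells
cost_to_go = np.zeros((GRID, GRID))         # learned workspace cost estimates

def update_cost(cell, observed_cost, alpha=0.1):
    """Running average of the cost experienced when passing through a cell."""
    i, j = cell
    cost_to_go[i, j] += alpha * (observed_cost - cost_to_go[i, j])

def biased_sample(temperature=1.0):
    """Sample a workspace point, preferring cells with low estimated cost."""
    weights = np.exp(-cost_to_go / temperature).ravel()
    idx = rng.choice(GRID * GRID, p=weights / weights.sum())
    i, j = divmod(idx, GRID)
    return (np.array([i, j]) + rng.random(2)) / GRID   # point in the unit square

update_cost((2, 3), observed_cost=5.0)
print(biased_sample())
```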
We reformulate the signal temporal logic (STL) synthesis problem as a maximum a posteriori (MAP) inference problem. To this end, we introduce the notion of random STL (RSTL), which extends deterministic STL with random predicates. This new probabilistic extension naturally leads to a synthesis-as-inference approach. The proposed method allows for differentiable, gradient-based synthesis while extending the class of possible uncertain semantics. We demonstrate that the proposed framework scales well with GPU acceleration, and present realistic applications of uncertain semantics in robotics that involve target tracking and the use of occupancy grids.
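As a sketch of the kind of differentiable semantics such gradient-based synthesis relies on (the paper's RSTL additionally places distributions over predicate parameters, which this omits), the snippet below computes a smooth ("soft") robustness for the simple fragments G (x > c) and F (x > c), replacing min/max with log-sum-exp so that gradients can flow. The temperature beta and the scalar predicate are illustrative assumptions.

```python
import numpy as np

def soft_min(v, beta=10.0):
    return -np.log(np.sum(np.exp(-beta * v))) / beta

def soft_max(v, beta=10.0):
    return np.log(np.sum(np.exp(beta * v))) / beta

def robustness_always(x, c, beta=10.0):      # G (x > c): soft worst-case margin
    return soft_min(x - c, beta)

def robustness_eventually(x, c, beta=10.0):  # F (x > c): soft best-case margin
    return soft_max(x - c, beta)

x = np.array([0.2, 0.5, 0.9, 0.4])           # a scalar signal over 4 time steps
print(robustness_always(x, c=0.1), robustness_eventually(x, c=0.8))
```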
In this extended abstract, we report on ongoing work towards an approximate multimodal optimization algorithm with asymptotic guarantees. Multimodal optimization is the problem of finding all locally optimal solutions (modes) of a path optimization problem. This is important for compressing path databases, for providing contingencies for replanning, and as a source of symbolic representations. Following ideas from Morse theory, we define modes as paths invariant under optimization of a cost functional. We develop a multi-mode estimation algorithm that approximately finds all modes of a given motion optimization problem and converges asymptotically. This is made possible by integrating sparse roadmaps with an existing single-mode optimization algorithm. Initial evaluation results show that the multi-mode estimation algorithm is a promising direction for studying path spaces from a topological point of view.
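Purely to illustrate the notion of paths invariant under optimization (with a stand-in quadratic smoothing optimizer rather than the single-mode optimizer the abstract refers to), the sketch below assigns two paths to the same mode when local optimization drives both to the same fixed point. The optimizer, the distance threshold, and the example paths are assumptions.

```python
import numpy as np

def optimize_path(path, steps=2000, lr=0.4):
    """Gradient descent on a discrete smoothness cost with fixed endpoints."""
    p = path.copy()
    for _ in range(steps):
        g = np.zeros_like(p)
        g[1:-1] = 2 * p[1:-1] - p[:-2] - p[2:]   # discrete Laplacian
        p -= lr * g
    return p

def same_mode(path_a, path_b, tol=1e-2):
    """Two paths share a mode if optimization maps them to the same path."""
    return np.max(np.abs(optimize_path(path_a) - optimize_path(path_b))) < tol

# Two paths with the same endpoints but different interiors (no obstacles)
t = np.linspace(0.0, 1.0, 30)[:, None]
straight = t * np.array([1.0, 0.0])
detour = straight + np.sin(np.pi * t) * np.array([0.0, 0.5])
print(same_mode(straight, detour))   # True: both collapse to the straight line
```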
This paper introduces a novel motion planning algorithm, incrementally stochastic and accelerated gradient information mixed optimization (iSAGO), for robotic manipulators in a narrow workspace. First, we propose the overall scheme of iSAGO, which integrates accelerated and stochastic gradient information for efficient descent in the penalty method. In the stochastic part, we generate an adaptive stochastic moment based on Adam via random selection of collision checkboxes, time-series intervals, and penalty factors, to resolve cases where the body gets stuck on obstacles. To counter the slow convergence of STOMA, we integrate the accelerated gradient and boost the descent rate within a Lipschitz constant reestimation framework. Moreover, we introduce a Bayesian tree inference (BTI) method that transforms whole-trajectory optimization (SAGO) into incremental sub-trajectory optimization (iSAGO) to improve computational efficiency and the success rate. Finally, we demonstrate the key coefficient tuning, benchmark iSAGO against other planners (CHOMP, GPMP2, TrajOpt, STOMP, and RRT-Connect), and implement iSAGO on an AUBO-i5 in a storage-shelf scenario. The results show that iSAGO achieves the highest success rate with moderate solving efficiency.
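The sketch below only illustrates the flavor of the stochastic part described above (it is not iSAGO): an Adam-style update on a penalized trajectory cost in which the collision gradient is evaluated on a randomly selected subset of waypoints. The smoothness and collision terms, the penalty weight, and all names are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def grad_smooth(xi):                          # finite-difference smoothness term
    g = np.zeros_like(xi)
    g[1:-1] = 2 * xi[1:-1] - xi[:-2] - xi[2:]
    return g

def grad_collision(xi, idx, obstacle=np.array([0.5, 0.5]), radius=0.2):
    g = np.zeros_like(xi)
    d = xi[idx] - obstacle                    # only the sampled waypoints
    dist = np.linalg.norm(d, axis=1, keepdims=True)
    g[idx] = -np.where(dist < radius, d / np.maximum(dist, 1e-6), 0.0)
    return g

def adam_step(xi, m, v, t, penalty=10.0, lr=1e-2, b1=0.9, b2=0.999, eps=1e-8):
    idx = rng.choice(np.arange(1, len(xi) - 1), size=len(xi) // 2, replace=False)
    g = grad_smooth(xi) + penalty * grad_collision(xi, idx)
    m = b1 * m + (1 - b1) * g                 # first moment
    v = b2 * v + (1 - b2) * g**2              # second moment
    m_hat, v_hat = m / (1 - b1**t), v / (1 - b2**t)
    return xi - lr * m_hat / (np.sqrt(v_hat) + eps), m, v

xi = np.linspace([0.0, 0.0], [1.0, 1.0], 20)  # straight-line initial trajectory
m, v = np.zeros_like(xi), np.zeros_like(xi)
for t in range(1, 101):
    xi, m, v = adam_step(xi, m, v, t)
```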