Consider the problem of planning collision-free motion of $n$ objects in the plane, movable through contact with a robot that can autonomously translate in the plane and that can move at most $m \leq n$ objects simultaneously. This is the abstract formulation of a manipulation planning problem that is proven to be decidable in this paper. The tools used to prove decidability of this simplified manipulation planning problem are, in fact, general enough to handle the decidability question for the wider class of systems characterized by a stratified configuration space. These include, for example, problems of legged and multi-contact locomotion and bi-manual manipulation. In addition, the described approach places no restriction on the dynamics of the manipulation system under consideration.
Attempts to achieve robotic Within-Hand-Manipulation (WIHM) generally rely either on high-DOF robotic hands with elaborate sensing apparatus or on multi-arm robotic systems. In prior work we presented a simple robot hand with variable-friction fingers, which allows a low-complexity approach to within-hand object translation and rotation, though this manipulation was limited to planar actions. In this work we extend the capabilities of this system to 3D manipulation with a novel region-based WIHM planning algorithm that exploits extrinsic contacts. The ability to modulate finger friction enhances extrinsic dexterity for three-dimensional WIHM and allows us to operate at the quasi-static level. The region-based planner automatically generates 3D manipulation sequences with a modified A* formulation that navigates the contact regions between the fingers and the object surface to reach desired goal regions. Central to this method is a set of object-motion primitives (i.e., within-hand sliding, rotation, and pivoting), which can easily be achieved by changing contact friction. A wide range of goal regions can be reached via this approach, as demonstrated in real-robot experiments following a standardized in-hand manipulation benchmarking protocol.
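As a rough illustration of how such a region-level search could be organized, the sketch below runs an A*-style search over contact regions, with hypothetical `slide_neighbors`, `rotate_neighbors`, and `pivot_neighbors` expansions standing in for the paper's motion primitives and an unspecified region-to-region heuristic; the actual region representation and cost model are not reproduced here.

```python
import heapq
import itertools

# Hypothetical primitive set: each expands a contact region into neighboring
# regions reachable by one friction-modulated motion, with a step cost.
PRIMITIVES = {
    "slide":  lambda region: region.slide_neighbors(),
    "rotate": lambda region: region.rotate_neighbors(),
    "pivot":  lambda region: region.pivot_neighbors(),
}

def region_astar(start, goal, heuristic):
    """A*-style search over contact regions on the object surface.

    `start`, `goal`, and `heuristic` stand in for the paper's region
    representation and region-to-region distance estimate.  Returns a list
    of (primitive, region) steps, or None if the goal region is unreachable.
    """
    tie = itertools.count()  # tiebreaker so regions are never compared directly
    frontier = [(heuristic(start, goal), next(tie), 0.0, start, [])]
    visited = set()
    while frontier:
        _, _, cost, region, path = heapq.heappop(frontier)
        if region == goal:
            return path
        if region in visited:
            continue
        visited.add(region)
        for name, expand in PRIMITIVES.items():
            for nxt, step_cost in expand(region):
                if nxt in visited:
                    continue
                g = cost + step_cost
                f = g + heuristic(nxt, goal)
                heapq.heappush(frontier, (f, next(tie), g, nxt, path + [(name, nxt)]))
    return None
```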
Multi-stage forceful manipulation tasks, such as twisting a nut on a bolt, require reasoning over interlocking constraints on both discrete and continuous choices. The robot must choose a sequence of discrete actions, or strategy, such as whether to pick up an object, and the continuous parameters of each of those actions, such as how to grasp the object. In forceful manipulation tasks, the force requirements substantially impact the choices of both strategy and parameters. To enable planning and executing forceful manipulation, we augment an existing task and motion planner with controllers that exert wrenches and with constraints that explicitly consider torque and frictional limits. In two domains, opening a childproof bottle and twisting a nut, we demonstrate how the system considers a combinatorial number of strategies and how choosing actions that are robust to parameter variations impacts the choice of strategy.
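The following sketch illustrates, under simplified assumptions, the kind of feasibility check such a planner might apply when screening continuous grasp parameters for one discrete strategy; `grasp_is_feasible` and its scalar contact model (friction coefficient, normal force, contact radius, motor torque limit) are hypothetical stand-ins, not the paper's actual wrench and friction-cone constraints.

```python
import numpy as np

def grasp_is_feasible(required_torque, friction_coeff, normal_force,
                      contact_radius, motor_torque_limit):
    """Illustrative feasibility test for a forceful action (e.g. twisting a nut).

    All arguments are hypothetical scalars; a real planner reasons over full
    wrenches and contact models.  The grasp must (1) transmit the required
    torque through friction at the contact and (2) stay within the actuator's
    torque limit.
    """
    max_friction_torque = friction_coeff * normal_force * contact_radius
    return (required_torque <= max_friction_torque and
            required_torque <= motor_torque_limit)

# Example: screening candidate grip forces (N, assumed range) for one strategy.
candidate_normal_forces = np.linspace(5.0, 40.0, 8)
feasible = [f for f in candidate_normal_forces
            if grasp_is_feasible(required_torque=0.3, friction_coeff=0.6,
                                 normal_force=f, contact_radius=0.025,
                                 motor_torque_limit=1.0)]
```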
The increasing presence of robots alongside humans, such as in human-robot teams in manufacturing, gives rise to research questions about the kind of behaviors people prefer in their robot counterparts. We term actions that support the interaction by reducing future interference with others supportive robot actions, and investigate their utility in a co-located manipulation scenario. We compare two robot modes in a shared-table pick-and-place task: (1) Task-oriented: the robot only takes actions to further its own task objective, and (2) Supportive: the robot sometimes prefers supportive actions to task-oriented ones when they reduce future goal-conflicts. Our experiments in simulation, using a simplified human model, reveal that supportive actions reduce the interference between agents, especially in more difficult tasks, but also cause the robot to take longer to complete the task. We implemented these modes on a physical robot in a user study where a human and a robot perform object placement on a shared table. Our results show that a supportive robot was perceived as a more favorable coworker by the human and also reduced interference with the human in the more difficult of the two scenarios. However, it also took longer to complete the task, highlighting an interesting trade-off between task efficiency and human preference that needs to be considered before designing robot behavior for close-proximity manipulation scenarios.
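A toy decision rule for the two modes might look like the following, where `predict_conflicts` is a hypothetical estimate of future goal-conflicts with the human; the study's actual human model and planner are not captured here.

```python
def choose_action(task_actions, supportive_actions, predict_conflicts, supportive_mode):
    """Toy selection rule contrasting the two modes described above.

    `predict_conflicts(action)` is a hypothetical scalar estimate of future
    goal-conflicts with the human if `action` is taken now.  In task-oriented
    mode, supportive actions are ignored entirely.
    """
    best_task = min(task_actions, key=predict_conflicts)
    if not supportive_mode:
        return best_task
    best_support = min(supportive_actions, key=predict_conflicts, default=None)
    if best_support is not None and predict_conflicts(best_support) < predict_conflicts(best_task):
        return best_support   # supportive mode: trade task progress for less interference
    return best_task
```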
A defining feature of sampling-based motion planning is the reliance on an implicit representation of the state space, which is enabled by a set of probing samples. Traditionally, these samples are drawn either probabilistically or deterministically to uniformly cover the state space. Yet, the motion of many robotic systems is often restricted to small regions of the state space, due to, for example, differential constraints or collision-avoidance constraints. To accelerate the planning process, it is thus desirable to devise non-uniform sampling strategies that favor sampling in those regions where an optimal solution might lie. This paper proposes a methodology for non-uniform sampling, whereby a sampling distribution is learned from demonstrations, and then used to bias sampling. The sampling distribution is computed through a conditional variational autoencoder, allowing sample generation from the latent space conditioned on the specific planning problem. This methodology is general, can be used in combination with any sampling-based planner, and can effectively exploit the underlying structure of a planning problem while maintaining the theoretical guarantees of sampling-based approaches. Specifically, on several planning problems, the proposed methodology is shown to effectively learn representations for the relevant regions of the state space, resulting in an order of magnitude improvement in terms of success rate and convergence to the optimal cost.
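A minimal sketch of how such a learned distribution could bias a planner's sample set, assuming a trained conditional decoder `decode(z, c)` and mixing in a fixed fraction of uniform samples to preserve the coverage guarantees; the network architecture, latent dimension, and problem encoding are placeholders.

```python
import numpy as np

def biased_samples(decode, problem_encoding, n_samples, state_bounds,
                   latent_dim=4, uniform_fraction=0.2, rng=None):
    """Draw planner samples biased by a learned conditional generator.

    `decode(z, c)` is assumed to be the trained CVAE decoder mapping latent
    vectors z, conditioned on the planning-problem encoding c (e.g. start,
    goal, and an obstacle representation), to states.  A fixed fraction of
    uniform samples is retained so the planner keeps its theoretical
    guarantees.
    """
    rng = np.random.default_rng() if rng is None else rng
    lo, hi = state_bounds
    n_uniform = int(uniform_fraction * n_samples)
    n_learned = n_samples - n_uniform

    z = rng.standard_normal((n_learned, latent_dim))           # latent prior
    learned = decode(z, problem_encoding)                      # biased states
    uniform = rng.uniform(lo, hi, size=(n_uniform, len(lo)))   # fallback coverage
    return np.vstack([learned, uniform])
```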
Motion planning for multi-jointed robots is challenging. Due to the inherent complexity of the problem, most existing works decompose motion planning into easier subproblems. However, because of inconsistent performance metrics across the subproblems, decomposition-based approaches can only find sub-optimal solutions. This paper presents an optimal-control-based approach that addresses the path planning and trajectory planning subproblems simultaneously. Unlike similar works, which either ignore robot dynamics or require long computation times, this paper presents an efficient numerical method for trajectory optimization in motion planning involving complicated robot dynamics. The efficiency and effectiveness of the proposed approach are shown by numerical results, and experimental results demonstrate the feasibility of the presented planning algorithm.
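As an illustration of the general formulation rather than the paper's specific numerical method, the sketch below sets up a toy direct-transcription trajectory optimization for a single joint modeled as a double integrator: states and controls are stacked into one decision vector, the (assumed) discretized dynamics enter as equality constraints, and control effort is minimized.

```python
import numpy as np
from scipy.optimize import minimize

# Assumed discretization and boundary conditions for a single joint.
N, dt = 20, 0.1          # knot points and time step
q0, qN = 0.0, 1.0        # start and goal joint angle (rest-to-rest)

def unpack(x):
    q, qd, u = x[:N], x[N:2 * N], x[2 * N:]
    return q, qd, u

def effort(x):
    # Quadratic control-effort cost.
    _, _, u = unpack(x)
    return dt * np.sum(u ** 2)

def dynamics_defects(x):
    # Forward-Euler double-integrator dynamics as equality constraints:
    # q_{k+1} = q_k + dt*qd_k,  qd_{k+1} = qd_k + dt*u_k.
    q, qd, u = unpack(x)
    dq = q[1:] - q[:-1] - dt * qd[:-1]
    dqd = qd[1:] - qd[:-1] - dt * u[:-1]
    return np.concatenate([dq, dqd])

def boundary(x):
    q, qd, _ = unpack(x)
    return np.array([q[0] - q0, q[-1] - qN, qd[0], qd[-1]])

x0 = np.zeros(3 * N)
res = minimize(effort, x0, method="SLSQP",
               constraints=[{"type": "eq", "fun": dynamics_defects},
                            {"type": "eq", "fun": boundary}])
q_opt, qd_opt, u_opt = unpack(res.x)
```

A full multi-joint formulation would replace the double-integrator defects with the rigid-body dynamics and add joint and torque limits as inequality constraints, but the structure of the transcription stays the same.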