In Human-Robot Cooperation (HRC), the robot cooperates with humans to accomplish a task together. Existing approaches assume the human has a specific goal during the cooperation, which the robot infers and acts toward. However, in real-world environments, a human usually has only a general goal (e.g., a general direction or area in motion planning) at the beginning of the cooperation, which needs to be clarified into a specific goal (e.g., an exact position) during cooperation. This specification process is interactive and dynamic, depending on the environment and the partner's behavior. A robot that does not account for the goal specification process may frustrate the human partner, prolong the time needed to reach an agreement, and compromise team performance or cause outright failure. We present the Evolutionary Value Learning (EVL) approach, which uses a State-based Multivariate Bayesian Inference method to model the dynamics of the goal specification process in HRC. EVL can actively enhance the process of goal specification and cooperation formation. This enables the robot to simultaneously help the human specify the goal and learn a cooperative policy in a Deep Reinforcement Learning (DRL) manner. In a dynamic ball-balancing task with real human subjects, the robot equipped with EVL outperforms existing methods, with faster goal specification and better team performance.
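To make the goal-specification mechanism concrete, the following is a minimal sketch of one state-based Bayesian update over candidate goals; the Boltzmann-rational human model, the candidate goal set, and all parameter values are illustrative assumptions, not the authors' EVL implementation.

```python
# Minimal sketch of state-based Bayesian goal inference (NOT the authors'
# EVL code). Assumes the human moves roughly toward the intended goal.
import numpy as np

def update_goal_belief(belief, state, action, goals, beta=2.0):
    """One Bayesian update of the posterior over candidate goals.

    belief : prior probability per goal, shape (G,)
    state  : current 2D position of the shared object, shape (2,)
    action : observed human motion direction, shape (2,)
    goals  : candidate goal positions, shape (G, 2)
    beta   : rationality coefficient of the human model (assumed)
    """
    directions = goals - state                      # vectors toward each goal
    norms = np.linalg.norm(directions, axis=1, keepdims=True)
    directions = directions / np.maximum(norms, 1e-8)
    # Boltzmann-rational likelihood of the observed motion direction.
    alignment = directions @ (action / max(np.linalg.norm(action), 1e-8))
    posterior = belief * np.exp(beta * alignment)
    return posterior / posterior.sum()

goals = np.array([[1.0, 0.0], [0.0, 1.0], [-1.0, 0.0]])
belief = np.ones(3) / 3.0
belief = update_goal_belief(belief, np.zeros(2), np.array([0.9, 0.1]), goals)
print(belief)  # probability mass shifts toward the goal at (1, 0)
```

In EVL-style systems, a posterior of this kind would feed the robot's policy so that it can both act toward the most likely goal and take actions that help disambiguate it.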
In this paper we propose FlexHRC+, a hierarchical human-robot cooperation architecture designed to provide collaborative robots with an extended degree of autonomy when supporting human operators in high-variability shop-floor tasks. The architecture encompasses three levels, namely perception, representation, and action. Building on previous work, here we focus on (i) an in-the-loop decision-making process for the operations of collaborative robots coping with the variability of actions carried out by human operators, and (ii) the representation level, integrating a hierarchical AND/OR graph whose online behaviour is formally specified using First Order Logic. The architecture is validated in experiments including collaborative furniture assembly and object positioning tasks.
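As a concrete illustration of the representation level, the sketch below models a task as a small AND/OR graph with a recursive check of which branches are satisfied; the node names and the assembly structure are assumptions for illustration, not the FlexHRC+ codebase.

```python
# Toy AND/OR graph for a two-route assembly task (illustrative only).
class Node:
    def __init__(self, name, kind="LEAF", children=()):
        self.name, self.kind, self.children = name, kind, list(children)
        self.done = False          # execution status of a leaf action

    def solved(self):
        if self.kind == "LEAF":
            return self.done
        if self.kind == "AND":     # all children must be completed
            return all(c.solved() for c in self.children)
        return any(c.solved() for c in self.children)  # OR: one branch suffices

leg, top = Node("attach_leg"), Node("attach_top")
human_route = Node("human_assembles", "AND", [leg, top])
robot_route = Node("robot_assembles", "AND",
                   [Node("robot_pick"), Node("robot_place")])
root = Node("table_assembled", "OR", [human_route, robot_route])

leg.done = top.done = True
print(root.solved())  # True: the human route already satisfies the OR root
```

An online layer of this kind lets the robot re-evaluate which branch of the graph is still feasible after each observed human action, which is the variability-coping behaviour the abstract describes.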
Designing robotic tasks for co-manipulation necessitates exploiting not only proprioceptive but also exteroceptive information for improved safety and autonomy. Following this intuition, this research proposes to formulate intuitive robotic tasks from the human viewpoint by incorporating visuo-tactile perception. Visual data from depth cameras is used to determine object dimensions and human intentions, while tactile sensing maintains the desired contact to avoid slippage. Experiments performed on a robot platform with human assistance under industrial settings validate the performance and applicability of the proposed intuitive task formulation.
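The sketch below illustrates how the two sensing channels might be combined: a pinhole-camera width estimate from a depth-image row, and a grip-force adjustment triggered by a tactile slip indicator. The thresholds, the force-ratio slip criterion, and the synthetic data are assumptions, not the paper's implementation.

```python
# Hedged sketch of a visuo-tactile co-manipulation loop (illustrative only).
import numpy as np

SLIP_THRESHOLD = 0.15   # assumed tangential/normal force ratio limit
FORCE_STEP = 0.5        # N, assumed grip-force increment on detected slip

def object_width_from_depth(depth_row, fx, z):
    """Estimate object width (m) from one depth-image row: the pixel extent
    of the object mask scaled by depth over focal length (pinhole model)."""
    mask = depth_row < z + 0.02            # pixels belonging to the object
    return mask.sum() * z / fx

def grip_force_update(f_normal, f_tangential, f_cmd):
    """Raise the commanded grip force when the tactile ratio signals slip."""
    if f_tangential / max(f_normal, 1e-6) > SLIP_THRESHOLD:
        return f_cmd + FORCE_STEP
    return f_cmd

row = np.full(640, 1.0); row[300:340] = 0.6   # synthetic depth row
print(object_width_from_depth(row, fx=600.0, z=0.6))        # ~0.04 m
print(grip_force_update(f_normal=5.0, f_tangential=1.2, f_cmd=6.0))
```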
In this paper, we present an approach for robot learning of social affordances from human activity videos. We consider the problem in the context of human-robot interaction: our approach learns structural representations of human-human (and human-object-human) interactions, describing how the body parts of each agent move with respect to each other and what spatial relations they should maintain to complete each sub-event (i.e., sub-goal). This enables the robot to infer its own movement in reaction to the human body motion, allowing it to naturally replicate such interactions. We introduce the representation of social affordance and propose a generative model for its weakly supervised learning from human demonstration videos. Our approach discovers critical steps (i.e., latent sub-events) in an interaction and the typical motion associated with them, learning which body parts should be involved and how. The experimental results demonstrate that our Markov Chain Monte Carlo (MCMC) based learning algorithm automatically discovers semantically meaningful interactive affordances from RGB-D videos, which allows us to generate appropriate full-body motion for an agent.
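As a toy analogue of the MCMC learning step, the following Metropolis-Hastings sampler infers a single latent sub-event boundary in a 1D motion trace under an assumed two-segment Gaussian likelihood; it is a stand-in for the paper's far richer model, not a reproduction of it.

```python
# Toy Metropolis-Hastings sampler for one latent sub-event boundary
# (illustrative only; the paper's model covers full-body interactions).
import numpy as np

rng = np.random.default_rng(0)
trace = np.concatenate([rng.normal(0.0, 0.1, 50),   # sub-event 1: idle
                        rng.normal(1.0, 0.1, 50)])  # sub-event 2: reach

def log_lik(t):
    """Log-likelihood (up to constants) of splitting the trace at index t,
    modeling each segment as Gaussian around its own mean."""
    a, b = trace[:t], trace[t:]
    return -np.sum((a - a.mean())**2) - np.sum((b - b.mean())**2)

t = 10                                  # initial boundary guess
for _ in range(2000):
    prop = int(np.clip(t + rng.integers(-5, 6), 2, len(trace) - 2))
    if np.log(rng.random()) < log_lik(prop) - log_lik(t):
        t = prop                        # accept the proposed boundary
print(t)  # concentrates near the true boundary at index 50
```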
This paper proposes a novel integrated dynamic method based on Behavior Trees for planning and allocating tasks in mixed human-robot teams, suitable for manufacturing environments. The Behavior Tree formulation allows encoding a single job as a compound of different tasks with temporal and logic constraints. In this way, instead of the well-studied offline centralized optimization problem, the role allocation problem is solved as multiple simplified online optimization sub-problems, without complex and cross-schedule task dependencies. These sub-problems are defined as Mixed-Integer Linear Programs that, according to worker-action costs and worker availability, allocate the yet-to-execute tasks among the available workers. To characterize the behavior of the developed method, we performed different simulation experiments evaluating the resulting action-worker allocations and the computational complexity. Given the nature of the algorithm and the possibility of simulating agents' behavior, the obtained results should also describe well how the algorithm performs in real experiments.
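A minimal instance of such a sub-problem, written here with SciPy's MILP interface, is sketched below: tasks are assigned to a human and a robot worker so as to minimize total execution cost, with each task allocated to exactly one worker. The cost matrix and the formulation details are assumptions for illustration, not the authors' code.

```python
# Minimal task-to-worker assignment as a Mixed-Integer Linear Program
# (illustrative only; costs and structure are assumed).
import numpy as np
from scipy.optimize import milp, LinearConstraint, Bounds

cost = np.array([[4.0, 1.0],    # task 0: (human, robot) execution costs
                 [2.0, 6.0],    # task 1
                 [3.0, 3.5]])   # task 2
n_tasks, n_workers = cost.shape
c = cost.ravel()                # x[i*n_workers + j] = 1 if task i -> worker j

# Each yet-to-execute task is allocated to exactly one available worker.
A = np.zeros((n_tasks, n_tasks * n_workers))
for i in range(n_tasks):
    A[i, i * n_workers:(i + 1) * n_workers] = 1.0
one_worker_per_task = LinearConstraint(A, lb=1.0, ub=1.0)

res = milp(c=c, constraints=[one_worker_per_task],
           integrality=np.ones_like(c), bounds=Bounds(0, 1))
print(res.x.reshape(n_tasks, n_workers))  # task 0 -> robot, tasks 1,2 -> human
```

Solving one such small program per replanning step, rather than a single cross-schedule optimization, is what keeps the online allocation tractable.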
We present a method for learning a human-robot collaboration policy from human-human collaboration demonstrations. An effective robot assistant must learn to handle the diverse human behaviors shown in the demonstrations and be robust when humans adjust their strategies during online task execution. Our method co-optimizes a human policy and a robot policy in an interactive learning process: the human policy learns to generate diverse and plausible collaborative behaviors from demonstrations, while the robot policy learns to assist by estimating the unobserved latent strategy of its human collaborator. Across a 2D strategy game, a human-robot handover task, and a multi-step collaborative manipulation task, our method outperforms the alternatives in both simulated evaluations and when executing the tasks with a real human operator in the loop. Supplementary materials and videos are available at https://sites.google.com/view/co-gail-web/home
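The two-policy structure can be schematized as below: an encoder estimates the human's latent strategy from recent actions, and the robot policy conditions on that estimate. The running-mean encoder and random linear policy are placeholders, not the released Co-GAIL model.

```python
# Schematic of a latent-strategy-conditioned robot policy (illustrative only).
import numpy as np

rng = np.random.default_rng(1)

def encode_strategy(human_actions):
    """Infer a latent strategy vector from the human's recent actions;
    here simply their running mean, a stand-in for a learned encoder."""
    return np.mean(human_actions, axis=0)

def robot_policy(obs, z, W_obs, W_z):
    """Robot action conditioned on the observation and the estimated
    latent strategy z of the human collaborator."""
    return np.tanh(W_obs @ obs + W_z @ z)

W_obs = rng.normal(size=(2, 4)) * 0.1   # placeholder policy weights
W_z = rng.normal(size=(2, 2)) * 0.1
human_history = rng.normal(size=(10, 2))        # observed human actions
z_hat = encode_strategy(human_history)
print(robot_policy(np.ones(4), z_hat, W_obs, W_z))
```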