
Co-GAIL: Learning Diverse Strategies for Human-Robot Collaboration

Added by Chen Wang
Publication date: 2021
Language: English





We present a method for learning a human-robot collaboration policy from human-human collaboration demonstrations. An effective robot assistant must learn to handle diverse human behaviors shown in the demonstrations and be robust when the humans adjust their strategies during online task execution. Our method co-optimizes a human policy and a robot policy in an interactive learning process: the human policy learns to generate diverse and plausible collaborative behaviors from demonstrations while the robot policy learns to assist by estimating the unobserved latent strategy of its human collaborator. Across a 2D strategy game, a human-robot handover task, and a multi-step collaborative manipulation task, our method outperforms the alternatives in both simulated evaluations and when executing the tasks with a real human operator in-the-loop. Supplementary materials and videos at https://sites.google.com/view/co-gail-web/home
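As a rough sketch of the co-optimization described in this abstract, the snippet below pairs a GAIL-style discriminator over human-human demonstration pairs with a human policy conditioned on a sampled latent strategy and an encoder that recovers that strategy from the observed behaviour, which is what lets the robot policy condition on an estimated strategy. The network sizes, loss terms, and the toy random batch are illustrative assumptions, not the authors' implementation; the robot policy's own optimization against the learned human policy is only indicated, not shown.

```python
# Illustrative sketch only (assumed architecture, not the Co-GAIL code release).
import torch
import torch.nn as nn

OBS_DIM, ACT_DIM, LATENT_DIM = 16, 4, 2

human_policy = nn.Sequential(nn.Linear(OBS_DIM + LATENT_DIM, 64), nn.Tanh(), nn.Linear(64, ACT_DIM))
robot_policy = nn.Sequential(nn.Linear(OBS_DIM + LATENT_DIM, 64), nn.Tanh(), nn.Linear(64, ACT_DIM))
strategy_enc = nn.Sequential(nn.Linear(OBS_DIM + ACT_DIM, 64), nn.Tanh(), nn.Linear(64, LATENT_DIM))
discriminator = nn.Sequential(nn.Linear(OBS_DIM + ACT_DIM, 64), nn.Tanh(), nn.Linear(64, 1))

opt_gen = torch.optim.Adam(list(human_policy.parameters()) + list(strategy_enc.parameters()), lr=3e-4)
opt_disc = torch.optim.Adam(discriminator.parameters(), lr=3e-4)
bce = nn.BCEWithLogitsLoss()


def training_step(demo_obs, demo_act, env_obs):
    """One illustrative co-optimization step on a batch of transitions."""
    batch = env_obs.shape[0]
    z = torch.randn(batch, LATENT_DIM)                             # sampled latent strategy
    human_act = human_policy(torch.cat([env_obs, z], dim=-1))      # diverse human behaviour
    z_hat = strategy_enc(torch.cat([env_obs, human_act], dim=-1))  # robot's estimate of z
    robot_act = robot_policy(torch.cat([env_obs, z_hat], dim=-1))  # assistive action (unused below)

    # Discriminator: separate human-human demonstration pairs from generated behaviour.
    d_loss = bce(discriminator(torch.cat([demo_obs, demo_act], dim=-1)), torch.ones(batch, 1)) + \
             bce(discriminator(torch.cat([env_obs, human_act.detach()], dim=-1)), torch.zeros(batch, 1))
    opt_disc.zero_grad(); d_loss.backward(); opt_disc.step()

    # Human policy + encoder: fool the discriminator (GAIL-style imitation) while keeping the
    # sampled strategy recoverable from the behaviour. The robot policy's co-optimization
    # against the learned human policy is omitted in this sketch.
    imitation = bce(discriminator(torch.cat([env_obs, human_act], dim=-1)), torch.ones(batch, 1))
    latent_recovery = ((z_hat - z) ** 2).mean()
    g_loss = imitation + latent_recovery
    opt_gen.zero_grad(); g_loss.backward(); opt_gen.step()
    return d_loss.item(), g_loss.item()


# Toy call with random tensors standing in for demonstration and environment batches.
print(training_step(torch.randn(8, OBS_DIM), torch.randn(8, ACT_DIM), torch.randn(8, OBS_DIM)))
```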




Related research

We present situated live programming for human-robot collaboration, an approach that enables users with limited programming experience to program collaborative applications for human-robot interaction. Allowing end users, such as shop floor workers, to program collaborative robots themselves would make it easy to retask robots from one process to another, facilitating their adoption by small and medium enterprises. Our approach builds on the paradigm of trigger-action programming (TAP) by allowing end users to create rich interactions through simple trigger-action pairings. It enables end users to iteratively create, edit, and refine a reactive robot program while executing partial programs. This live programming approach enables the user to utilize the task space and objects by incrementally specifying situated trigger-action pairs, substantially lowering the barrier to entry for programming or reprogramming robots for collaboration. We instantiate situated live programming in an authoring system where users can create trigger-action programs by annotating an augmented video feed from the robot's perspective and assigning robot actions to trigger conditions. We evaluated this system in a study where participants (n = 10) developed robot programs for solving collaborative light-manufacturing tasks. Results showed that users with little programming experience were able to program HRC tasks in an interactive fashion, and our situated live programming approach further supported individualized strategies and workflows. We conclude by discussing opportunities and limitations of the proposed approach, our system implementation, and our study, and by outlining a roadmap for expanding this approach to a broader range of tasks and applications.
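A minimal sketch of the trigger-action pairing idea, assuming simple Python callables for triggers and robot skills; the `Rule` structure, the polling loop, and the handover example are illustrative, not the authoring system described in the abstract.

```python
# Illustrative sketch of reactive trigger-action pairs (not the authors' system).
from dataclasses import dataclass
from typing import Callable, Dict, List


@dataclass
class Rule:
    name: str
    trigger: Callable[[Dict], bool]   # predicate over the perceived scene
    action: Callable[[], None]        # robot skill to execute when the trigger fires


def run_reactive_program(rules: List[Rule], perceive: Callable[[], Dict], steps: int = 5) -> None:
    """Repeatedly check every trigger against the latest scene and fire matching actions."""
    for _ in range(steps):
        scene = perceive()
        for rule in rules:
            if rule.trigger(scene):
                rule.action()


# Hypothetical usage: react to a part appearing in a tray and to the human coming close.
if __name__ == "__main__":
    program = [
        Rule("pick_when_part_in_tray",
             trigger=lambda scene: scene.get("part_in_tray", False),
             action=lambda: print("robot: pick part from tray")),
        Rule("retract_when_human_close",
             trigger=lambda scene: scene.get("human_distance_m", 10.0) < 0.3,
             action=lambda: print("robot: retract to safe pose")),
    ]
    fake_scenes = iter([{"part_in_tray": True}, {"human_distance_m": 0.2}, {}, {}, {}])
    run_reactive_program(program, perceive=lambda: next(fake_scenes))
```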
Human-Robot Collaboration (HRC) is rapidly replacing the traditional application of robotics in the manufacturing industry. Robots and human operators no longer have to perform their tasks in segregated areas and are capable of working in close vicinity and performing hybrid tasks, performed partially by humans and partially by robots. We presented a methodology in earlier work [16] to promote and facilitate formal modeling of HRC systems, which are notoriously safety-critical. Relying on temporal logic modeling capabilities and automated model checking tools, we built a framework to formally model HRC systems and verify the physical safety of the human operator against the ISO 10218-2 [10] standard. To make our proposed formal verification framework more appealing to safety engineers, who are usually not very fond of formal modeling and verification techniques, we couple our model checking approach with a 3D simulator that demonstrates the potentially hazardous situations to the safety engineers in a more transparent way. This paper reports our co-simulation approach, using the MORSE simulator [4] and the Zot model checker [14].
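As a rough illustration of the kind of property such a framework checks, a temporal-logic safety requirement might take the following form; the predicates and the speed bound are assumptions for illustration, not the formulas verified in [16] nor the exact ISO 10218-2 requirement.

```latex
% Illustrative LTL-style safety property: globally, whenever the human and the robot
% occupy the same workspace region, the robot is stopped or below a reduced safe speed.
\mathbf{G}\Big( \mathit{sharedRegion}(\mathit{human},\mathit{robot}) \;\rightarrow\;
                \big( \mathit{robotStopped} \;\lor\; v_{\mathit{robot}} \le v_{\mathit{safe}} \big) \Big)
```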
In this paper, we present an approach for robot learning of social affordance from human activity videos. We consider the problem in the context of human-robot interaction: Our approach learns structural representations of human-human (and human-object-human) interactions, describing how body-parts of each agent move with respect to each other and what spatial relations they should maintain to complete each sub-event (i.e., sub-goal). This enables the robot to infer its own movement in reaction to the human body motion, allowing it to naturally replicate such interactions. We introduce the representation of social affordance and propose a generative model for its weakly supervised learning from human demonstration videos. Our approach discovers critical steps (i.e., latent sub-events) in an interaction and the typical motion associated with them, learning what body-parts should be involved and how. The experimental results demonstrate that our Markov Chain Monte Carlo (MCMC) based learning algorithm automatically discovers semantically meaningful interactive affordance from RGB-D videos, which allows us to generate appropriate full body motion for an agent.
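To make the structural representation described above concrete, the sketch below encodes an interaction as a sequence of sub-events over pairs of body parts, each with a relative motion and a spatial relation to maintain. The field names and the handshake example are assumptions for illustration, not the paper's generative model or learning algorithm.

```python
# Illustrative sketch of a structured interaction representation (assumed field names).
from dataclasses import dataclass
from typing import List, Tuple


@dataclass
class SubEvent:
    involved_parts: List[Tuple[str, str]]  # e.g. ("agent1/right_hand", "agent2/right_hand")
    relative_motion: str                   # e.g. "approach", "hold", "retract"
    target_distance_m: float               # spatial relation to maintain at the sub-goal


@dataclass
class SocialAffordance:
    name: str
    sub_events: List[SubEvent]             # critical steps discovered from the videos


handshake = SocialAffordance(
    name="handshake",
    sub_events=[
        SubEvent([("agent1/right_hand", "agent2/right_hand")], "approach", 0.05),
        SubEvent([("agent1/right_hand", "agent2/right_hand")], "hold", 0.0),
        SubEvent([("agent1/right_hand", "agent2/right_hand")], "retract", 0.4),
    ],
)
print(len(handshake.sub_events), "sub-events in", handshake.name)
```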
The need to guarantee safety of collaborative robots limits their performance, in particular their speed and hence cycle time. The standard ISO/TS 15066 defines the Power and Force Limiting operation mode and prescribes force thresholds that a moving robot is allowed to exert on human body parts during impact, along with a simple formula to obtain the maximum allowed speed of the robot in the whole workspace. In this work, we measure the forces exerted by two collaborative manipulators (UR10e and KUKA LBR iiwa) moving downward against an impact measuring device. First, we empirically show that the impact forces can vary by more than 100 percent within the robot workspace. The forces are negatively correlated with the distance from the robot base and the height in the workspace. Second, we present a data-driven model, the 3D Collision-Force-Map, which predicts impact forces from distance, height, and velocity, and demonstrate that it can be trained on a limited number of data points. Third, we analyze the force evolution upon impact and find that clamping never occurs for the UR10e. We show that the formulas in ISO/TS 15066 relating robot mass, velocity, and impact forces are insufficient, leading to both significant underestimation and overestimation of the forces and thus to unnecessarily long cycle times or even dangerous applications. We propose an empirical method that can be deployed to quickly determine the optimal speed and position at which a task can be safely performed with maximum efficiency.
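A minimal sketch of the data-driven idea, assuming a simple polynomial regressor from (distance, height, velocity) to peak impact force and a grid search for the highest velocity under a force threshold. The synthetic data, model choice, and the 140 N threshold are illustrative assumptions, not the paper's 3D Collision-Force-Map, its measurements, or the exact ISO/TS 15066 limits.

```python
# Illustrative sketch: regress peak impact force from position and velocity, then invert
# the model to pick a maximum safe velocity at a given position.
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import PolynomialFeatures
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)

# Synthetic training points: [distance_m, height_m, velocity_m_s] -> peak force [N] (toy data).
X = rng.uniform([0.3, 0.2, 0.1], [1.2, 1.0, 1.5], size=(60, 3))
force = 80 * X[:, 2] + 40 / X[:, 0] - 20 * X[:, 1] + rng.normal(0, 5, 60)

model = make_pipeline(PolynomialFeatures(degree=2), Ridge(alpha=1.0))
model.fit(X, force)


def max_safe_velocity(distance: float, height: float, force_limit: float = 140.0) -> float:
    """Highest tested velocity whose predicted impact force stays under the limit."""
    velocities = np.linspace(0.1, 1.5, 57)
    queries = np.column_stack([np.full_like(velocities, distance),
                               np.full_like(velocities, height),
                               velocities])
    predicted = model.predict(queries)
    safe = velocities[predicted <= force_limit]
    return float(safe.max()) if safe.size else 0.0


print(max_safe_velocity(distance=0.5, height=0.6))
print(max_safe_velocity(distance=1.1, height=0.3))
```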
Effective human-robot collaboration (HRC) requires extensive communication between the human and robot teammates, because their actions can potentially produce conflicts, synergies, or both. We develop a novel augmented reality (AR) interface to bridge the communication gap between human and robot teammates. Building on our AR interface, we develop an AR-mediated, negotiation-based (ARN) framework for HRC. We have conducted experiments both in simulation and on real robots in an office environment, where multiple mobile robots work on delivery tasks. The robots could not complete the tasks on their own and sometimes needed help from their human teammate, rendering human-robot collaboration necessary. Results suggest that ARN significantly reduced the human-robot team's task completion time compared to a non-AR baseline approach.
