
AR-based interaction for safe human-robot collaborative manufacturing

Published by: Antti Hietanen
Publication date: 2019
Research field: Informatics Engineering
Paper language: English





Industrial standards define safety requirements for Human-Robot Collaboration (HRC) in industrial manufacturing. The standards particularly require real-time monitoring and securing of the minimum protective distance between a robot and an operator. In this work, we propose a depth-sensor based model for workspace monitoring and an interactive Augmented Reality (AR) User Interface (UI) for safe HRC. The AR UI is implemented on two different hardware setups: a projector-mirror setup and a wearable AR gear (HoloLens). We evaluate the workspace model and the UIs on a realistic diesel motor assembly task. The AR-based interactive UIs provide 21-24% and 57-64% reductions in task completion time and robot idle time, respectively, compared to a baseline without interaction and workspace sharing. However, subjective evaluations reveal that HoloLens-based AR is not yet suitable for industrial manufacturing, while the projector-mirror setup shows clear improvements in safety and work ergonomics.
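As a rough illustration of the minimum protective distance check the standards require, the sketch below implements a speed-and-separation-monitoring rule in the style of ISO/TS 15066. The formula structure follows the standard's separation-distance terms, but all parameter values, the `SLOW` band, and the function names are illustrative assumptions, not taken from the paper.

```python
# Minimal sketch of an ISO/TS 15066-style speed-and-separation check,
# assuming a depth sensor provides the current human-robot distance.
# All numeric values below are illustrative, not the paper's.

def protective_distance(v_h, v_r, t_reaction, t_stop, stop_dist,
                        c_intrusion=0.2, z_uncertainty=0.1):
    """Protective separation distance S_p (metres).

    v_h: assumed operator approach speed (m/s)
    v_r: current robot speed towards the operator (m/s)
    t_reaction, t_stop: system reaction and robot stopping times (s)
    stop_dist: robot stopping distance (m)
    c_intrusion, z_uncertainty: intrusion and sensor-uncertainty margins (m)
    """
    s_h = v_h * (t_reaction + t_stop)   # operator motion until standstill
    s_r = v_r * t_reaction              # robot motion before braking starts
    return s_h + s_r + stop_dist + c_intrusion + z_uncertainty

def monitor_step(measured_distance, v_h=1.6, v_r=0.5,
                 t_reaction=0.1, t_stop=0.3, stop_dist=0.15):
    """Compare a depth-sensor distance reading against S_p."""
    s_p = protective_distance(v_h, v_r, t_reaction, t_stop, stop_dist)
    if measured_distance < s_p:
        return "STOP"        # violation: issue protective stop
    elif measured_distance < 1.5 * s_p:
        return "SLOW"        # hypothetical reduced-speed band
    return "RUN"

print(monitor_step(measured_distance=1.3))  # 'SLOW' with the defaults above
```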




Read also

The need to guarantee safety of collaborative robots limits their performance, in particular, their speed and hence cycle time. The standard ISO/TS 15066 defines the Power and Force Limiting operation mode and prescribes force thresholds that a moving robot is allowed to exert on human body parts during impact, along with a simple formula to obtain maximum allowed speed of the robot in the whole workspace. In this work, we measure the forces exerted by two collaborative manipulators (UR10e and KUKA LBR iiwa) moving downward against an impact measuring device. First, we empirically show that the impact forces can vary by more than 100 percent within the robot workspace. The forces are negatively correlated with the distance from the robot base and the height in the workspace. Second, we present a data-driven model, 3D Collision-Force-Map, predicting impact forces from distance, height, and velocity and demonstrate that it can be trained on a limited number of data points. Third, we analyze the force evolution upon impact and find that clamping never occurs for the UR10e. We show that formulas relating robot mass, velocity, and impact forces from ISO/TS 15066 are insufficient -- leading to both significant underestimation and overestimation and thus to unnecessarily long cycle times or even dangerous applications. We propose an empirical method that can be deployed to quickly determine the optimal speed and position where a task can be safely performed with maximum efficiency.
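For context, the kind of position-independent speed bound this abstract argues against can be sketched as follows: for transient contact, ISO/TS 15066 relates the permissible relative speed to the force threshold via the reduced mass of the two colliding bodies, roughly v = F / sqrt(mu * k). The body-part constants and effective robot mass below are illustrative assumptions; the abstract's point is precisely that such a single workspace-wide bound mispredicts the measured forces.

```python
# Sketch of an ISO/TS 15066-style transient-contact speed limit.
# Constants here are illustrative stand-ins, not values from the standard.

def max_speed_pfl(f_max, k_body, m_human, m_robot_effective):
    """Maximum relative speed for transient contact, v = F / sqrt(mu * k).

    f_max: permissible impact force for the body part (N)
    k_body: effective spring constant of the body part (N/m)
    m_human: effective mass of the body part (kg)
    m_robot_effective: effective robot mass at the contact point (kg)
    """
    mu = 1.0 / (1.0 / m_human + 1.0 / m_robot_effective)  # reduced mass
    return f_max / (mu * k_body) ** 0.5

# Illustrative hand-like body region: F = 140 N, k = 75 kN/m
print(max_speed_pfl(f_max=140.0, k_body=75e3,
                    m_human=0.6, m_robot_effective=20.0))  # ~0.67 m/s
```

Note that `m_robot_effective` is a single scalar here, which is exactly the simplification the 3D Collision-Force-Map replaces with a position-dependent prediction.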
Human-robot interactions have been recognized to be a key element of future industrial collaborative robots (co-robots). Unlike traditional robots that work in structured and deterministic environments, co-robots need to operate in highly unstructured and stochastic environments. To ensure that co-robots operate efficiently and safely in dynamic uncertain environments, this paper introduces the robot safe interaction system. In order to address the uncertainties during human-robot interactions, a unique parallel planning and control architecture is proposed, which has a long term global planner to ensure efficiency of robot behavior, and a short term local planner to ensure real time safety under uncertainties. In order for the robot to respond immediately to environmental changes, fast algorithms are used for real-time computation, i.e., the convex feasible set algorithm for the long term optimization, and the safe set algorithm for the short term optimization. Several test platforms are introduced for safe evaluation of the developed system in the early phase of deployment. The effectiveness and the efficiency of the proposed method have been verified in experiments with an industrial robot manipulator.
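A minimal sketch of the parallel architecture described above: a slow global planner supplies an efficient reference trajectory, while a fast local layer adjusts each executed step for safety. Both planners are deliberately stubbed (a straight-line reference and a distance-constraint projection); the paper's convex feasible set and safe set algorithms are not reproduced here.

```python
# Sketch of a parallel long-term/short-term planning loop, with toy stubs
# in place of the paper's actual optimization algorithms.

import numpy as np

def long_term_plan(start, goal, steps=50):
    """Stub global planner: straight-line reference to the goal."""
    return np.linspace(start, goal, steps)

def safe_modify(x_ref, x_human, min_dist=0.5):
    """Stub short-term safety layer: push a reference point away from
    the human whenever it violates a minimum-distance constraint."""
    d = np.linalg.norm(x_ref - x_human)
    if d >= min_dist:
        return x_ref
    direction = (x_ref - x_human) / (d + 1e-9)
    return x_human + min_dist * direction  # project onto the safe boundary

start, goal = np.array([0.0, 0.0]), np.array([1.0, 1.0])
reference = long_term_plan(start, goal)                # low-rate global loop
human = np.array([0.5, 0.5])                           # current human position
executed = [safe_modify(x, human) for x in reference]  # per-cycle safety loop
```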
In this paper, we present an approach for robot learning of social affordance from human activity videos. We consider the problem in the context of human-robot interaction: Our approach learns structural representations of human-human (and human-object-human) interactions, describing how body-parts of each agent move with respect to each other and what spatial relations they should maintain to complete each sub-event (i.e., sub-goal). This enables the robot to infer its own movement in reaction to the human body motion, allowing it to naturally replicate such interactions. We introduce the representation of social affordance and propose a generative model for its weakly supervised learning from human demonstration videos. Our approach discovers critical steps (i.e., latent sub-events) in an interaction and the typical motion associated with them, learning what body-parts should be involved and how. The experimental results demonstrate that our Markov Chain Monte Carlo (MCMC) based learning algorithm automatically discovers semantically meaningful interactive affordance from RGB-D videos, which allows us to generate appropriate full body motion for an agent.
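To make the MCMC machinery mentioned above concrete, here is a generic Metropolis-Hastings sampler over a toy one-dimensional posterior. It only illustrates the sampling loop itself; the paper's generative model over latent sub-events and body-part motions is not reconstructed here.

```python
# Generic Metropolis-Hastings sketch; the toy log-posterior stands in
# for the paper's posterior over latent sub-events given videos.

import random, math

def log_posterior(z):
    """Toy unnormalized log-posterior over a scalar latent variable."""
    return -0.5 * (z - 2.0) ** 2

def metropolis_hastings(n_samples=5000, step=0.5):
    z, samples = 0.0, []
    for _ in range(n_samples):
        z_new = z + random.gauss(0.0, step)          # symmetric proposal
        log_accept = log_posterior(z_new) - log_posterior(z)
        if math.log(random.random() + 1e-300) < log_accept:
            z = z_new                                # accept the move
        samples.append(z)
    return samples

samples = metropolis_hastings()
print(sum(samples[1000:]) / len(samples[1000:]))     # posterior mean ~ 2.0
```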
Robot capabilities are maturing across domains, from self-driving cars, to bipeds and drones. As a result, robots will soon no longer be confined to safety-controlled industrial settings; instead, they will directly interact with the general public. The growing field of Human-Robot Interaction (HRI) studies various aspects of this scenario - from social norms to joint action to human-robot teams and more. Researchers in HRI have made great strides in developing models, methods, and algorithms for robots acting with and around humans, but these computational HRI models and algorithms generally do not come with formal guarantees and constraints on their operation. To enable human-interactive robots to move from the lab to real-world deployments, we must address this gap. This article provides an overview of verification, validation and synthesis techniques used to create demonstrably trustworthy systems, describes several HRI domains that could benefit from such techniques, and provides a roadmap for the challenges and the research needed to create formalized and guaranteed human-robot interaction.
Human-robot collaborations have been recognized as an essential component for future factories. It remains challenging to properly design the behavior of those co-robots. These robots operate in dynamic, uncertain environments with limited computation capacity. The design objective is to maximize their task efficiency while guaranteeing safety. This paper discusses a set of design principles of a safe and efficient robot collaboration system (SERoCS) for the next generation co-robots, which consists of robust cognition algorithms for environment monitoring, efficient task planning algorithms for reference generations, and safe motion planning and control algorithms for safe human-robot interactions. The proposed SERoCS will address the design challenges and significantly expand the skill sets of the co-robots to allow them to work safely and efficiently with their human counterparts. The development of SERoCS will create a significant advancement toward adoption of co-robots in various industries. The experiments validate the effectiveness of SERoCS.