Modelling and Estimation of Human Walking Gait for Physical Human-Robot Interaction

Published by: Yash Vyas
Publication date: 2021
Research field: Informatics Engineering
Paper language: English

An approach to model and estimate human walking kinematics in real time for Physical Human-Robot Interaction is presented. The human gait velocity along the forward and vertical directions of motion is modelled according to the Yoyo-model. We design an Extended Kalman Filter (EKF) algorithm to estimate the frequency, bias, and trigonometric state of a biased sinusoidal signal, from which the kinematic parameters of the Yoyo-model can be extracted. The quality and robustness of the estimation are improved by suitable heuristic-based filtering. The approach is successfully evaluated on a real dataset of walking humans, including complex trajectories and step frequencies that change over time.
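The abstract does not reproduce the estimator equations, but one common EKF formulation for a biased sinusoid tracks the state x = [s, c, ω, b], with s = A sin θ and c = A cos θ rotated by ω·dt each step and the scalar measurement y = s + b. The following is a minimal sketch under that assumption; the state layout, class name, and noise values are illustrative choices, not the authors' implementation.

```python
import numpy as np

class BiasedSinusoidEKF:
    """Sketch of an EKF tracking y = b + A*sin(theta), theta' = theta + omega*dt.

    State x = [s, c, omega, b] with s = A*sin(theta), c = A*cos(theta).
    """

    def __init__(self, dt, q=1e-3, r=1e-2):
        self.dt = dt
        self.x = np.array([0.0, 1.0, 2.0 * np.pi, 0.0])  # initial guess: 1 Hz
        self.P = np.eye(4)
        self.Q = q * np.eye(4)                    # process noise (tuning assumption)
        self.R = r                                # measurement noise (tuning assumption)
        self.H = np.array([1.0, 0.0, 0.0, 1.0])  # measurement y = s + b

    def step(self, z):
        s, c, w, b = self.x
        dt = self.dt
        cwt, swt = np.cos(w * dt), np.sin(w * dt)
        # Predict: rotate the (s, c) pair by omega*dt; omega and b stay constant.
        x_pred = np.array([s * cwt + c * swt, -s * swt + c * cwt, w, b])
        F = np.array([
            [ cwt, swt, dt * (-s * swt + c * cwt), 0.0],
            [-swt, cwt, dt * (-s * cwt - c * swt), 0.0],
            [ 0.0, 0.0, 1.0,                       0.0],
            [ 0.0, 0.0, 0.0,                       1.0],
        ])
        P_pred = F @ self.P @ F.T + self.Q
        # Update with the scalar velocity measurement z.
        S = self.H @ P_pred @ self.H + self.R
        K = P_pred @ self.H / S
        self.x = x_pred + K * (z - self.H @ x_pred)
        self.P = (np.eye(4) - np.outer(K, self.H)) @ P_pred
        return self.x
```

From the estimated state, the step frequency is x[2]/(2π), the oscillation amplitude is hypot(x[0], x[1]), and x[3] is the velocity bias; the heuristic filtering the abstract mentions would gate these estimates before mapping them to Yoyo-model parameters.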



Read also

Robot capabilities are maturing across domains, from self-driving cars, to bipeds and drones. As a result, robots will soon no longer be confined to safety-controlled industrial settings; instead, they will directly interact with the general public. The growing field of Human-Robot Interaction (HRI) studies various aspects of this scenario - from social norms to joint action to human-robot teams and more. Researchers in HRI have made great strides in developing models, methods, and algorithms for robots acting with and around humans, but these computational HRI models and algorithms generally do not come with formal guarantees and constraints on their operation. To enable human-interactive robots to move from the lab to real-world deployments, we must address this gap. This article provides an overview of verification, validation and synthesis techniques used to create demonstrably trustworthy systems, describes several HRI domains that could benefit from such techniques, and provides a roadmap for the challenges and the research needed to create formalized and guaranteed human-robot interaction.
In this paper, we present an approach for robot learning of social affordance from human activity videos. We consider the problem in the context of human-robot interaction: Our approach learns structural representations of human-human (and human-object-human) interactions, describing how body-parts of each agent move with respect to each other and what spatial relations they should maintain to complete each sub-event (i.e., sub-goal). This enables the robot to infer its own movement in reaction to the human body motion, allowing it to naturally replicate such interactions. We introduce the representation of social affordance and propose a generative model for its weakly supervised learning from human demonstration videos. Our approach discovers critical steps (i.e., latent sub-events) in an interaction and the typical motion associated with them, learning what body-parts should be involved and how. The experimental results demonstrate that our Markov Chain Monte Carlo (MCMC) based learning algorithm automatically discovers semantically meaningful interactive affordance from RGB-D videos, which allows us to generate appropriate full body motion for an agent.
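As a rough, hypothetical illustration of the MCMC ingredient only (the paper's weakly supervised generative model is far richer), the sketch below uses a Metropolis-Hastings sampler to locate a single latent sub-event boundary in a 1D motion feature; the function names and the Gaussian per-segment likelihood are assumptions.

```python
import numpy as np

def seg_loglik(x):
    # Gaussian segment likelihood around the segment's own mean (an assumption).
    return -0.5 * np.sum((x - x.mean()) ** 2) if len(x) >= 2 else -np.inf

def mh_changepoint(signal, iters=2000, seed=0):
    """Metropolis-Hastings over one latent sub-event boundary."""
    rng = np.random.default_rng(seed)
    n = len(signal)
    t = n // 2  # initial boundary guess
    ll = seg_loglik(signal[:t]) + seg_loglik(signal[t:])
    for _ in range(iters):
        t_new = int(np.clip(t + rng.integers(-5, 6), 2, n - 2))  # local random walk
        ll_new = seg_loglik(signal[:t_new]) + seg_loglik(signal[t_new:])
        if np.log(rng.random()) < ll_new - ll:  # symmetric proposal: plain MH ratio
            t, ll = t_new, ll_new
    return t

# Two synthetic "sub-events" with different mean feature values.
rng = np.random.default_rng(1)
sig = np.concatenate([rng.normal(0.0, 0.1, 100), rng.normal(1.0, 0.1, 100)])
print(mh_changepoint(sig))  # recovers a boundary near index 100
```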
When a robot performs a task next to a human, physical interaction is inevitable: the human might push, pull, twist, or guide the robot. The state-of-the-art treats these interactions as disturbances that the robot should reject or avoid. At best, these robots respond safely while the human interacts; but after the human lets go, these robots simply return to their original behavior. We recognize that physical human-robot interaction (pHRI) is often intentional -- the human intervenes on purpose because the robot is not doing the task correctly. In this paper, we argue that when pHRI is intentional it is also informative: the robot can leverage interactions to learn how it should complete the rest of its current task even after the person lets go. We formalize pHRI as a dynamical system, where the human has in mind an objective function they want the robot to optimize, but the robot does not get direct access to the parameters of this objective -- they are internal to the human. Within our proposed framework human interactions become observations about the true objective. We introduce approximations to learn from and respond to pHRI in real-time. We recognize that not all human corrections are perfect: often users interact with the robot noisily, and so we improve the efficiency of robot learning from pHRI by reducing unintended learning. Finally, we conduct simulations and user studies on a robotic manipulator to compare our proposed approach to the state-of-the-art. Our results indicate that learning from pHRI leads to better task performance and improved human satisfaction.
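The exact approximations are internal to the paper, but frameworks of this kind often reduce to an online gradient step over trajectory features after each physical correction. The sketch below shows that style of update; the linear feature model, function names, and learning rate are all illustrative assumptions.

```python
import numpy as np

def update_objective_weights(theta, feat_corrected, feat_planned, alpha=0.1):
    # The human physically deformed the trajectory toward features they
    # prefer, so shift the weight estimate in that direction.
    return theta + alpha * (feat_corrected - feat_planned)

# Hypothetical feature counts: [distance to human, effort, table clearance]
theta = np.zeros(3)
theta = update_objective_weights(theta,
                                 feat_corrected=np.array([0.2, 1.1, 0.9]),
                                 feat_planned=np.array([0.5, 1.0, 0.9]))
print(theta)  # weight estimate shifts toward the corrected trajectory's features
```

After each correction the robot would replan under the updated weights; shrinking alpha, or updating only the features the correction changed most, is one plausible way to curb the unintended learning the abstract mentions.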
Visak Kumar, 2021
In this work, we develop an automated method to generate 3D human walking motion in simulation which is comparable to real-world human motion. At the core, our work leverages the ability of deep reinforcement learning methods to learn high-dimensional motor skills while being robust to variations in the environment dynamics. Our approach iterates between policy learning and parameter identification to match the real-world bio-mechanical human data. We present a thorough evaluation of the kinematics, kinetics and ground reaction forces generated by our learned virtual human agent. We also show that the method generalizes well across human subjects with different kinematic structure and gait characteristics.
Today, physical Human-Robot Interaction (pHRI) is a very popular topic in the field of ground manipulation. At the same time, Aerial Physical Interaction (APhI) is also developing very fast. Nevertheless, pHRI with aerial vehicles has not been addressed so far. In this work, we present the study of one of the first systems in which a human is physically connected to an aerial vehicle by a cable. We want the robot to be able to pull the human toward a desired position (or along a path) only using forces as an indirect communication channel. We propose an admittance-based approach that makes pHRI safe. A controller, inspired by the literature on flexible manipulators, computes the desired interaction forces that properly guide the human. The stability of the system is formally proved with a Lyapunov-based argument. The system is also shown to be passive, and thus robust to non-idealities like additional human forces, time-varying inputs, and other external disturbances. We also design a maneuver regulation policy to simplify the path following problem. The global method has been experimentally validated on a group of four subjects, showing a reliable and safe pHRI.
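The controller details and tuned gains are in the paper; below is a minimal 1-D admittance sketch of the general idea, where the measured cable force drives a virtual mass-damper-spring whose output becomes the position setpoint sent to the vehicle's low-level controller. The class name, gains, and explicit Euler integration are illustrative assumptions.

```python
class Admittance1D:
    """Minimal 1-D admittance law: M*a + D*v + K*(x - x_ref) = f_ext."""

    def __init__(self, m=1.0, d=8.0, k=5.0, dt=0.01):
        self.m, self.d, self.k, self.dt = m, d, k, dt
        self.x, self.v = 0.0, 0.0  # virtual state around the reference

    def step(self, f_ext, x_ref):
        # Integrate the virtual dynamics one step (explicit Euler).
        a = (f_ext - self.d * self.v - self.k * (self.x - x_ref)) / self.m
        self.v += a * self.dt
        self.x += self.v * self.dt
        return self.x  # compliant setpoint for the position controller
```

This fragment only shows the input-output shape of an admittance filter; the passivity and stability guarantees the abstract claims are properties of the full system proved in the paper, not of this toy loop.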