
The homunculus for proprioception: Toward learning the representation of a humanoid robot's joint space using self-organizing maps

Posted by Filipe Gama
Publication date: 2019
Paper language: English





In primate brains, tactile and proprioceptive inputs are relayed to the somatosensory cortex, which is known for its somatotopic representations, or homunculi. Our research centers on understanding the mechanisms behind the formation of these and of higher-level body representations (the body schema) by using humanoid robots and neural networks to construct models. We specifically focus on how a spatial representation of the body may be learned from somatosensory information in self-touch configurations. In this work, we target the representation of proprioceptive inputs, which we take to be the joint angles of the robot. The inputs collected in different body postures serve as inputs to a Self-Organizing Map (SOM) with a 2D lattice on the output. With unrestricted, all-to-all connections, the map is not capable of representing the input space while preserving the topological relationships, because the intrinsic dimensionality of the body posture space is too large. Hence, we use a method we developed previously for tactile inputs (Hoffmann, Straka et al. 2018), called MRF-SOM, in which the Maximum Receptive Field of output neurons is restricted so that they only learn to represent specific parts of the input space. This is in line with the receptive fields of neurons in somatosensory areas representing proprioception, which often respond to combinations of only a few joints (e.g., wrist and elbow).
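As a rough illustration of the restricted-receptive-field idea (not the authors' implementation; the lattice size, receptive-field size, and learning schedule below are made up), the following sketch trains a 2D SOM in which each output neuron computes its distance to the input, and updates its weights, only over a small subset of the joint-angle dimensions:

```python
import numpy as np

# Illustrative sketch of a SOM whose output neurons have restricted receptive
# fields over the input dimensions; the actual MRF-SOM of Hoffmann, Straka
# et al. (2018) may differ in detail.
rng = np.random.default_rng(0)

n_joints = 8          # dimensionality of the proprioceptive input (joint angles)
grid = (10, 10)       # 2D output lattice
rf_size = 2           # each neuron represents only this many joints

# lattice coordinates of every output neuron
coords = np.array([(i, j) for i in range(grid[0]) for j in range(grid[1])])
n_units = len(coords)

# random weights and a random restricted receptive field (mask) per neuron
weights = rng.uniform(-1.0, 1.0, size=(n_units, n_joints))
masks = np.zeros((n_units, n_joints), dtype=bool)
for m in masks:
    m[rng.choice(n_joints, size=rf_size, replace=False)] = True

def train(samples, epochs=20, lr0=0.5, sigma0=3.0):
    for t in range(epochs):
        lr = lr0 * (1.0 - t / epochs)
        sigma = sigma0 * (1.0 - t / epochs) + 0.5
        for x in samples:
            # distance measured only inside each neuron's receptive field
            d = np.where(masks, weights - x, 0.0)
            winner = np.argmin(np.linalg.norm(d, axis=1))
            # Gaussian neighborhood on the 2D output lattice
            g = np.exp(-np.sum((coords - coords[winner]) ** 2, axis=1) / (2 * sigma ** 2))
            # update only the masked (represented) input dimensions
            weights[:] -= lr * g[:, None] * np.where(masks, weights - x, 0.0)

# joint-angle samples, e.g. collected in self-touch postures (synthetic here)
samples = rng.uniform(-1.0, 1.0, size=(500, n_joints))
train(samples)
```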


Read also

The mechanisms of infant development are far from understood. Learning about one's own body is likely a foundation for subsequent development. Here we look specifically at the problem of how spontaneous touches to the body in early infancy may give rise to first body models and bootstrap further development, such as reaching competence. Unlike visually elicited reaching, reaching to one's own body requires connecting the tactile and motor spaces only, bypassing vision. Still, the problems of high dimensionality and redundancy of the motor system persist. In this work, we present an embodied computational model on a simulated humanoid robot with artificial sensitive skin on large areas of its body. The robot should autonomously develop the capacity to reach for every tactile sensor on its body. To do this efficiently, we employ the computational framework of intrinsic motivations and variants of goal babbling, as opposed to motor babbling, which prove to make the exploration process faster and alleviate the ill-posedness of learning inverse kinematics. Based on our results, we discuss the next steps in relation to infant studies: what information will be necessary to further ground this computational model in behavioral data.
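For intuition, here is a toy sketch of goal babbling on a planar 2-link arm: goals are sampled in task space, the current inverse model proposes a command, exploratory noise is added, and the model is updated from the observed outcome. The paper's embodied model (full humanoid, tactile goals, intrinsic motivations) is far richer; all names and parameters below are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)
l1, l2 = 1.0, 1.0   # link lengths of the toy arm

def forward(q):
    """Forward kinematics of the 2-link arm (hand position)."""
    return np.array([l1 * np.cos(q[0]) + l2 * np.cos(q[0] + q[1]),
                     l1 * np.sin(q[0]) + l2 * np.sin(q[0] + q[1])])

# memory of (hand position, joint command) pairs; the inverse model is a
# simple nearest-neighbour lookup over this memory
memory_x, memory_q = [np.zeros(2)], [np.zeros(2)]

def inverse(goal):
    d = [np.linalg.norm(goal - x) for x in memory_x]
    return memory_q[int(np.argmin(d))]

for step in range(2000):
    goal = rng.uniform(-2.0, 2.0, size=2)           # sample a task-space goal
    q = inverse(goal) + rng.normal(0, 0.1, size=2)  # exploratory noise on the command
    x = forward(q)                                   # execute and observe the outcome
    memory_x.append(x)                               # learn from what was actually reached
    memory_q.append(q)

# after exploration: query the inverse model for a new goal
print(forward(inverse(np.array([1.0, 1.0]))))
```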
This paper presents a new learning framework that leverages knowledge from imitation learning, deep reinforcement learning, and control theory to achieve human-style locomotion that is natural, dynamic, and robust for humanoids. We proposed novel approaches to introduce human bias, i.e. motion capture data and a special Multi-Expert network structure. We used the Multi-Expert network structure to smoothly blend behavioral features, and used an augmented reward design combining task and imitation rewards. Our reward design is composable, tunable, and explainable by using fundamental concepts from conventional humanoid control. We rigorously validated and benchmarked the learning framework, which consistently produced robust locomotion behaviors in various test scenarios. Further, we demonstrated the capability of learning robust and versatile policies in the presence of disturbances such as terrain irregularities and external pushes.
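A hedged sketch of what such a composable task-plus-imitation reward can look like; the actual terms and weights used in the paper are not reproduced here, and everything below is illustrative.

```python
import numpy as np

def imitation_reward(qpos, ref_qpos, scale=2.0):
    """Higher when the current joint pose is close to the mocap reference frame."""
    return np.exp(-scale * np.sum((qpos - ref_qpos) ** 2))

def task_reward(base_vel, target_vel, upright):
    """Reward forward progress at the commanded velocity while staying upright."""
    return np.exp(-np.abs(base_vel - target_vel)) * upright

def total_reward(qpos, ref_qpos, base_vel, target_vel, upright,
                 w_imit=0.6, w_task=0.4):
    # each term is bounded and individually inspectable, so the composite
    # reward stays tunable and explainable
    return (w_imit * imitation_reward(qpos, ref_qpos)
            + w_task * task_reward(base_vel, target_vel, upright))
```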
Humanoid robots could be versatile and intuitive human avatars that operate remotely in inaccessible places: the robot could reproduce in the remote location the movements of an operator equipped with a wearable motion capture device while sending visual feedback to the operator. While substantial progress has been made on transferring (retargeting) human motions to humanoid robots, a major problem preventing the deployment of such systems in real applications is the presence of communication delays between the human input and the feedback from the robot: even a few hundred milliseconds of delay can irreversibly disturb the operator, let alone a few seconds. To overcome these delays, we introduce a system in which a humanoid robot executes commands before it actually receives them, so that the visual feedback appears to be synchronized to the operator, whereas the robot executed the commands in the past. To do so, the robot continuously predicts future commands by querying a machine learning model that is trained on past trajectories and conditioned on the last received commands. In our experiments, an operator was able to successfully control a humanoid robot (32 degrees of freedom) with stochastic delays up to 2 seconds in several whole-body manipulation tasks, including reaching different targets, picking up a box, and placing it at distinct locations.
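A minimal sketch of the command-prediction idea, assuming a simple ridge regressor over a fixed history window; the paper's model and conditioning scheme may differ, and HISTORY, DELAY, and the synthetic data below are illustrative.

```python
import numpy as np
from sklearn.linear_model import Ridge

HISTORY = 10   # number of past commands the predictor is conditioned on
DELAY = 5      # how many steps ahead we must predict (communication delay)

def make_dataset(trajectory):
    """Build (history window -> command DELAY steps later) training pairs."""
    X, y = [], []
    for t in range(len(trajectory) - HISTORY - DELAY):
        X.append(trajectory[t:t + HISTORY].ravel())
        y.append(trajectory[t + HISTORY + DELAY - 1])
    return np.array(X), np.array(y)

# past operator trajectories (synthetic joint commands here, 32 DoF)
past = np.cumsum(np.random.default_rng(2).normal(0, 0.01, size=(5000, 32)), axis=0)
X, y = make_dataset(past)
model = Ridge(alpha=1.0).fit(X, y)

# at run time: predict the command to execute now from the last received ones
last_received = past[-HISTORY:]
predicted_now = model.predict(last_received.ravel()[None, :])[0]
```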
Stable bipedal walking is a key prerequisite for humanoid robots to reach their potential of being versatile helpers in our everyday environments. Bipedal walking is, however, a complex motion that requires the coordination of many degrees of freedom while it is also inherently unstable and sensitive to disturbances. The balance of a walking biped has to be constantly maintained. The most effective way of controlling balance is well-timed and well-placed recovery steps -- capture steps -- that absorb the excess momentum gained from a push or a stumble. We present a bipedal gait generation framework that utilizes step timing and foot placement techniques in order to recover the balance of a biped even after strong disturbances. Our framework modifies the next footstep location instantly when responding to a disturbance and generates controllable omnidirectional walking using only very little sensing and computational power. We exploit the open-loop stability of a central pattern generated gait to fit a linear inverted pendulum model to the observed center of mass trajectory. Then, we use the fitted model to predict suitable footstep locations and timings in order to maintain balance while following a target walking velocity. Our experiments show qualitative and statistical evidence of one of the strongest push-recovery capabilities among humanoid robots to date.
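To make the capture-step notion concrete, here is a small sketch using the standard linear inverted pendulum capture point, xi = x + xdot/omega with omega = sqrt(g/z0). The framework described above additionally fits the LIP model to the observed center-of-mass trajectory and adjusts step timing, which this toy snippet does not do; the numbers are purely illustrative.

```python
import numpy as np

g = 9.81    # gravity [m/s^2]
z0 = 0.8    # constant CoM height assumed by the LIP model [m]
omega = np.sqrt(g / z0)

def capture_point(com_pos, com_vel):
    """2D capture point: place the next footstep here so the pendulum comes to rest."""
    return com_pos + com_vel / omega

# e.g. after a push the CoM is above the stance foot but moving forward at 0.6 m/s
com_pos = np.array([0.0, 0.0])
com_vel = np.array([0.6, 0.1])
print("place next step near:", capture_point(com_pos, com_vel))
```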
We perform a Systematic Literature Review to discover how humanoid robots are being applied in Socially Assistive Robotics experiments. Our search returned 24 papers, of which 16 were included for closer analysis. For this analysis we used a conceptual framework inspired by Behavior-based Robotics. We were interested in finding out which robot was used (most use the robot NAO), what the goals of the application were (teaching, assisting, playing, instructing), how the robot was controlled (manually in most of the experiments), what kind of behaviors the robot exhibited (reacting to touch, pointing at body parts, singing a song, dancing, among others), what kind of actuators the robot used (always motors, sometimes speakers, hardly ever any other type of actuator), and what kind of sensors the robot used (in many studies the robot did not use any sensors at all; in others the robot frequently used a camera and/or microphone). The results of this study can be used for designing software frameworks targeting Humanoid Socially Assistive Robotics, especially in the context of Software Product Line Engineering projects.