Extensive experiments in Human Movement Science suggest that solo motions are characterized by unique features that define the individuality, or motor signature, of a person. While interacting with others, humans tend to spontaneously coordinate their movement and unconsciously give rise to joint improvisation. However, light has yet to be shed on the relationship between individuality and joint improvisation. By means of an ad-hoc virtual agent, in this work we uncover the internal mechanisms of the transition from solo to joint improvised motion in the mirror game, a simple yet effective paradigm for studying interpersonal human coordination. Based on the analysis of experimental data, normalized velocity segments in solo motion are regarded as the individual motor signature, and the existence of velocity segments possessing a prescribed signature is theoretically guaranteed. We first develop a systematic approach based on velocity segments to generate in-silico trajectories of a given human participant playing solo. We then present an online algorithm that allows the virtual player to produce joint improvised motion with another agent while exhibiting some desired kinematic characteristics, and to account for movement coordination and mutual adaptation during joint action tasks. Finally, we demonstrate that the proposed approach succeeds in capturing the transition of kinematic features from solo to joint improvised motion, thus revealing the existence of a tight relationship between individuality and joint improvisation.
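As a rough illustration of the velocity-segment idea described above, the following Python sketch shows one way an in-silico solo trajectory could be synthesized by splitting a recorded velocity time series into segments, normalizing them, and recombining samples of them. The segmentation rule (zero crossings), the normalization (unit peak amplitude), and the function names are assumptions made for the sketch, not the paper's exact procedure.

```python
import numpy as np

def extract_segments(velocity):
    """Split a 1-D velocity time series at zero crossings and
    normalize each segment to unit peak amplitude (assumed convention)."""
    crossings = np.where(np.diff(np.sign(velocity)) != 0)[0] + 1
    bounds = np.concatenate(([0], crossings, [len(velocity)]))
    segments = []
    for start, end in zip(bounds[:-1], bounds[1:]):
        seg = velocity[start:end]
        peak = np.max(np.abs(seg))
        if peak > 0:
            segments.append(seg / peak)
    return segments

def synthesize_trajectory(segments, n_segments, amplitude_scale=1.0, seed=None):
    """Generate a synthetic velocity profile by resampling stored segments
    (a stand-in for the motor signature) and integrating to position."""
    rng = np.random.default_rng(seed)
    chosen = [segments[i] for i in rng.integers(0, len(segments), n_segments)]
    velocity = np.concatenate([amplitude_scale * s for s in chosen])
    position = np.cumsum(velocity)  # simple Euler integration, unit time step
    return velocity, position

# Toy usage: a noisy sinusoidal velocity stands in for recorded solo motion.
t = np.linspace(0, 10, 1000)
recorded_velocity = np.sin(2 * np.pi * 0.5 * t) + 0.2 * np.random.default_rng(0).normal(size=len(t))
segments = extract_segments(recorded_velocity)
v_sim, x_sim = synthesize_trajectory(segments, n_segments=8, seed=1)
```

In such a scheme, the distribution of segment shapes and amplitudes carries the individuality of the player, while the online joint-improvisation algorithm would additionally adapt segment timing and scaling to the partner's motion.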