
Transfer entropy dependent on distance among agents in quantifying leader-follower relationships

Added by Tamiki Komatsuzaki
Publication date: 2021
Fields: Physics
Language: English





Synchronized movement of both unicellular and multicellular systems can be observed almost everywhere. Understanding how organisms are regulated into synchronized behavior is one of the challenging issues in the field of collective motion. It is hypothesized that one or a few agents in a group, known as the leader(s), regulate the dynamics of the whole collective. Identifying these leader (influential) agents is therefore crucial. This article reviews different mathematical models that represent different types of leadership, with a focus on improving the leader-follower classification problem. Using a simulation model, it was found that incorporating interaction-domain information significantly improves leader-follower classification ability for both linear and information-theoretic schemes for quantifying influence. The article also reviews different schemes for identifying the interaction domain from the motion data of agents.
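To make the estimator concrete, the sketch below computes a binned pairwise transfer entropy between two agents' time series and can optionally restrict the estimate to time steps lying inside an assumed interaction domain, here a simple distance threshold. The bin count, the threshold, and the toy leader-follower data are illustrative assumptions, not the estimator or the domain definition used in the article.

import numpy as np

def transfer_entropy(source, target, mask=None, n_bins=8):
    # Binned estimate of transfer entropy T(source -> target), optionally
    # restricted to transitions selected by a boolean mask (e.g. an assumed
    # interaction domain defined by inter-agent distance).
    s = np.digitize(source, np.histogram_bin_edges(source, bins=n_bins)[1:-1])
    t = np.digitize(target, np.histogram_bin_edges(target, bins=n_bins)[1:-1])

    # Triplets (target future, target past, source past).
    tf, tp, sp = t[1:], t[:-1], s[:-1]
    if mask is not None:
        m = np.asarray(mask[:-1], dtype=bool)   # keep only in-domain transitions
        tf, tp, sp = tf[m], tp[m], sp[m]
    if tf.size == 0:
        return 0.0

    te = 0.0
    triples, counts = np.unique(np.stack([tf, tp, sp], axis=1), axis=0, return_counts=True)
    p_joint = counts / counts.sum()
    for (a, b, c), p in zip(triples, p_joint):
        p_bc = np.mean((tp == b) & (sp == c))   # p(x_t, y_t)
        p_ab = np.mean((tf == a) & (tp == b))   # p(x_{t+1}, x_t)
        p_b = np.mean(tp == b)                  # p(x_t)
        te += p * np.log2((p / p_bc) / (p_ab / p_b))
    return te

# Toy data (assumed): the follower copies the leader with a one-step delay,
# and a stand-in distance series defines the interaction-domain mask.
rng = np.random.default_rng(0)
leader = rng.normal(size=1000).cumsum()
follower = np.roll(leader, 1) + rng.normal(scale=0.1, size=1000)
in_domain = np.abs(rng.normal(scale=2.0, size=1000)) < 2.0

print("TE(leader -> follower), all steps:", transfer_entropy(leader, follower))
print("TE(leader -> follower), in domain:", transfer_entropy(leader, follower, in_domain))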




Jinha Park, B. Kahng (2020)
The features of animal population dynamics, for instance, flocking and migration, are often synchronized for survival under large-scale climate change or perceived threats. These coherent phenomena have been explained using synchronization models. However, such models do not take into account asynchronous and adaptive updating of an individual's status at each time step. Here, we modify the Kuramoto model slightly by classifying oscillators as leaders or followers according to their angular velocity at each time step, where individuals interact asymmetrically according to their leader/follower status. As the angular velocities of the oscillators are updated, the leader and follower status may also be reassigned. Owing to these adaptive dynamics, oscillators may cooperate by taking turns acting as a leader or follower. This may result in intriguing patterns of synchronization transitions, including hybrid phase transitions, and produce the leader-follower switching pattern observed in bird migration.
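A minimal numerical sketch of this idea is given below: at every step the oscillators are reclassified as leaders or followers from their instantaneous angular velocity, and the two groups are coupled with different strengths. The classification rule (faster than the median) and the asymmetric coupling constants are illustrative assumptions rather than the exact model of the paper.

import numpy as np

# Kuramoto-type model with adaptive leader/follower reassignment (sketch).
rng = np.random.default_rng(1)
N, dt, steps = 200, 0.05, 2000
K_follow, K_lead = 2.0, 0.2                 # assumed asymmetric coupling strengths
omega = rng.normal(0.0, 1.0, N)             # natural frequencies
theta = rng.uniform(0.0, 2 * np.pi, N)
dtheta = omega.copy()                       # angular velocities from the previous step

for _ in range(steps):
    leaders = dtheta > np.median(dtheta)    # leader = faster-than-median oscillator (assumption)
    # Mean-field coupling of every oscillator to the leader and follower groups.
    to_leaders = np.sin(theta[leaders] - theta[:, None]).mean(axis=1)
    to_followers = np.sin(theta[~leaders] - theta[:, None]).mean(axis=1)
    # Followers couple strongly to the leaders; leaders only weakly to the followers.
    K = np.where(leaders, K_lead, K_follow)
    drive = np.where(leaders, to_followers, to_leaders)
    dtheta = omega + K * drive
    theta = (theta + dtheta * dt) % (2 * np.pi)

# Kuramoto order parameter as a crude synchrony measure.
r = np.abs(np.exp(1j * theta).mean())
print(f"order parameter r = {r:.3f}")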
Hierarchical networks are prevalent in nature and society, corresponding to groups of actors - animals, humans or even robots - organised according to a pyramidal structure with decision makers at the top and followers at the bottom. While this phenomenon is seemingly universal, the underlying governing principles are poorly understood. Here we study the emergence of hierarchies in groups of people playing a simple dot guessing game in controlled experiments, lasting for about 40 rounds, conducted over the Internet. During the games, the players had the possibility to look at the answers of a limited number of other players of their choice. This act of asking for advice defines a directed connection between the involved players, and according to our analysis, the initially random configuration of the emerging networks became more structured over time, showing signs of hierarchy towards the end of the game. In addition, the achieved score of the players appeared to be correlated with their position in the hierarchy. These results indicate that, under certain conditions, imitation and limited knowledge about the performance of other actors are sufficient for the emergence of hierarchy in a social group.
Transfer learning has shown great potential to enhance single-agent reinforcement learning (RL) efficiency. Similarly, multiagent RL (MARL) can also be accelerated if agents can share knowledge with each other. However, how an agent should learn from other agents remains an open problem. In this paper, we propose a novel Multiagent Option-based Policy Transfer (MAOPT) framework to improve MARL efficiency. MAOPT learns what advice to provide to each agent, and when to terminate it, by modeling multiagent policy transfer as an option learning problem. Our framework provides two kinds of option learning methods that differ in what experience is used during training. One is the global option advisor, which uses global experience for the update. The other is the local option advisor, which uses each agent's local experience when only local experience can be obtained due to partial observability. In this setting, however, the agents' experiences may be inconsistent with each other, which can cause inaccuracy and oscillation in the option-value estimation. We therefore propose successor-representation option learning, which addresses this by decoupling the environment dynamics from the rewards and learning the option-value under each agent's preference. MAOPT can easily be combined with existing deep RL and MARL approaches, and experimental results show that it significantly boosts the performance of existing methods in both discrete and continuous state spaces.
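For orientation only, the heavily simplified sketch below shows a generic tabular "option advisor" that chooses which advice option to provide in a given state and decides when to terminate it. The class name, the fixed termination probability, and the SMDP-style Q-learning update are assumptions for illustration; they do not reproduce the MAOPT update rules, the global/local advisor variants, or the successor-representation learning described above.

import numpy as np

class OptionAdvisor:
    # Tabular advisor: option-values Q(s, o) over which advice option to give.
    def __init__(self, n_states, n_options, alpha=0.1, gamma=0.99, beta=0.05):
        self.q = np.zeros((n_states, n_options))
        self.alpha, self.gamma, self.beta = alpha, gamma, beta

    def choose_option(self, state, eps=0.1):
        # Epsilon-greedy choice of which advice option to provide.
        if np.random.rand() < eps:
            return np.random.randint(self.q.shape[1])
        return int(np.argmax(self.q[state]))

    def should_terminate(self, state, option):
        # Terminate advice with a fixed probability (placeholder for a learned
        # termination condition).
        return np.random.rand() < self.beta

    def update(self, state, option, reward, next_state, done):
        # One-step option-value update in the style of SMDP Q-learning.
        target = reward if done else reward + self.gamma * self.q[next_state].max()
        self.q[state, option] += self.alpha * (target - self.q[state, option])

advisor = OptionAdvisor(n_states=10, n_options=3)
o = advisor.choose_option(state=0)
advisor.update(state=0, option=o, reward=1.0, next_state=1, done=False)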
The concept of leader-follower (or Stackelberg) equilibrium plays a central role in a number of real-world applications of game theory. While the case with a single follower has been thoroughly investigated, results with multiple followers are only sporadic, and the problem of designing and evaluating computationally tractable equilibrium-finding algorithms is still largely open. In this work, we focus on the fundamental case where multiple followers play a Nash equilibrium once the leader has committed to a strategy. As we illustrate, the corresponding equilibrium-finding problem can easily be shown to be $\mathcal{FNP}$-hard and not in Poly-$\mathcal{APX}$ unless $\mathcal{P} = \mathcal{NP}$; it is therefore among the hardest problems to solve and approximate. We propose nonconvex mathematical programming formulations and global optimization methods to find both exact and approximate equilibria, as well as a heuristic black-box algorithm. All the methods and formulations that we introduce are thoroughly evaluated computationally.
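As a point of reference for the leader-follower (Stackelberg) concept itself, the toy example below solves the much simpler single-follower case with pure-strategy commitment by enumeration: the leader commits to an action, the follower best-responds, and the leader keeps the commitment that maximizes its own payoff. The payoff matrices are made up; none of the multi-follower formulations or optimization methods of the paper are reproduced.

import numpy as np

# Bimatrix game: rows are leader actions, columns are follower actions.
leader_payoff = np.array([[2, 4],
                          [1, 3]])
follower_payoff = np.array([[1, 0],
                            [0, 2]])

best_value, best_commit = -np.inf, None
for a in range(leader_payoff.shape[0]):          # leader commits to row a
    br = int(np.argmax(follower_payoff[a]))      # follower best-responds
    if leader_payoff[a, br] > best_value:
        best_value, best_commit = leader_payoff[a, br], a

print(f"leader commits to action {best_commit}, value {best_value}")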
Su Zhang, Yi Ding, Ziquan Wei (2021)
We propose an audio-visual spatial-temporal deep neural network with: (1) a visual block containing a pretrained 2D-CNN followed by a temporal convolutional network (TCN); (2) an aural block containing several parallel TCNs; and (3) a leader-follower attentive fusion block combining the audio-visual information. The TCN, with its large history coverage, enables our model to exploit spatial-temporal information within a much larger window length (i.e., 300) than the baseline and state-of-the-art methods (i.e., 36 or 48). The fusion block emphasizes the visual modality while exploiting the noisy aural modality via an inter-modality attention mechanism. To make full use of the data and alleviate over-fitting, cross-validation is carried out on the training and validation sets. Concordance correlation coefficient (CCC) centering is used to merge the results from each fold. On the test (validation) set of the Aff-Wild2 database, the achieved CCC is 0.463 (0.469) for valence and 0.492 (0.649) for arousal, which significantly outperforms the baseline method with corresponding CCCs of 0.200 (0.210) and 0.190 (0.230) for valence and arousal, respectively. The code is available at https://github.com/sucv/ABAW2.
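The concordance correlation coefficient quoted above is a standard agreement measure; a minimal implementation is sketched below for reference. The toy prediction series is an assumption used only to show that, unlike Pearson correlation, CCC also penalizes scale and offset bias; the paper's CCC-centering procedure for merging folds is not reproduced.

import numpy as np

def ccc(y_true, y_pred):
    # CCC = 2*cov(x, y) / (var(x) + var(y) + (mean(x) - mean(y))**2)
    y_true, y_pred = np.asarray(y_true, float), np.asarray(y_pred, float)
    mu_t, mu_p = y_true.mean(), y_pred.mean()
    cov = ((y_true - mu_t) * (y_pred - mu_p)).mean()
    return 2 * cov / (y_true.var() + y_pred.var() + (mu_t - mu_p) ** 2)

# Toy example (assumed data): a prediction that tracks the target but with an
# offset and scaling scores lower on CCC than on Pearson correlation.
t = np.linspace(-1, 1, 200)
pred = 0.8 * t + 0.1
print(f"CCC       = {ccc(t, pred):.3f}")
print(f"Pearson r = {np.corrcoef(t, pred)[0, 1]:.3f}")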