
An Unified Intelligence-Communication Model for Multi-Agent System Part-I: Overview

Published by: Bo Zhang
Publication date: 2018
Research field: Informatics Engineering
Paper language: English





Motivated by Shannon's model and the recent revival of self-supervised artificial intelligence equipped with a World Model, this paper proposes a unified intelligence-communication (UIC) model for describing a single agent and any multi-agent system. Firstly, the environment is modelled as the generic communication channel between agents. Secondly, the UIC model adopts a learning-agent model to unify several well-adopted agent architectures, e.g. the rule-based agent model in complex adaptive systems, the layered model for describing human-level intelligence, and the world-model-based agent model. The model may also provide a unified approach to investigating a multi-agent system (MAS) having multiple action-perception modalities, e.g. explicit and implicit information transfer. This treatise is divided into three parts; this first part provides an overview of the UIC model without introducing cumbersome mathematical analysis and optimizations. The second part of this treatise provides case studies with quantitative analysis driven by the UIC model, exemplifying the adoption of the UIC model in multi-agent systems. Specifically, two representative cases are studied, namely the analysis of a natural multi-agent system and the co-design of communication, perception and action in an artificial multi-agent system. The third part of this treatise provides further insights and future research directions motivated by the UIC model, such as the unification of individual and collective intelligence, a possible explanation of intelligence emergence, and a dual model for the agent-environment intelligence hypothesis. Notes: This paper is a previewed version; the extended full version will be released after acceptance.
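The core idea above, treating the environment as a generic channel that carries both explicit messages and implicit action-perception signals between learning agents, can be illustrated with a minimal sketch. The class and method names below (Agent, Environment, perceive, act, world_model) are illustrative assumptions for this sketch, not the paper's notation or formalism.

```python
from dataclasses import dataclass
from typing import Any, Dict, List

# Hypothetical message unit carried through the environment "channel".
@dataclass
class Message:
    sender: str
    payload: Any

class Agent:
    """Minimal learning agent: perceives, updates an internal world model, acts."""
    def __init__(self, name: str):
        self.name = name
        self.world_model: Dict[str, Any] = {}  # internal state / world model

    def perceive(self, observations: List[Message]) -> None:
        # Update the internal world model from whatever the channel delivered.
        for msg in observations:
            self.world_model[msg.sender] = msg.payload

    def act(self) -> Message:
        # Emit an action; here it simply reports how many peers it has modelled.
        return Message(self.name, {"known_agents": len(self.world_model)})

class Environment:
    """The environment acting as a generic communication channel between agents."""
    def __init__(self, agents: List[Agent]):
        self.agents = agents

    def step(self) -> None:
        # Collect every agent's action, then deliver each action to all other
        # agents as an observation (explicit or implicit information transfer).
        actions = [a.act() for a in self.agents]
        for agent in self.agents:
            agent.perceive([m for m in actions if m.sender != agent.name])

env = Environment([Agent("a1"), Agent("a2"), Agent("a3")])
for _ in range(3):
    env.step()
print({a.name: a.world_model for a in env.agents})
```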




Read also

105 - Xinzhi Wang, Huao Li, Hui Zhang, 2020
Recently, there has been increasing interest in transparency and interpretability in Deep Reinforcement Learning (DRL) systems. Verbal explanations, as the most natural way of communication in our daily life, deserve more attention, since they allow users to gain a better understanding of the system, which ultimately could lead to a high level of trust and smooth collaboration. This paper reports a novel work in generating verbal explanations for DRL agent behaviors. A rule-based model is designed to construct explanations using a series of rules which are predefined with prior knowledge. A learning model is then proposed to extend the implicit logic of generating verbal explanations to general situations by employing rule-based explanations as training data. The learning model is shown to have better flexibility and generalizability than the static rule-based model. The performance of both models is evaluated quantitatively through objective metrics. The results show that verbal explanations generated by both models improve the subjective satisfaction of users with the interpretability of DRL systems. Additionally, seven variants of the learning model are designed to illustrate the contributions of the input channels, attention mechanism, and proposed encoder to the quality of the verbal explanations.
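A rule-based explanation generator of the kind described above can be sketched as a set of predefined condition-to-template rules applied to the agent's state. The state fields and explanation templates below are illustrative assumptions for this sketch, not the paper's actual rule set.

```python
# Minimal sketch of a rule-based verbal-explanation generator (illustrative only).
RULES = [
    # (condition over the agent's state, explanation template)
    (lambda s: s["enemy_distance"] < 2.0, "I attacked because an enemy was within close range."),
    (lambda s: s["health"] < 0.3,         "I retreated because my health was low."),
    (lambda s: s["goal_visible"],         "I moved forward because the goal was visible."),
]

def explain(state: dict) -> str:
    """Return the first matching predefined explanation, else a fallback."""
    for condition, template in RULES:
        if condition(state):
            return template
    return "I continued my current plan."

print(explain({"enemy_distance": 1.5, "health": 0.8, "goal_visible": False}))
```

In the paper's pipeline, explanations produced this way would then serve as training data for a learning model that generalizes beyond the predefined rules.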
In this work, we propose a novel memory-based multi-agent meta-learning architecture and learning procedure that allows for learning of a shared communication policy that enables the emergence of rapid adaptation to new and unseen environments by learning to learn learning algorithms through communication. Behavior, adaptation and learning to adapt emerge from the interactions of homogeneous experts inside a single agent. The proposed architecture should allow for generalization beyond the level seen in existing methods, in part due to the use of a single policy shared by all experts within the agent as well as the inherent modularity of Badger.
Recent studies have shown that introducing communication between agents can significantly improve overall performance in cooperative multi-agent reinforcement learning (MARL). However, existing communication schemes often require agents to exchange an excessive number of messages at run-time under a reliable communication channel, which hinders their practicality in many real-world situations. In this paper, we present Temporal Message Control (TMC), a simple yet effective approach for achieving succinct and robust communication in MARL. TMC applies a temporal smoothing technique to drastically reduce the amount of information exchanged between agents. Experiments show that TMC can significantly reduce inter-agent communication overhead without impacting accuracy. Furthermore, TMC demonstrates much better robustness against transmission loss than existing approaches in lossy networking environments.
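The temporal-smoothing idea can be pictured as a gate that only transmits a new message when it differs sufficiently from the last transmitted one, letting receivers reuse a cached copy otherwise. The distance measure and threshold below are illustrative assumptions, a simplified sketch rather than the TMC algorithm itself.

```python
import numpy as np

class SmoothedMessenger:
    """Sketch of temporal message smoothing: retransmit only on significant change."""
    def __init__(self, threshold: float = 0.1):
        self.threshold = threshold
        self.last_sent = None

    def maybe_send(self, message: np.ndarray):
        """Return the message if it changed enough since the last transmission,
        otherwise return None (receivers reuse the cached previous message)."""
        if self.last_sent is None or np.linalg.norm(message - self.last_sent) > self.threshold:
            self.last_sent = message.copy()
            return message
        return None

m = SmoothedMessenger(threshold=0.1)
print(m.maybe_send(np.array([0.50, 0.50])))   # transmitted
print(m.maybe_send(np.array([0.52, 0.49])))   # suppressed -> None
```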
Inter-agent communication can significantly increase performance in multi-agent tasks that require coordination to achieve a shared goal. Prior work has shown that it is possible to learn inter-agent communication protocols using multi-agent reinforcement learning and message-passing network architectures. However, these models use an unconstrained broadcast communication model, in which an agent communicates with all other agents at every step, even when the task does not require it. In real-world applications, where communication may be limited by system constraints like bandwidth, power and network capacity, one might need to reduce the number of messages that are sent. In this work, we explore a simple method of minimizing communication while maximizing performance in multi-task learning: simultaneously optimizing a task-specific objective and a communication penalty. We show that the objectives can be optimized using REINFORCE and the Gumbel-Softmax reparameterization. We introduce two techniques to stabilize training: 50% training and message forwarding. First, training with the communication penalty on only 50% of the episodes prevents our models from turning off their outgoing messages. Second, repeating messages received previously helps models retain information, and further improves performance. With these techniques, we show that we can reduce communication by 75% with no loss of performance.
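The combined objective described above, a task loss plus a penalty on each transmitted message, with the send/no-send decision made differentiable via a Gumbel-Softmax gate, can be sketched as follows. The network sizes, penalty weight, and the stand-in task loss are illustrative assumptions, not the paper's model.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class GatedCommAgent(nn.Module):
    """Sketch of a message gate trained with a communication penalty."""
    def __init__(self, obs_dim: int = 8, msg_dim: int = 4):
        super().__init__()
        self.msg_head = nn.Linear(obs_dim, msg_dim)   # message content
        self.gate_head = nn.Linear(obs_dim, 2)        # logits: [send, don't send]

    def forward(self, obs: torch.Tensor):
        # Differentiable binary send/no-send decision via Gumbel-Softmax.
        gate = F.gumbel_softmax(self.gate_head(obs), tau=1.0, hard=True)
        send = gate[..., 0:1]                         # 1.0 if sending, else 0.0
        message = send * torch.tanh(self.msg_head(obs))
        return message, send

agent = GatedCommAgent()
obs = torch.randn(16, 8)                              # batch of observations
message, send = agent(obs)

task_loss = message.pow(2).mean()                     # stand-in for the real task objective
comm_penalty = 0.01 * send.mean()                     # cost per transmitted message
loss = task_loss + comm_penalty
loss.backward()
```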
Agents are systems that optimize an objective function in an environment. Together, the goal and the environment induce secondary objectives, or incentives. Modeling the agent-environment interaction using causal influence diagrams, we can answer two fundamental questions about an agent's incentives directly from the graph: (1) which nodes can the agent have an incentive to observe, and (2) which nodes can the agent have an incentive to control? The answers tell us which information and influence points need extra protection. For example, we may want a classifier for job applications not to use the ethnicity of the candidate, and a reinforcement learning agent not to take direct control of its reward mechanism. Different algorithms and training paradigms can lead to different causal influence diagrams, so our method can be used to identify algorithms with problematic incentives and help in designing algorithms with better incentives.
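A causal influence diagram is just a directed graph with a decision node and a utility node, so graph reachability gives a rough feel for the two questions above. The sketch below uses deliberately simplified criteria (a control incentive requires the node to lie on a directed path from decision to utility; an observation incentive requires the node to both reach the decision and carry information about the utility). These are illustrative assumptions, not the paper's exact graphical criteria, and the node names are hypothetical.

```python
import networkx as nx

# Toy causal influence diagram: decision D, utility U, other chance nodes.
G = nx.DiGraph([
    ("ethnicity", "qualification_proxy"),
    ("skill", "qualification_proxy"),
    ("qualification_proxy", "D"),   # the classifier observes the proxy
    ("D", "hire"),
    ("hire", "U"),
    ("skill", "U"),
])

def may_control(G, node, decision="D", utility="U"):
    """Simplified criterion: node sits on a directed path decision -> node -> utility."""
    return node not in (decision, utility) and \
        nx.has_path(G, decision, node) and nx.has_path(G, node, utility)

def may_observe(G, node, decision="D", utility="U"):
    """Simplified criterion: the node can reach the decision (so the decision can
    depend on it) and also carries information about the utility."""
    return node not in (decision, utility) and \
        nx.has_path(G, node, decision) and nx.has_path(G, node, utility)

for n in G.nodes:
    print(n, "control:", may_control(G, n), "observe:", may_observe(G, n))
```

Under these simplified criteria the toy graph already flags "ethnicity" as a node the classifier could have an incentive to observe, which is exactly the kind of information point the paper argues needs extra protection.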
