Value-based argumentation enhances a classical abstract argumentation graph - in which arguments are modelled as nodes connected by directed arrows called attacks - with labels on arguments, called values, and an ordering on values, called an audience, to provide a more fine-grained justification of the attack relation. When more than one agent faces such an argumentation problem, the agents may differ in their ranking of values. When needing to reach a collective view, such agents face a dilemma between two equally justifiable approaches: aggregating their views at the level of values, or aggregating their attack relations, thereby remaining at the level of the graphs. We explore the strengths and limitations of both approaches, employing techniques from preference aggregation and graph aggregation, and propose a third possibility: aggregating the rankings extracted from the given attack relations.
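For concreteness, a minimal sketch of the standard value-based semantics, in which an attack from a to b is defeated exactly when the audience strictly prefers the value promoted by b to that promoted by a (function and variable names are illustrative, not taken from the paper):

```python
# Sketch of a value-based argumentation framework (VAF).
# An attack (a, b) succeeds for a given audience unless the audience
# strictly prefers the value promoted by b to the value promoted by a.

def successful_attacks(attacks, value_of, audience_rank):
    """attacks: set of (attacker, target) pairs.
    value_of: maps each argument to its value label.
    audience_rank: maps each value to a rank (lower = more preferred)."""
    return {
        (a, b) for (a, b) in attacks
        if not audience_rank[value_of[b]] < audience_rank[value_of[a]]
    }

# Example: two mutually attacking arguments promoting different values.
attacks = {("a", "b"), ("b", "a")}
value_of = {"a": "liberty", "b": "safety"}
audience = {"safety": 0, "liberty": 1}   # this audience ranks safety above liberty
print(successful_attacks(attacks, value_of, audience))  # {("b", "a")}
```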
In many real-life situations that involve exchanges of arguments, individuals may differ in their assessment of which supports between the arguments are in fact justified, i.e., they put forward different support-relations. When confronted with such situations, we may wish to aggregate the individuals' views on the support-relations into a collective view that is acceptable to the group. In this paper, we work with bipolar argumentation frameworks in which individuals share a set of arguments and a set of attacks between arguments, but may put forward different support-relations. Using methodology from social choice theory, we analyze which semantic properties of bipolar argumentation frameworks are preserved by aggregation rules when support-relations are aggregated.
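As a toy illustration of edge-wise aggregation, a hypothetical majority rule over support-relations might look as follows (the rule, names, and quota are ours, purely for illustration; the paper studies a broader class of aggregation rules):

```python
from collections import Counter

# Sketch: aggregate individual support-relations edge by edge with a majority rule.
# Each element of `profile` is one agent's set of support edges (pairs of arguments).

def majority_support(profile):
    """Keep a support edge iff a strict majority of agents report it."""
    counts = Counter(edge for supports in profile for edge in supports)
    return {edge for edge, n in counts.items() if n > len(profile) / 2}

profile = [{("a", "b"), ("b", "c")}, {("a", "b")}, {("a", "b"), ("c", "a")}]
print(majority_support(profile))  # {("a", "b")}
```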
In the real world, many tasks require multiple agents to cooperate with each other while each agent only has access to local observations. To solve such problems, many multi-agent reinforcement learning methods based on Centralized Training with Decentralized Execution have been proposed. One representative class of work is value decomposition, which decomposes the global joint Q-value $Q_{\text{jt}}$ into individual Q-values $Q_a$ to guide individual behavior, e.g. VDN (Value-Decomposition Networks) and QMIX. However, these baselines often ignore the randomness in the situation. We propose MMD-MIX, a method that combines distributional reinforcement learning and value decomposition to alleviate this weakness. In addition, to improve data sampling efficiency, we draw inspiration from REM (Random Ensemble Mixture), a robust RL algorithm, and explicitly introduce randomness into MMD-MIX. The experiments demonstrate that MMD-MIX outperforms prior baselines in the StarCraft Multi-Agent Challenge (SMAC) environment.
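For reference, the decomposition constraints imposed by the VDN and QMIX baselines mentioned above can be written in standard notation as follows (where $\tau_a$ and $u_a$ denote agent $a$'s action-observation history and action; these equations restate the baselines' constraints and are not specific to MMD-MIX):

\[ Q_{\text{jt}}(\boldsymbol{\tau}, \mathbf{u}) = \sum_{a=1}^{n} Q_a(\tau_a, u_a) \qquad \text{(VDN)} \]

\[ \frac{\partial Q_{\text{jt}}(\boldsymbol{\tau}, \mathbf{u})}{\partial Q_a(\tau_a, u_a)} \ge 0, \quad \forall a \in \{1, \dots, n\} \qquad \text{(QMIX)} \]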
Collaboration requires agents to align their goals on the fly. Underlying the human ability to align goals with other agents is the ability to predict the intentions of others and actively update one's own plans. We propose hierarchical predictive planning (HPP), a model-based reinforcement learning method for decentralized multiagent rendezvous. Starting with pretrained, single-agent point-to-point navigation policies and using noisy, high-dimensional sensor inputs such as lidar, we first learn, via self-supervision, motion prediction models for all agents on the team. Next, HPP uses these prediction models to propose and evaluate navigation subgoals for completing the rendezvous task without explicit communication among agents. We evaluate HPP in a suite of unseen environments with increasing complexity and numbers of obstacles, and show that HPP outperforms alternative reinforcement learning, path planning, and heuristic-based baselines on these challenging, unseen environments. Experiments in the real world demonstrate successful transfer of the prediction models from simulation to the real world without any additional fine-tuning. Altogether, HPP removes the need for a centralized operator in multiagent systems by combining model-based RL and inference methods, enabling agents to dynamically align their plans.
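A rough sketch of the proposal-and-evaluation step described above, under the assumption that each agent scores candidate subgoals by how tightly the learned prediction models place all teammates around the goal (the predictor interface and the scoring rule are illustrative assumptions, not the authors' code):

```python
import numpy as np

# Sketch: an agent proposes candidate rendezvous subgoals, scores each one with
# motion predictions for ALL teammates, and navigates toward the best-scoring goal.
# `predict_position(agent_obs, goal)` stands in for a learned prediction model.

def choose_subgoal(candidates, observations, predict_position):
    best_goal, best_score = None, -np.inf
    for goal in candidates:
        # Predicted next positions of every agent if this goal were adopted.
        predicted = [predict_position(obs, goal) for obs in observations]
        # Tighter predicted convergence on the goal = better rendezvous subgoal.
        score = -np.mean([np.linalg.norm(p - goal) for p in predicted])
        if score > best_score:
            best_goal, best_score = goal, score
    return best_goal

# Toy usage with a stand-in predictor that moves each agent halfway to the goal.
obs = [np.array([0.0, 0.0]), np.array([4.0, 0.0])]
goals = [np.array([2.0, 2.0]), np.array([2.0, 0.0])]
toy_predictor = lambda pos, goal: pos + 0.5 * (goal - pos)
print(choose_subgoal(goals, obs, toy_predictor))  # [2. 0.]
```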
Choosing a suitable multiwinner voting rule is a hard and ambiguous task. Depending on the context, what constitutes an ``optimal'' subset of alternatives varies widely. In this paper, we provide a quantitative analysis of multiwinner voting rules using methods from the theory of approximation algorithms---we estimate how well multiwinner rules approximate two extreme objectives: a representation criterion defined via the Approval Chamberlin--Courant rule and a utilitarian criterion defined via Multiwinner Approval Voting. With both theoretical and experimental methods, we classify multiwinner rules in terms of their quantitative alignment with these two opposing objectives. Our results provide fundamental information about the nature of multiwinner rules and, in particular, about the necessary tradeoffs when choosing such a rule.
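To make the two objectives concrete, the scores being approximated can be computed from approval ballots as follows (standard definitions of the Multiwinner Approval Voting and Approval Chamberlin--Courant scores; the code is only a sketch for illustration):

```python
# Sketch: the two extreme objectives for a committee W under approval ballots.
# ballots: list of sets of approved candidates, one set per voter.

def av_score(ballots, committee):
    """Utilitarian objective: total number of approved committee members."""
    return sum(len(ballot & committee) for ballot in ballots)

def cc_score(ballots, committee):
    """Representation objective: voters with at least one approved committee member."""
    return sum(1 for ballot in ballots if ballot & committee)

ballots = [{"a", "b"}, {"a"}, {"c"}]
committee = {"a", "c"}
print(av_score(ballots, committee), cc_score(ballots, committee))  # 3 3
```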
In this article, we study the problem of air-to-ground ultra-reliable and low-latency communication (URLLC) for a moving ground user. This is done by controlling multiple unmanned aerial vehicles (UAVs) in real time while avoiding inter-UAV collisions. To this end, we propose a novel multi-agent deep reinforcement learning (MADRL) framework, coined graph attention exchange network (GAXNet). In GAXNet, each UAV constructs an attention graph locally, measuring the level of attention to its neighboring UAVs, while exchanging the attention weights with other UAVs so as to reduce the attention mismatch between them. Simulation results corroborate that, compared to a state-of-the-art baseline framework, GAXNet achieves up to 4.5x higher rewards during training and, at execution, 6.5x lower latency at a target error rate of $10^{-7}$, without incurring inter-UAV collisions.
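A rough illustration of the attention-mismatch idea, based on our reading of the abstract rather than on the GAXNet architecture itself (the dot-product scoring and the squared-difference mismatch term are illustrative assumptions):

```python
import numpy as np

# Sketch: each UAV computes softmax attention weights over the other UAVs from
# local features; a mismatch term penalizes disagreement between the weight that
# UAV i assigns to UAV j and the weight that j assigns to i (the exchanged values).

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def attention_matrix(features):
    """features[i]: local feature vector of UAV i. Returns an n x n weight matrix."""
    n = len(features)
    att = np.zeros((n, n))
    for i in range(n):
        scores = np.array([features[i] @ features[j] for j in range(n)])
        scores[i] = -np.inf                 # no self-attention
        att[i] = softmax(scores)
    return att

def attention_mismatch(att):
    """Sum of squared differences between att[i, j] and att[j, i]."""
    return float(np.sum((att - att.T) ** 2))

feats = [np.array([1.0, 0.0]), np.array([0.8, 0.2]), np.array([0.0, 1.0])]
print(attention_mismatch(attention_matrix(feats)))
```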