
Data Driven Validation Framework for Multi-agent Activity-based Models

Added by Jan Drchal
Publication date: 2015
Language: English





Activity-based models, as a specific instance of agent-based models, deal with agents that structure their activity in terms of (daily) activity schedules. An activity schedule consists of a sequence of activity instances, each with an assigned start time, duration and location, together with the transport modes used for travel between subsequent activity locations. A critical step in the development of simulation models is validation. Despite the growing importance of activity-based models in modelling transport and mobility, there has so far been no work focusing specifically on the statistical validation of such models. In this paper, we propose a six-step Validation Framework for Activity-based Models (VALFRAM) that makes it possible to exploit historical real-world data to assess the validity of activity-based models. The framework compares the temporal and spatial properties and the structure of activity schedules against real-world travel diaries and origin-destination matrices. We confirm the usefulness of the framework on three real-world activity-based transport models.
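To make the comparison step concrete, the sketch below shows one way such a temporal check could look: simulated activity start times are tested against start times extracted from real-world travel diaries using a two-sample Kolmogorov-Smirnov test. The data structures, field names and the choice of test are assumptions made here for illustration, not the framework's actual definition.

```python
# Hypothetical sketch of a single temporal validation check in the spirit of
# VALFRAM; the real framework defines six steps with its own statistics.
import numpy as np
from scipy.stats import ks_2samp

def compare_start_times(simulated_schedules, diary_records, activity_type="work"):
    """Two-sample KS test between simulated and observed activity start times (hours)."""
    sim = np.array([a["start_hour"] for schedule in simulated_schedules
                    for a in schedule if a["type"] == activity_type])
    obs = np.array([r["start_hour"] for r in diary_records
                    if r["type"] == activity_type])
    statistic, p_value = ks_2samp(sim, obs)
    # A small statistic (large p-value) indicates that the simulated and observed
    # start-time distributions are statistically close for this activity type.
    return statistic, p_value
```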



Related research

Population-based multi-agent reinforcement learning (PB-MARL) refers to the family of methods that nest reinforcement learning (RL) algorithms within coupled population dynamics, producing a self-generated sequence of training tasks. By leveraging auto-curricula to induce a population of distinct emergent strategies, PB-MARL has achieved impressive success in tackling multi-agent tasks. Despite remarkable prior art in distributed RL frameworks, PB-MARL poses new challenges for parallelizing training due to the additional complexity of the multiple nested workloads between sampling, training and evaluation involved with heterogeneous policy interactions. To solve these problems, we present MALib, a scalable and efficient computing framework for PB-MARL. Our framework is comprised of three key components: (1) a centralized task dispatching model, which supports the self-generated tasks and scalable training with heterogeneous policy combinations; (2) a programming architecture named Actor-Evaluator-Learner, which achieves high parallelism for both training and sampling and meets the evaluation requirements of auto-curriculum learning; (3) a higher-level abstraction of MARL training paradigms, which enables efficient code reuse and flexible deployment on different distributed computing paradigms. Experiments on a series of complex tasks such as multi-agent Atari games show that MALib achieves throughput higher than 40K FPS on a single machine with 32 CPU cores, a 5x speedup over RLlib, and at least a 3x speedup over OpenSpiel in multi-agent training tasks. MALib is publicly available at https://github.com/sjtu-marl/malib.
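The sketch below illustrates the Actor-Evaluator-Learner split and centralized task dispatching described above in plain Python. The class and method names are invented for exposition and are not MALib's actual API; see the linked repository for the real implementation.

```python
# Schematic Actor-Evaluator-Learner loop with a centralized dispatcher; a toy
# stand-in for the distributed architecture described in the abstract.
from dataclasses import dataclass
from typing import List

@dataclass
class Task:
    policy_combination: tuple   # which heterogeneous policies play together
    rollouts: int = 8

class Actor:
    def sample(self, task: Task) -> List[dict]:
        # Roll out the given policy combination and return trajectories.
        return [{"policies": task.policy_combination, "return": 0.0}
                for _ in range(task.rollouts)]

class Evaluator:
    def score(self, trajectories: List[dict]) -> float:
        return sum(t["return"] for t in trajectories) / len(trajectories)

class Learner:
    def update(self, trajectories: List[dict]) -> None:
        pass  # gradient update on the trainable policy would go here

class Dispatcher:
    """Centralized task dispatching: evaluation results spawn new tasks."""
    def __init__(self):
        self.queue: List[Task] = [Task(policy_combination=("pi_0", "pi_0"))]

    def step(self, actor: Actor, evaluator: Evaluator, learner: Learner) -> None:
        task = self.queue.pop(0)
        trajectories = actor.sample(task)
        learner.update(trajectories)
        if evaluator.score(trajectories) >= 0.0:    # placeholder auto-curriculum trigger
            self.queue.append(Task(policy_combination=("pi_0", "pi_new")))
```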
In many scenarios, accurate and effective system identification is a common challenge in the model predictive control (MPC) formulation, and overall system performance can be significantly degraded when the traditional MPC algorithm is adopted without such accuracy. To address this shortcoming, this paper investigates a non-parametric behavior learning method for multi-agent decision making, which underpins an alternative data-driven predictive control framework. Using closed-loop input/output measurements of the unknown system, the behavior of the system is learned from the collected dataset, and the constructed non-parametric predictive model is then used to determine optimal control actions. This framework has the key advantage of alleviating the heavy computational burden of the optimization procedures required by existing methodologies, which rely on open-loop input/output measurement data collection and parametric system identification. With a conservative approximation of the probabilistic chance constraints of the MPC problem, the resulting deterministic optimization problem is formulated and solved effectively. The data-driven approach is also shown to preserve good robustness properties, even in the presence of the parametric uncertainties that naturally arise in the typical system identification process. Finally, a multi-drone system is used to demonstrate the practical appeal and effectiveness of this development.
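As a rough illustration of what a non-parametric predictor learned from input/output data can look like, the sketch below builds block-Hankel matrices from recorded trajectories and predicts future outputs by least squares, in the style of behavioral (Willems-lemma-based) data-driven control. This is an assumption about the general setup rather than the paper's exact formulation, and it omits the chance-constraint handling.

```python
# Minimal behavioral-style predictor from recorded input/output data; an
# illustrative stand-in for a non-parametric predictive model.
import numpy as np

def block_hankel(signal, rows):
    """Block-Hankel matrix with `rows` block rows from a (T, dim) signal."""
    T, dim = signal.shape
    cols = T - rows + 1
    return np.vstack([signal[i:i + cols].T for i in range(rows)])

def predict_outputs(u_data, y_data, u_init, y_init, u_plan):
    """Predict outputs for planned inputs while matching a short initial trajectory."""
    t_init, horizon = len(u_init), len(u_plan)
    Hu = block_hankel(u_data, t_init + horizon)
    Hy = block_hankel(y_data, t_init + horizon)
    Up, Uf = Hu[: t_init * u_data.shape[1]], Hu[t_init * u_data.shape[1]:]
    Yp, Yf = Hy[: t_init * y_data.shape[1]], Hy[t_init * y_data.shape[1]:]
    A = np.vstack([Up, Yp, Uf])
    b = np.concatenate([u_init.ravel(), y_init.ravel(), u_plan.ravel()])
    g, *_ = np.linalg.lstsq(A, b, rcond=None)   # least squares tolerates measurement noise
    return (Yf @ g).reshape(horizon, y_data.shape[1])
```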
Simulating and predicting planetary-scale techno-social systems poses heavy computational and modeling challenges. The DARPA SocialSim program set the challenge of modeling the evolution of GitHub, a large collaborative software-development ecosystem, using massive multi-agent simulations. We describe our best performing models and our agent-based simulation framework, which we are currently extending to allow simulating other planetary-scale techno-social systems. The challenge problem measured participants' ability, given 30 months of metadata on user activity on GitHub, to predict the next month's activity as measured by a broad range of metrics applied to ground truth, using agent-based simulation. The challenge required scaling to a simulation of roughly 3 million agents producing a combined 30 million actions, acting on 6 million repositories, with commodity hardware. It was also important to use the data optimally to predict the agents' next moves. We describe the agent framework and the data analysis employed by one of the winning teams in the challenge. Six different agent models were tested, based on a variety of machine learning and statistical methods. While no single method proved the most accurate on every metric, the most broadly successful models sampled from a stationary probability distribution of actions and repositories for each agent. Two reasons for the success of these agents were their use of a distinct characterization of each agent and the fact that GitHub users change their behavior relatively slowly.
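The sketch below is a hedged illustration of the "stationary distribution" agent described above: each agent samples its next (action, repository) pair from the empirical frequencies in its own history. The field names and data layout are assumptions made for illustration, not the team's actual implementation.

```python
# Toy per-agent stationary-distribution sampler for GitHub-like activity.
import random
from collections import Counter

class StationaryAgent:
    def __init__(self, history):
        # history: list of (action_type, repo_id) events observed for this single user
        counts = Counter(history)
        total = sum(counts.values())
        self.events = list(counts)
        self.weights = [c / total for c in counts.values()]

    def next_event(self):
        """Sample the agent's next move from its empirical event distribution."""
        return random.choices(self.events, weights=self.weights, k=1)[0]
```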
Trajectory interpolation, the process of filling in the gaps and removing noise from observed agent trajectories, is an essential task for motion inference in multi-agent settings. A desirable trajectory interpolation method should be robust to noise and to changes in environments or agent densities, while also yielding realistic group movement behaviors. Such realistic behaviors are, however, challenging to model: they require avoiding agent-agent and agent-environment collisions while remaining computationally efficient. In this paper, we propose a novel framework composed of data-driven priors (local, global or combined) and an efficient optimization strategy for multi-agent trajectory interpolation. The data-driven priors implicitly encode the dependencies between the movements of multiple agents and the collision-avoidance desiderata, enabling the elimination of costly pairwise collision constraints and resulting in reduced computational complexity and often improved estimation. Various combinations of priors and optimization algorithms are evaluated in comprehensive simulated experiments. Our experimental results reveal important insights, including the significance of the global flow prior and the lesser-than-expected influence of data-driven collision priors.
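As a simplified picture of the optimization half of such a framework, the sketch below interpolates a single agent's trajectory by least squares, balancing fidelity to the observed points against a finite-difference smoothness penalty. The smoothness term is a hand-crafted stand-in for the learned data-driven priors and collision terms described above.

```python
# Single-agent trajectory interpolation: soft data terms plus an acceleration
# (second-difference) penalty acting as a simple smoothness prior.
import numpy as np

def interpolate_trajectory(T, observed, lam=10.0):
    """observed: dict {time_index: (x, y)}; returns a (T, 2) smoothed trajectory."""
    D2 = np.zeros((T - 2, T))
    for i in range(T - 2):                    # second-difference operator
        D2[i, i:i + 3] = [1.0, -2.0, 1.0]
    times = sorted(observed)
    S = np.zeros((len(times), T))             # selects the observed time steps
    S[np.arange(len(times)), times] = 1.0
    A = np.vstack([S, np.sqrt(lam) * D2])
    traj = np.zeros((T, 2))
    for d in range(2):                        # solve independently per coordinate
        b = np.concatenate([[observed[t][d] for t in times], np.zeros(T - 2)])
        traj[:, d], *_ = np.linalg.lstsq(A, b, rcond=None)
    return traj
```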
Collective or group intelligence is manifested in the fact that a team of cooperating agents can solve problems more efficiently than those agents working in isolation. Although cooperation is, in general, a successful problem-solving strategy, it is not clear whether it merely speeds up the time to find the solution or whether it qualitatively alters the statistical signature of the search for the solution. Here we review and offer insights on two agent-based models of distributed cooperative problem-solving systems whose task is to solve a cryptarithmetic puzzle. The first model is the imitative learning search, in which the agents exchange information on the quality of their partial solutions to the puzzle and imitate the most successful agent in the group. This scenario predicts very poor performance when imitation is too frequent or the group is too large, a phenomenon akin to the groupthink of social psychology. The second model is the blackboard organization, in which agents read and post hints on a public blackboard. This brainstorming scenario performs best when there is a stringent limit on the amount of information that can be displayed on the board. Both cooperative scenarios produce a substantial speed-up of the time to solve the puzzle compared with the situation where the agents work in isolation. The statistical signature of the search, however, is the same as that of the independent search.
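The sketch below gives a generic version of the imitative learning loop described above: at each step, every agent either copies the current best candidate (with probability p) or makes an independent random move. The initialization, mutation and cost functions for the cryptarithmetic puzzle are left as assumed callables and are not taken from the paper.

```python
# Generic imitative learning search over candidate solutions.
import copy
import random

def imitative_search(n_agents, p_imitate, init, mutate, cost, max_steps=100_000):
    """Run the cooperative search; returns (best_candidate, steps_used)."""
    agents = [init() for _ in range(n_agents)]
    for step in range(max_steps):
        best = min(agents, key=cost)
        if cost(best) == 0:                       # puzzle solved
            return best, step
        for i, agent in enumerate(agents):
            if agent is not best and random.random() < p_imitate:
                agents[i] = copy.deepcopy(best)   # imitate the most successful agent
            else:
                agents[i] = mutate(agent)         # independent trial-and-error move
    return min(agents, key=cost), max_steps
```

Note that making p_imitate too large in this toy loop collapses the population onto the current best candidate, which mirrors the groupthink-like degradation the abstract describes for overly frequent imitation.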