
Quantitative Characterization of Randomly Roving Agents

Publication date: 2013
Language: English





Quantitative characterization of randomly roving agents in the Agent Based Intrusion Detection Environment (ABIDE) is studied. Formula simplifications with respect to known results and publications are given. The Extended Agent Based Intrusion Detection Environment (EABIDE) is introduced, and the quantitative characterization of roving agents in EABIDE is studied.



Related research

Hakob Aslanyan, Jose Rolim (2011)
Quantitative characterization of randomly roving agents in wireless sensor networks (WSN) is studied. Through formula simplifications based on known results and publications, it is shown that the basic agent model is probabilistically equivalent to a similar, simpler model, and a formula for the frequencies is then obtained in terms of Stirling numbers of the second kind. Stirling numbers are well studied and various estimates are known for them, which makes it possible to justify the quantitative characteristics of roving agents.
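
The frequency formula mentioned above is expressed in terms of Stirling numbers of the second kind, $S(n,k)$, which count the partitions of $n$ labelled items into $k$ non-empty blocks. As a minimal illustrative sketch (not code from the paper), they can be evaluated with the standard recurrence $S(n,k) = k\,S(n-1,k) + S(n-1,k-1)$:

```python
def stirling2(n: int, k: int) -> int:
    """Number of ways to partition n labelled items into k non-empty blocks."""
    # Base cases: S(0, 0) = 1; S(n, 0) = S(0, k) = 0 for n, k > 0
    if n == 0 and k == 0:
        return 1
    if n == 0 or k == 0:
        return 0
    # Recurrence: S(n, k) = k * S(n-1, k) + S(n-1, k-1)
    table = [[0] * (k + 1) for _ in range(n + 1)]
    table[0][0] = 1
    for i in range(1, n + 1):
        for j in range(1, k + 1):
            table[i][j] = j * table[i - 1][j] + table[i - 1][j - 1]
    return table[n][k]

if __name__ == "__main__":
    # e.g. S(5, 2) = 15: ways to split 5 sensor nodes into 2 non-empty groups
    print(stirling2(5, 2))
```
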
Collision-free or contact-free routing through connected networks has been actively studied in the industrial automation and manufacturing context. Contact-free routing of personnel through connected networks (e.g., factories, retail warehouses) may also be required in the COVID-19 context. In this context, we present an optimization framework for identifying routes through a connected network that eliminate or minimize contacts between randomly arriving agents needing to visit a subset of nodes in the network in minimal time. We simulate the agent arrival and network traversal process, and introduce stochasticity in travel speeds, node dwell times, and compliance with assigned routes. We present two optimization formulations, no-contact and minimal-contact, for generating optimal routes in real time for each agent arriving at the network, given the route information of other agents already in the network. We generate results for the time-averaged number of contacts and the normalized time spent in the network.
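
As a rough illustration of the routing idea (not the authors' optimization formulation), a time-expanded breadth-first search can route a newly arriving agent around (node, time) slots already reserved by agents in the network. The graph, the reservation set, and the unit travel times below are illustrative assumptions:

```python
from collections import deque

def contact_free_route(graph, start, goal, reserved, max_time=50):
    """graph: dict node -> list of neighbours; reserved: set of (node, time) slots."""
    queue = deque([(start, 0, [start])])
    seen = {(start, 0)}
    while queue:
        node, t, path = queue.popleft()
        if node == goal:
            return path
        if t >= max_time:
            continue
        # Waiting in place and moving to a neighbour both take one time step
        for nxt in [node] + graph[node]:
            state = (nxt, t + 1)
            if state not in reserved and state not in seen:
                seen.add(state)
                queue.append((nxt, t + 1, path + [nxt]))
    return None  # no contact-free route within the time horizon

if __name__ == "__main__":
    g = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
    reserved = {(1, 1), (2, 2)}  # slots held by an agent already in the network
    print(contact_free_route(g, 0, 3, reserved))
```
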
In this paper, we design a greedy routing scheme on networks of mobile agents. In the greedy routing algorithm, at every time step a packet held by agent $i$ is delivered to the agent $j$ whose distance from the destination is shortest among the searched neighbors of agent $i$. Based on this greedy routing, we study the traffic dynamics and traffic-driven epidemic spreading on networks of mobile agents. We find that the transportation capacity of the networks and the epidemic threshold increase as the communication radius increases. For moderate moving speed, the transportation capacity of the networks is highest while the epidemic threshold maintains a large value. These results can help control traffic congestion and epidemic spreading on mobile networks.
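
The forwarding rule can be sketched as follows, under illustrative assumptions rather than the authors' simulation setup: agents move randomly in a unit square, two agents are neighbours if they lie within a fixed communication radius, and distance to the destination is plain Euclidean distance:

```python
import math
import random

def step_positions(pos, speed):
    """Move every agent one step in a random direction, clamped to the unit square."""
    out = []
    for x, y in pos:
        a = random.uniform(0, 2 * math.pi)
        out.append((min(1.0, max(0.0, x + speed * math.cos(a))),
                    min(1.0, max(0.0, y + speed * math.sin(a)))))
    return out

def greedy_forward(pos, holder, dest, radius):
    """Return the in-range neighbour of `holder` closest to `dest`, or `holder` if none is closer."""
    dist = lambda i, j: math.hypot(pos[i][0] - pos[j][0], pos[i][1] - pos[j][1])
    best, best_d = holder, dist(holder, dest)
    for j in range(len(pos)):
        if j != holder and dist(holder, j) <= radius and dist(j, dest) < best_d:
            best, best_d = j, dist(j, dest)
    return best

if __name__ == "__main__":
    random.seed(1)
    pos = [(random.random(), random.random()) for _ in range(50)]
    holder, dest = 0, 49
    for t in range(100):
        holder = greedy_forward(pos, holder, dest, radius=0.2)
        if holder == dest:
            print(f"delivered at step {t}")
            break
        pos = step_positions(pos, speed=0.05)
    else:
        print("not delivered within 100 steps")
```
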
This article deals with localization probability in a network of randomly distributed communication nodes contained in a bounded domain. A fraction of the nodes, denoted L-nodes, are assumed to have localization information, while the rest, denoted NL-nodes, do not. The basic model assumes each node has a certain radio coverage within which it can make relative distance measurements. We model both the case where the radio coverage is fixed and the case where it is determined by signal-strength measurements in a log-normal shadowing environment. We apply the probabilistic method to determine the probability of NL-node localization as a function of the coverage-area-to-domain-area ratio and the density of L-nodes. We establish analytical expressions for this probability and for the transition thresholds with respect to key parameters at which a marked change in the probability behavior is observed. The theoretical results presented in the article are supported by simulations.
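
A simple Monte Carlo sketch, not the article's analytical derivation, can estimate this probability under illustrative assumptions: a unit-square domain, fixed disc coverage of radius `r`, and a localization criterion of at least three L-nodes in range (as in trilateration):

```python
import math
import random

def localization_probability(n_l, r, trials=10000):
    """Estimate P(an NL-node has >= 3 L-nodes within radius r) in a unit square."""
    hits = 0
    for _ in range(trials):
        l_nodes = [(random.random(), random.random()) for _ in range(n_l)]
        x, y = random.random(), random.random()  # the NL-node under test
        in_range = sum(1 for lx, ly in l_nodes if math.hypot(lx - x, ly - y) <= r)
        if in_range >= 3:
            hits += 1
    return hits / trials

if __name__ == "__main__":
    for r in (0.05, 0.1, 0.2, 0.3):
        print(r, localization_probability(n_l=50, r=r))
```
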
By studying the underlying policies of decision-making agents, we can learn about their shortcomings and potentially improve them. Traditionally, this has been done either by examining the agent's implementation, its behaviour while it is being executed, its performance with a reward/fitness function, or by visualizing the density of states the agent visits. However, these methods fail to describe the policy's behaviour in complex, high-dimensional environments or do not scale to thousands of policies, which is required when studying training algorithms. We propose policy supervectors for characterizing agents by the distribution of states they visit, adopting successful techniques from the area of speech technology. Policy supervectors can characterize policies regardless of their design philosophy (e.g., rule-based vs. neural networks) and scale to thousands of policies on a single workstation machine. We demonstrate the method's applicability by studying the evolution of policies during reinforcement learning, evolutionary training, and imitation learning, providing insight into, e.g., how the search space of evolutionary algorithms is reflected in the agents' behaviour, not just in their parameters.
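
A minimal sketch of the supervector idea, under stated assumptions rather than the paper's implementation: pool the states visited by all policies, fit a shared Gaussian mixture as a background model, then describe each policy by the concatenated posterior-weighted component means of its own visited states:

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def policy_supervector(gmm: GaussianMixture, states: np.ndarray) -> np.ndarray:
    """Concatenate per-component posterior-weighted means of one policy's visited states."""
    resp = gmm.predict_proba(states)              # (n_states, n_components)
    weights = resp.sum(axis=0) + 1e-8             # soft counts per component
    means = (resp.T @ states) / weights[:, None]  # posterior-weighted means
    return means.ravel()

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Two toy "policies" visiting 2-D states drawn from slightly different regions
    policy_a = rng.normal(0.0, 1.0, size=(500, 2))
    policy_b = rng.normal(0.5, 1.0, size=(500, 2))
    gmm = GaussianMixture(n_components=4, random_state=0).fit(np.vstack([policy_a, policy_b]))
    sv_a = policy_supervector(gmm, policy_a)
    sv_b = policy_supervector(gmm, policy_b)
    print(np.linalg.norm(sv_a - sv_b))            # distance between the two policies
```
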