
Scalability and Optimisation of a Committee of Agents Using Genetic Algorithm

Posted by Tshilidzi Marwala
Publication date: 2007
Research field: Informatics Engineering
Paper language: English





A population of committees of agents that learn using neural networks is implemented to simulate the stock market. Each committee of agents, regarded as a player in a game, is optimised by continually adapting the architecture of its agents using genetic algorithms. The committees of agents buy and sell stocks by following this procedure: (1) obtain the current price of stocks; (2) predict the future price of stocks; and (3) for a given price, trade until all the players are mutually satisfied. The trading of stocks is conducted by following these rules: (1) if a player expects an increase in price, it tries to buy the stock; (2) if it expects a drop in price, it sells the stock; and (3) the order in which a player participates in the game is random. The proposed procedure is implemented to simulate trading of three stocks, namely the Dow Jones, the Nasdaq and the S&P 500. The computational time to run the complete simulation is observed to grow linearly with the number of players and agents. It is also found that no player has a monopolistic advantage.
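
The game loop above is easy to prototype. Below is a minimal sketch, not the paper's implementation: the Player class stands in for a GA-evolved committee of neural-network agents, its predict method is a toy trend extrapolation rather than a trained network, and the demand-driven price update is an assumption added to close the loop.

```python
import random

class Player:
    """A player: in the paper, a committee of neural-network agents whose
    architecture is evolved by a genetic algorithm; here, a toy predictor."""
    def __init__(self):
        self.weight = random.uniform(-1.0, 1.0)  # stand-in for evolved weights
        self.holdings = 0

    def predict(self, prices):
        # Toy prediction: extrapolate the last price change, scaled by weight.
        return prices[-1] + self.weight * (prices[-1] - prices[-2])

def trading_round(players, prices):
    current = prices[-1]
    random.shuffle(players)              # rule 3: random participation order
    for p in players:
        expected = p.predict(prices)     # step 2: predict the future price
        if expected > current:           # rule 1: expect a rise -> buy
            p.holdings += 1
        elif expected < current:         # rule 2: expect a drop -> sell
            p.holdings -= 1

if __name__ == "__main__":
    random.seed(0)
    players = [Player() for _ in range(10)]
    prices = [100.0, 101.0]
    for _ in range(50):
        trading_round(players, prices)
        # Assumed price update: net demand nudges the price.
        net = sum(1 for p in players if p.holdings > 0) \
            - sum(1 for p in players if p.holdings < 0)
        prices.append(prices[-1] + 0.1 * net)
    print("final price:", round(prices[-1], 2))
```

The linear scaling reported in the abstract is plausible from this structure: each round touches every player exactly once, so the cost per time step grows with the number of players and agents per committee.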


Read also

Electricity market modelling is often used by governments, industry and agencies to explore the development of scenarios over differing timeframes. For example, how would the reduction in cost of renewable energy impact investments in gas power plants, or what would be an optimum strategy for carbon tax or subsidies? Cost-optimization-based solutions are the dominant approach for understanding different long-term energy scenarios. However, these types of models have certain limitations, such as the need to be interpreted in a normative manner and the assumption that the electricity market remains in equilibrium throughout. Through this work, we show that agent-based models are a viable technique to simulate decentralised electricity markets. The aim of this paper is to validate an agent-based modelling framework to increase confidence in its ability to be used in policy and decision making. Our framework can model heterogeneous agents with imperfect information. The model uses a rules-based approach to approximate the underlying dynamics of a real-world, decentralised electricity market. We use the UK as a case study; however, our framework is generalisable to other countries. We increase the temporal granularity of the model by selecting representative days of electricity demand and weather using a $k$-means clustering approach. We show that our framework can model the transition from coal to gas observed in the UK between 2013 and 2018. We are also able to simulate a future scenario to 2035 which is similar to the UK Government, Department for Business, Energy and Industrial Strategy (BEIS) projections. We show a more realistic increase in nuclear power over this time period. This is due to the fact that with current nuclear technology, electricity is generated almost instantaneously and has a low short-run marginal cost \cite{Department2016}.
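
The representative-day step can be sketched with off-the-shelf clustering. A minimal illustration assuming scikit-learn; the synthetic day matrix, the choice of k = 8, and the nearest-to-centroid selection rule are assumptions for demonstration, not details from the paper.

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(42)
# Hypothetical input: one row per day; columns could be 24 hourly demand
# values plus 24 hourly weather values (real data in the actual model).
days = rng.normal(size=(365, 48))

k = 8  # assumed number of representative days
km = KMeans(n_clusters=k, n_init=10, random_state=0).fit(days)

# For each cluster, keep the real day closest to the centroid and weight it
# by the fraction of the year the cluster covers.
representatives, weights = [], []
for c in range(k):
    members = np.where(km.labels_ == c)[0]
    dists = np.linalg.norm(days[members] - km.cluster_centers_[c], axis=1)
    representatives.append(int(members[dists.argmin()]))
    weights.append(len(members) / len(days))

print(list(zip(representatives, [round(w, 3) for w in weights])))
```

Weighting each representative day by its cluster size preserves annual totals while cutting the number of simulated days from 365 to k.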
We have conceived and implemented a multi-objective genetic algorithm (GA) code for the optimisation of an array of Imaging Atmospheric Cherenkov Telescopes (IACTs). The algorithm takes as input a series of cost functions (metrics), each describing a different objective of the optimisation (such as effective area, angular resolution, etc.), all of which are expressed in terms of the relative positions of the telescopes in the plane. The output of the algorithm is a family of geometrical arrangements which correspond to the complete set of solutions to the array optimisation problem, and which differ from each other according to the relative weight given to each of the (possibly conflicting) objectives of the optimisation. Since the algorithm works with parallel optimisation, it admits as many cost functions as desired, and it can incorporate constraints such as a budget (cost cap) for the array and topological limitations of the terrain, such as geographical features where telescopes cannot be installed. It also admits different types of telescopes (hybrid arrays), and the number of telescopes of each type can be treated as a parameter to be optimised, constrained, for example, by the cost of each type or the energy range of interest. The purpose of the algorithm, which converges quickly to optimised solutions (compared to the time for a complete Monte Carlo simulation of a single configuration), is to provide a tool to investigate the full parameter space of possible geometries and to help in designing complex arrays. It does not substitute for a detailed Monte Carlo study, but aims to guide it. In the example arrays shown here we have used as metrics simple heuristic expressions describing the fundamentals of the IACT technique, but these input functions can be made as detailed or complex as desired for a given experiment.
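
One simple way to realise the family-of-solutions idea is a weighted-sum GA run once per weight vector (a plainer technique than a full Pareto-front method). The sketch below is illustrative only: the two heuristic metrics (mean pairwise distance as an effective-area proxy, negative maximum baseline as a conflicting stand-in), the site half-size FIELD, and the GA settings are all assumptions, not the authors' metrics or code.

```python
import math
import random

N_TEL, GENS, POP = 9, 100, 40
FIELD = 500.0  # assumed half-size of the square site, in metres

def spread(layout):
    """Heuristic effective-area proxy: mean pairwise telescope distance."""
    d = [math.dist(a, b) for i, a in enumerate(layout) for b in layout[i + 1:]]
    return sum(d) / len(d)

def compactness(layout):
    """Conflicting objective: penalise the longest baseline."""
    return -max(math.dist(a, b) for i, a in enumerate(layout) for b in layout[i + 1:])

def fitness(layout, w):
    # Weighted sum of conflicting metrics; sweeping w recovers the family
    # of arrangements the abstract describes.
    return w * spread(layout) + (1 - w) * compactness(layout)

def mutate(layout):
    moved = [(x + random.gauss(0, 20), y + random.gauss(0, 20)) for x, y in layout]
    # Clamp to the site: a crude stand-in for terrain constraints.
    return [(max(-FIELD, min(FIELD, x)), max(-FIELD, min(FIELD, y))) for x, y in moved]

def evolve(w):
    pop = [[(random.uniform(-FIELD, FIELD), random.uniform(-FIELD, FIELD))
            for _ in range(N_TEL)] for _ in range(POP)]
    for _ in range(GENS):
        pop.sort(key=lambda l: fitness(l, w), reverse=True)
        elite = pop[:POP // 2]
        pop = elite + [mutate(random.choice(elite)) for _ in range(POP - len(elite))]
    return max(pop, key=lambda l: fitness(l, w))

random.seed(0)
for w in (0.2, 0.5, 0.8):  # relative weight between the two objectives
    best = evolve(w)
    print(f"w={w}: spread={spread(best):.0f} m, "
          f"longest baseline={-compactness(best):.0f} m")
```

A true multi-objective GA would maintain a Pareto front in a single run rather than sweeping scalar weights, which is closer to the parallel optimisation the abstract describes.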
In recent years, graph neural networks (GNNs) have gained increasing attention, as they possess an excellent capability for processing graph-related problems. In practice, hyperparameter optimisation (HPO) is critical for GNNs to achieve satisfactory results, but this process is costly because evaluating different hyperparameter settings requires excessively training many GNNs. Many approaches have been proposed for HPO, which aim to identify promising hyperparameters efficiently. In particular, the genetic algorithm (GA) has been explored for HPO; it treats the GNN as a black-box model, of which only the outputs can be observed given a set of hyperparameters. However, because GNN models are sophisticated and evaluating hyperparameters on GNNs is expensive, GA requires advanced techniques to balance the exploration and exploitation of the search and to make the optimisation effective given limited computational resources. Therefore, we propose a tree-structured mutation strategy for GA to alleviate this issue. We also review recent HPO works, in which the tree-structure idea has room to develop, and we hope our approach can further improve these HPO methods in the future.
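
For orientation, a skeleton of GA-based HPO over a black-box model is shown below. The hyperparameter space SPACE and the evaluate stand-in are hypothetical, and mutate is a deliberately flat baseline: the paper's tree-structured mutation strategy, whose details are not reproduced here, would replace exactly this function.

```python
import random

SPACE = {  # hypothetical GNN hyperparameter space
    "lr": [1e-4, 1e-3, 1e-2],
    "hidden": [32, 64, 128, 256],
    "layers": [2, 3, 4],
    "dropout": [0.0, 0.3, 0.5],
}

def evaluate(cfg):
    """Black-box stand-in for training a GNN and returning a validation score.
    In reality this is the expensive step the GA must call sparingly."""
    return 1.0 / (1.0 + abs(cfg["hidden"] - 128) / 128 + cfg["dropout"])

def mutate(cfg):
    # Flat baseline: resample one gene uniformly. A tree-structured strategy
    # would bias this choice using the structure of past evaluations.
    key = random.choice(list(SPACE))
    return {**cfg, key: random.choice(SPACE[key])}

def ga_hpo(pop_size=8, gens=10):
    pop = [{k: random.choice(v) for k, v in SPACE.items()} for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=evaluate, reverse=True)  # rank by black-box fitness
        elite = pop[:pop_size // 2]
        pop = elite + [mutate(random.choice(elite)) for _ in elite]
    return max(pop, key=evaluate)

random.seed(0)
print(ga_hpo())
```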
Graph representation of structured data can facilitate the extraction of stereoscopic features, and it has demonstrated excellent ability when working with deep learning systems, the so-called Graph Neural Networks (GNNs). Choosing a promising architecture for constructing GNNs can be cast as a hyperparameter optimisation problem, a very challenging task due to the size of the underlying search space and the high computational cost of evaluating candidate GNNs. To address this issue, this research presents a novel genetic algorithm with a hierarchical evaluation strategy (HESGA), which combines the full evaluation of GNNs with a fast evaluation approach. In full evaluation, a GNN is represented by a set of hyperparameter values and trained on a specified dataset, and the root mean square error (RMSE) is used to measure the quality of the GNN represented by that set of hyperparameter values (for regression problems). In the proposed fast evaluation process, training is interrupted at an early stage, and the difference in RMSE values between the starting and interrupted epochs is used as a fast score, which indicates the potential of the GNN being considered. To coordinate both types of evaluations, the proposed hierarchical strategy uses the fast evaluation at a lower level to recommend candidates to a higher level, where the full evaluation acts as a final assessor to maintain a group of elite individuals. To validate the effectiveness of HESGA, we apply it to optimise two types of deep graph neural networks. The experimental results on three benchmark datasets demonstrate its advantages compared to Bayesian hyperparameter optimisation.
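
The two evaluation levels translate directly into code. A minimal sketch under stated assumptions: only the two scoring rules come from the abstract (RMSE drop over the first epochs as the fast score, final RMSE as the full score); the train stand-in and the recommendation/elite cut-offs are invented for illustration.

```python
import random

def fast_score(rmse_curve, early=5):
    """HESGA fast evaluation: RMSE drop between the starting epoch and the
    early interruption point; a larger drop suggests a more promising GNN."""
    return rmse_curve[0] - rmse_curve[min(early, len(rmse_curve) - 1)]

def full_score(rmse_curve):
    """Full evaluation: final RMSE after complete training (lower is better)."""
    return rmse_curve[-1]

def train(cfg, epochs=50):
    # Toy stand-in for training a GNN with hyperparameters cfg and logging
    # RMSE per epoch; real HESGA would train an actual model here.
    rate = max(0.02 + 0.03 * cfg["lr_idx"] - 0.005 * cfg["depth"], 0.005)
    return [2.0 * (1.0 - rate) ** e for e in range(epochs + 1)]

random.seed(1)
candidates = [{"lr_idx": random.randrange(4), "depth": random.randrange(1, 5)}
              for _ in range(20)]

# Lower level: cheap ranking of all candidates by the fast score...
ranked = sorted(candidates, key=lambda c: fast_score(train(c, epochs=5)),
                reverse=True)
# ...then the top few (cut-off assumed) are recommended to the higher level,
# where full evaluation maintains the elite group.
elite = sorted(ranked[:5], key=lambda c: full_score(train(c)))[:2]
print("elite:", elite)
```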
This paper discusses the scalability of standard genetic programming (GP) and probabilistic incremental program evolution (PIPE). To investigate the need for both effective mixing and linkage learning, two test problems are considered: the ORDER problem, which is rather easy for any recombination-based GP, and TRAP, the deceptive trap problem, which requires the algorithm to learn interactions among subsets of terminals. The scalability results show that both GP and PIPE scale up polynomially with problem size on the simple ORDER problem, but both scale up exponentially on the deceptive problem. This indicates that while standard recombination is sufficient when no interactions need to be considered, for some problems linkage learning is necessary. These results are in agreement with the lessons learned in the domain of binary-string genetic algorithms (GAs). Furthermore, the paper investigates the effects of introducing unnecessary and irrelevant primitives on the performance of GP and PIPE.
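
For reference, here is the classic deceptive k-bit trap from the binary-string GA literature that these lessons come from (the paper's TRAP is its GP analogue over terminals). Within each block, fitness rewards clearing bits unless the block is complete, so an algorithm that cannot learn the within-block linkage is pulled away from the global optimum.

```python
def trap(bits, k=5):
    """Deceptive k-bit trap: a block scores k only when all its bits are set;
    otherwise the score grows as bits are cleared, which misleads search."""
    total = 0
    for i in range(0, len(bits), k):
        u = sum(bits[i:i + k])
        total += k if u == k else k - 1 - u
    return total

print(trap([1, 1, 1, 1, 1] * 4))  # global optimum: 20
print(trap([0, 0, 0, 0, 0] * 4))  # deceptive attractor: 16
```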