This paper presents a computational evolutionary game model to study and understand fraud dynamics in the consumption tax system. Players are cooperators if they correctly declare their value added tax (VAT), and are defectors otherwise. Each player's payoff is influenced by the amount evaded and the subjective probability of being inspected by tax authorities. Since transactions between companies must be declared by both the buyer and the seller, the strategy adopted by one influences the other's payoff. We study the model with a well-mixed population and with different scale-free networks. Model parameters were calibrated using real-world data on VAT declarations by businesses registered in the Canary Islands region of Spain. We analyzed several scenarios of audit probabilities for high- and low-value transactions and their prevalence in the population, as well as social rewards and penalties, to identify the most efficient policy for increasing the proportion of cooperators. Two major insights were found. First, increasing the subjective audit probability for low-value transactions is more efficient than increasing this probability for high-value transactions. Second, favoring social rewards for cooperators or alternative penalties for defectors can be effective policies, but their success depends on the distribution of the audit probability for low- and high-value transactions.
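A minimal sketch of the payoff comparison underlying such a model is given below, assuming illustrative parameter names (tax_due, evaded, audit_prob, fine_rate) rather than the paper's exact formulation.

```python
# Minimal sketch of the expected-payoff comparison between a cooperator
# (honest VAT declaration) and a defector (partial evasion). All parameter
# names and the fine structure are illustrative assumptions.

def expected_payoff(declares_honestly: bool, tax_due: float, evaded: float,
                    audit_prob: float, fine_rate: float) -> float:
    """Expected payoff of a single VAT declaration."""
    if declares_honestly:
        return -tax_due                      # cooperator pays the full tax
    # Defector keeps the evaded amount unless audited, in which case the
    # evaded tax plus a proportional fine is collected.
    expected_sanction = audit_prob * evaded * (1.0 + fine_rate)
    return -(tax_due - evaded) - expected_sanction


if __name__ == "__main__":
    # With a low subjective audit probability, evasion looks profitable...
    print(expected_payoff(False, tax_due=100, evaded=60, audit_prob=0.05, fine_rate=1.0))
    # ...while a sufficiently high probability makes cooperation the better reply.
    print(expected_payoff(False, tax_due=100, evaded=60, audit_prob=0.60, fine_rate=1.0))
    print(expected_payoff(True, tax_due=100, evaded=60, audit_prob=0.60, fine_rate=1.0))
```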
In January 2019, DeepMind revealed AlphaStar to the world: the first artificial intelligence (AI) system to beat a professional player at the game of StarCraft II, representing a milestone in the progress of AI. AlphaStar draws on many areas of AI research, including deep learning, reinforcement learning, game theory, and evolutionary computation (EC). In this paper we analyze AlphaStar primarily through the lens of EC, presenting a new look at the system and relating it to many concepts in the field. We highlight some of its most interesting aspects: the use of Lamarckian evolution, competitive co-evolution, and quality diversity. In doing so, we hope to provide a bridge between the wider EC community and one of the most significant AI systems developed in recent times.
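As a toy illustration of the competitive co-evolution idea mentioned above, the sketch below evolves a population whose fitness is defined by match results against the rest of the population rather than by a fixed objective; it is an illustrative analogue of league-style training, not AlphaStar's actual algorithm.

```python
# Toy competitive co-evolution: each individual's fitness is its number of
# wins against the current population, so the evaluation landscape changes
# every generation. Purely illustrative, not AlphaStar's league.
import random


def play(a: float, b: float) -> int:
    """Toy match: the individual with the larger value wins, up to noise."""
    return 1 if a + random.gauss(0, 0.1) > b else 0


def coevolve(pop_size: int = 10, generations: int = 20):
    pop = [random.random() for _ in range(pop_size)]
    for _ in range(generations):
        # fitness = wins against every other member of the population
        fitness = [sum(play(x, y) for y in pop if y is not x) for x in pop]
        # keep the better half, refill with mutated copies of the survivors
        ranked = [x for _, x in sorted(zip(fitness, pop), reverse=True)]
        parents = ranked[: pop_size // 2]
        pop = parents + [p + random.gauss(0, 0.05) for p in parents]
    return pop


if __name__ == "__main__":
    print(max(coevolve()))
```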
This paper tackles the short-term hydro-power unit commitment problem in a multi-reservoir system, a cascade-based operation scenario. For this, we propose a new mathematical model whose goal is to maximize the total energy production of the hydro-power plant in a sub-daily operation and, simultaneously, to maximize the total water content (volume) of the reservoirs. To solve the problem, we employ the Multi-objective Evolutionary Swarm Hybridization (MESH) algorithm, a recently proposed multi-objective swarm intelligence-based optimization method that has obtained highly competitive results compared to existing evolutionary algorithms in specific applications. The MESH approach has been applied to find the optimal water discharge and the power produced at the maximum reservoir volume for all possible combinations of turbines in a hydro-power plant. The performance of MESH has been compared with that of well-known evolutionary approaches such as NSGA-II, NSGA-III, SPEA2, and MOEA/D on a realistic problem using data from a hydro-power energy system with two cascaded hydro-power plants in Brazil. Results indicate that MESH outperformed the alternative multi-objective approaches in terms of efficiency and accuracy, providing a profit of $412,500 per month in a projection analysis.
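Ranking candidate schedules in this bi-objective setting comes down to Pareto dominance over the pair (energy production, stored volume). The sketch below shows that comparison with placeholder objective values; it is not the paper's hydraulic model.

```python
# Minimal sketch of the bi-objective ranking used by multi-objective methods
# such as MESH or NSGA-II: keep only the non-dominated (energy, volume)
# trade-offs. Candidate values below are placeholders.
from typing import List, Tuple


def dominates(a: Tuple[float, float], b: Tuple[float, float]) -> bool:
    """True if schedule a is at least as good as b in both maximized
    objectives and strictly better in at least one."""
    return all(x >= y for x, y in zip(a, b)) and any(x > y for x, y in zip(a, b))


def pareto_front(points: List[Tuple[float, float]]) -> List[Tuple[float, float]]:
    """Filter out every candidate that some other candidate dominates."""
    return [p for p in points if not any(dominates(q, p) for q in points)]


if __name__ == "__main__":
    candidates = [(120.0, 0.80), (110.0, 0.95), (100.0, 0.70), (125.0, 0.60)]
    print(pareto_front(candidates))  # the surviving non-dominated trade-offs
```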
Multivariate time series (MTS) prediction plays a key role in many fields such as finance, energy, and transport, where each individual time series corresponds to the data collected from a certain data source, a so-called channel. A typical pipeline for building an MTS prediction model (PM) consists of selecting a subset of channels among all available ones, extracting features from the selected channels, and building a PM based on the extracted features, where each component involves certain optimization tasks, i.e., selection of channels, feature extraction (FE) methods, and PMs, as well as configuration of the selected FE method and PM. Accordingly, pursuing the best prediction performance corresponds to optimizing the pipeline by solving all of its optimization problems. This is a non-trivial task due to the vastness of the solution space. Unlike most existing works, which target the optimization of certain components of the pipeline, we propose a novel evolutionary ensemble learning framework to optimize the entire pipeline in a holistic manner. In this framework, a specific pipeline is encoded as a candidate solution and a multi-objective evolutionary algorithm is applied under different population sizes to produce multiple Pareto optimal sets (POSs). Finally, selective ensemble learning is designed to choose the optimal subset of solutions from the POSs and combine them to yield the final prediction, using greedy sequential selection and the least squares method. We implement the proposed framework and evaluate our implementation on two real-world applications, i.e., electricity consumption prediction and air quality prediction. The performance comparison with state-of-the-art techniques demonstrates the superiority of the proposed approach.
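The final combination step can be pictured as a least-squares fit of ensemble weights on a validation set, as in the sketch below; the variable names are illustrative, and the greedy selection of members from the POSs is assumed to have already taken place.

```python
# Minimal sketch of the least-squares combination step: given the validation
# predictions of the selected ensemble members, fit combination weights by
# ordinary least squares. Illustrative only; not the paper's full pipeline.
import numpy as np


def least_squares_weights(member_preds: np.ndarray, target: np.ndarray) -> np.ndarray:
    """member_preds: (n_samples, n_members); target: (n_samples,)."""
    weights, *_ = np.linalg.lstsq(member_preds, target, rcond=None)
    return weights


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    y = rng.normal(size=200)                    # stand-in for the true series
    # three synthetic ensemble members with different noise levels
    preds = np.column_stack([y + rng.normal(scale=s, size=200) for s in (0.1, 0.3, 0.5)])
    w = least_squares_weights(preds, y)
    print(w, np.mean((preds @ w - y) ** 2))     # weights and ensemble MSE
```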
The theory of evolutionary computation for discrete search spaces has made significant progress in the last ten years. This survey summarizes some of the most important recent results in this research area. It discusses fine-grained models of runtime analysis of evolutionary algorithms, highlights recent theoretical insights on parameter tuning and parameter control, and summarizes the latest advances for stochastic and dynamic problems. We consider how evolutionary algorithms optimize submodular functions and give an overview of the large body of recent results on estimation-of-distribution algorithms. Finally, we present the state of the art of drift analysis, one of the most powerful analysis techniques developed in this field.
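To give a flavor of what drift analysis provides, a standard statement of the additive drift theorem is reproduced below; the survey's own formulations may be more general.

```latex
% Standard additive drift theorem, stated only as an illustration of the kind
% of runtime guarantee drift analysis delivers.
Let $(X_t)_{t \ge 0}$ be a non-negative stochastic process and let
$T = \min\{t \ge 0 : X_t = 0\}$ be its first hitting time of $0$.
If there exists $\delta > 0$ such that
\[
  E\bigl[X_t - X_{t+1} \mid X_t > 0\bigr] \ge \delta \quad \text{for all } t \ge 0,
\]
then
\[
  E[T \mid X_0] \le \frac{X_0}{\delta}.
\]
```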
Not all generate-and-test search algorithms are created equal. Bayesian Optimization (BO) invests a lot of computation time to generate the candidate solution that best balances the predicted value and the uncertainty given all previous data, taking increasingly longer as the number of evaluations grows. Evolutionary Algorithms (EAs), on the other hand, rely on search heuristics that typically do not depend on all previous data and can be executed in constant time. Both the BO and EA communities typically assess performance as a function of the number of evaluations. However, this is unfair once we start to compare the efficiency of these classes of algorithms, as their overheads for generating candidate solutions differ significantly. We suggest measuring the efficiency of generate-and-test search algorithms as the expected gain in the objective value per unit of computation time spent. We observe that which algorithm is preferable can change after a number of function evaluations. We therefore propose a new algorithm, a combination of Bayesian Optimization and an Evolutionary Algorithm, BEA for short, that starts with BO, then transfers knowledge to an EA, and subsequently runs the EA. We compare BEA with BO and with the EA. The results show that BEA outperforms both BO and the EA in terms of time efficiency, and ultimately leads to better performance on well-known benchmark objective functions with many local optima. Moreover, we test the three algorithms on nine test cases of robot learning problems, and here again we find that BEA outperforms the other algorithms.
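A minimal sketch of the proposed efficiency measure, expected gain in the best objective value per unit of computation time, is shown below; the example numbers and the implied switching heuristic are an illustrative reading of the idea, not the exact BEA criterion.

```python
# Minimal sketch of "gain per unit of computation time": track the best
# objective value over wall-clock time and report improvement per second.
# The histories below are made-up numbers for illustration.
import time


def gain_per_second(history):
    """history: list of (timestamp, best_objective_so_far) pairs."""
    (t0, f0), (t1, f1) = history[0], history[-1]
    return (f1 - f0) / max(t1 - t0, 1e-9)


if __name__ == "__main__":
    t = time.time()
    bo_history = [(t, 0.10), (t + 30.0, 0.55)]   # slow proposals, large improvements
    ea_history = [(t, 0.10), (t + 1.0, 0.12)]    # cheap proposals, small improvements
    # Comparing these rates over a run suggests when to hand over from BO to the EA.
    print("BO gain/s:", gain_per_second(bo_history))
    print("EA gain/s:", gain_per_second(ea_history))
```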