
DarwinML: A Graph-based Evolutionary Algorithm for Automated Machine Learning

Posted by: Fei Qi
Publication date: 2018
Research field: Informatics Engineering
Paper language: English





As an emerging field, Automated Machine Learning (AutoML) aims to reduce or eliminate manual operations that require expertise in machine learning. In this paper, a graph-based architecture is employed to represent flexible combinations of ML models, which provides a larger search space than tree-based and stacking-based architectures. Based on this representation, an evolutionary algorithm is proposed to search for the best architecture, where mutation and heredity operators are the key to architecture evolution. Combined with Bayesian hyper-parameter optimization, the proposed approach can automate the machine learning workflow. On the PMLB datasets, the proposed approach achieves state-of-the-art performance compared with TPOT, Autostacker, and auto-sklearn. Some of the optimized models have complex structures that would be difficult to obtain through manual design.
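
As a rough illustration of the graph-based idea, the sketch below encodes a pipeline as a small directed acyclic graph and applies simple structural mutations (splicing a component onto an edge, swapping a component). The primitive names, class names, and operators are illustrative assumptions for this sketch, not DarwinML's actual implementation, which also relies on heredity operators and Bayesian hyper-parameter optimization not shown here.

```python
import random

# Illustrative primitive set; the paper's actual component library is not listed in the abstract.
PRIMITIVES = ["pca", "select_kbest", "random_forest", "svm", "logistic_regression"]

class PipelineGraph:
    """An ML pipeline encoded as a DAG: nodes are components, edges are data flow."""

    def __init__(self):
        self.nodes = {0: "input", 1: "output"}
        self.edges = {(0, 1)}        # start with a direct input -> output connection
        self.next_id = 2

    def splice_node(self, rng):
        """Mutation: insert a random primitive in the middle of an existing edge.
        Splicing preserves acyclicity by construction."""
        src, dst = rng.choice(sorted(self.edges))
        node_id, self.next_id = self.next_id, self.next_id + 1
        self.nodes[node_id] = rng.choice(PRIMITIVES)
        self.edges.discard((src, dst))
        self.edges.update({(src, node_id), (node_id, dst)})

    def replace_node(self, rng):
        """Mutation: swap the primitive of a randomly chosen internal node."""
        internal = [i for i in self.nodes if i not in (0, 1)]
        if internal:
            self.nodes[rng.choice(internal)] = rng.choice(PRIMITIVES)

def mutate(graph, rng):
    """Apply one randomly chosen mutation operator, as in a (mu + lambda)-style loop."""
    rng.choice([graph.splice_node, graph.replace_node])(rng)
    return graph

rng = random.Random(0)
g = PipelineGraph()
for _ in range(5):
    mutate(g, rng)
print(g.nodes)
print(sorted(g.edges))
```

Because edges are only ever split, the toy encoding can grow arbitrarily branched structures while staying acyclic, which is the property that gives the graph representation a larger search space than tree- or stacking-based layouts.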




Read also

Jinjin Xu, Yaochu Jin, Wenli Du (2021)
Data-driven evolutionary optimization has witnessed great success in solving complex real-world optimization problems. However, existing data-driven optimization algorithms require that all data be centrally stored, which is not always practical and may be vulnerable to privacy leakage and security threats if the data must be collected from different devices. To address this issue, this paper proposes a federated data-driven evolutionary optimization framework that can perform data-driven optimization when the data are distributed across multiple devices. On the basis of federated learning, a sorted model aggregation method is developed for aggregating local surrogates based on radial-basis-function networks. In addition, a federated surrogate management strategy is suggested by designing an acquisition function that takes into account the information of both the global and local surrogate models. Empirical studies on a set of widely used benchmark functions in the presence of various data distributions demonstrate the effectiveness of the proposed framework.
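
A minimal sketch of the federated-surrogate idea is given below, assuming RBF centers shared across clients: each client fits the output weights of an RBF network on its private data, the server aggregates them (plain averaging here, standing in for the paper's sorted model aggregation), and a toy acquisition function trades off the global prediction against disagreement among local surrogates. All names and the specific acquisition form are illustrative, not the paper's.

```python
import numpy as np

def rbf_features(X, centers, gamma=1.0):
    """Gaussian RBF features with respect to a fixed, shared set of centers."""
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def fit_local_surrogate(X, y, centers):
    """Each client fits the output weights of an RBF network by least squares."""
    Phi = rbf_features(X, centers)
    w, *_ = np.linalg.lstsq(Phi, y, rcond=None)
    return w

def aggregate(weights):
    """Server-side aggregation of client weights.
    Plain averaging here; the paper's sorted aggregation is more involved."""
    return np.mean(weights, axis=0)

def acquisition(x, centers, w_global, w_locals, beta=1.0):
    """Toy acquisition: global prediction adjusted by disagreement among local
    surrogates, standing in for the paper's global/local information trade-off."""
    phi = rbf_features(x[None, :], centers)
    preds = np.array([phi @ w for w in w_locals]).ravel()
    return (phi @ w_global).item() - beta * preds.std()

rng = np.random.default_rng(0)
centers = rng.uniform(-2, 2, size=(20, 2))          # shared RBF centers
f = lambda X: (X ** 2).sum(axis=1)                  # toy objective (sphere function)

# Three clients, each holding its own private sample of the search space.
w_locals = []
for _ in range(3):
    X = rng.uniform(-2, 2, size=(30, 2))
    w_locals.append(fit_local_surrogate(X, f(X), centers))

w_global = aggregate(w_locals)
candidate = rng.uniform(-2, 2, size=2)
print("acquisition value:", acquisition(candidate, centers, w_global, w_locals))
```
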
We introduce a novel evolutionary algorithm (EA) with a semantic network-based representation. To enable this, we establish new formulations of the EA variation operators, crossover and mutation, adapted to work on semantic networks. The algorithm employs commonsense reasoning to ensure that all operations preserve the meaningfulness of the networks, using the ConceptNet and WordNet knowledge bases. The algorithm can be interpreted as a novel memetic algorithm (MA), given that (1) individuals represent pieces of information that undergo evolution, as in the original sense of memetics introduced by Dawkins; and (2) this differs from existing MAs, where the word memetic has been used as a synonym for local refinement after global optimization. For evaluating the approach, we introduce an analogical similarity-based fitness measure computed through structure mapping. This setup enables the open-ended generation of networks analogous to a given base network.
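
The sketch below is a loose illustration under strong simplifications: a hand-coded dictionary stands in for ConceptNet/WordNet lookups, individuals are sets of (head, relation, tail) triples, and a crude relation-overlap score stands in for the structure-mapping fitness. None of the names or rules below come from the paper.

```python
import random

# A tiny hand-coded knowledge base standing in for ConceptNet/WordNet queries.
KB = {
    "bird":    [("IsA", "animal"), ("CapableOf", "fly")],
    "plane":   [("IsA", "vehicle"), ("CapableOf", "fly")],
    "fish":    [("IsA", "animal"), ("CapableOf", "swim")],
    "animal":  [("CapableOf", "move")],
    "vehicle": [("CapableOf", "move")],
}

def random_network(rng, size=3):
    """An individual is a semantic network: a set of (head, relation, tail) triples
    drawn from the knowledge base, so every edge is 'commonsense-valid'."""
    triples = set()
    while len(triples) < size:
        head = rng.choice(list(KB))
        rel, tail = rng.choice(KB[head])
        triples.add((head, rel, tail))
    return triples

def mutate(network, rng):
    """Mutation: replace one triple with another valid triple from the KB."""
    child = set(network)
    child.discard(rng.choice(sorted(child)))
    head = rng.choice(list(KB))
    rel, tail = rng.choice(KB[head])
    child.add((head, rel, tail))
    return child

def crossover(a, b, rng):
    """Crossover: recombine triples drawn from two parent networks."""
    pool = sorted(a | b)
    return set(rng.sample(pool, k=min(len(pool), 3)))

def fitness(network, base):
    """Crude stand-in for analogical similarity: count relation labels shared with a
    base network, ignoring the concepts (far simpler than structure mapping)."""
    return len({r for _, r, _ in network} & {r for _, r, _ in base})

rng = random.Random(1)
base = {("bird", "CapableOf", "fly"), ("bird", "IsA", "animal")}
population = [random_network(rng) for _ in range(6)]
for _ in range(10):
    population.sort(key=lambda n: fitness(n, base), reverse=True)
    population[-1] = mutate(crossover(population[0], population[1], rng), rng)
print(max(population, key=lambda n: fitness(n, base)))
```
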
Population-based evolutionary algorithms have great potential for handling multiobjective optimisation problems. However, the performance of these algorithms depends largely on problem characteristics, and there is a need to improve it for a wider range of problems. References, which are often specified by the decision maker's preferences in different forms, are a very effective means of improving the performance of algorithms but have not been fully explored in the literature. This paper proposes a novel framework for the effective use of references to strengthen algorithms. This framework treats references as search targets that can be adjusted based on the information collected during the search. The proposed framework is combined with new strategies, such as reference adaptation and adaptive local mating, to solve different types of problems. The proposed algorithm is compared with the state of the art on a wide range of problems with diverse characteristics. The comparison and extensive sensitivity analysis demonstrate that the proposed algorithm is competitive and robust across the different types of problems studied in this paper.
Many-objective evolutionary algorithms (MOEAs), especially decomposition-based MOEAs, have attracted wide attention in recent years. Recent studies show that a well-designed combination of the decomposition method and the domination method can improve the performance, i.e., convergence and diversity, of an MOEA. In this paper, a novel way of combining the decomposition method and the domination method is proposed. More precisely, a set of weight vectors is employed to decompose a given many-objective optimization problem (MaOP), and a hybrid of the penalty-based boundary intersection function and dominance is proposed to compare local solutions within the subpopulation defined by a weight vector. An MOEA based on the hybrid method is implemented and tested on problems chosen from two well-known test suites, i.e., DTLZ and WFG. The experimental results show that our algorithm is very competitive in dealing with MaOPs. Subsequently, our algorithm is extended to solve constrained MaOPs, and the constrained version also shows good performance in terms of convergence and diversity. This reveals that using dominance locally and combining it with the decomposition method can effectively improve the performance of an MOEA.
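
The penalty-based boundary intersection (PBI) scalarization itself is standard in decomposition-based MOEAs; the sketch below shows it together with a plausible, assumed hybrid comparison that prefers Pareto dominance when it is decisive and falls back to PBI otherwise, since the abstract does not spell out the exact rule.

```python
import numpy as np

def pbi(f, weight, ideal, theta=5.0):
    """Penalty-based boundary intersection (PBI) scalarization.

    f      -- objective vector of a solution (minimization assumed)
    weight -- weight vector defining the subproblem / subpopulation
    ideal  -- ideal point z* (component-wise best objective values seen so far)
    theta  -- penalty on the distance perpendicular to the weight direction
    """
    w = weight / np.linalg.norm(weight)
    d1 = abs(np.dot(f - ideal, w))               # distance along the weight direction
    d2 = np.linalg.norm(f - (ideal + d1 * w))    # perpendicular distance (diversity penalty)
    return d1 + theta * d2

def dominates(a, b):
    """Pareto dominance: a dominates b if it is no worse in every objective
    and strictly better in at least one."""
    return np.all(a <= b) and np.any(a < b)

def local_compare(a, b, weight, ideal):
    """Assumed hybrid comparison: use dominance when decisive, else compare PBI values."""
    if dominates(a, b):
        return -1
    if dominates(b, a):
        return 1
    return -1 if pbi(a, weight, ideal) <= pbi(b, weight, ideal) else 1

ideal = np.array([0.0, 0.0])
weight = np.array([0.5, 0.5])
a, b = np.array([0.4, 0.6]), np.array([0.2, 0.8])
print(local_compare(a, b, weight, ideal))   # -1: a is preferred under the hybrid comparison
```
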
Automated machine learning (AutoML) aims to find optimal machine learning solutions automatically for a given machine learning problem. It could relieve data scientists of the multifarious manual tuning process and give domain experts access to off-the-shelf machine learning solutions without requiring extensive experience. In this paper, we review the current developments of AutoML in terms of three categories: automated feature engineering (AutoFE), automated model and hyperparameter learning (AutoMHL), and automated deep learning (AutoDL). State-of-the-art techniques adopted in the three categories are presented, including Bayesian optimization, reinforcement learning, evolutionary algorithms, and gradient-based approaches. We summarize popular AutoML frameworks and conclude with the current open challenges of AutoML.
