
IEO: Intelligent Evolutionary Optimisation for Hyperparameter Tuning

Posted by: Yuxi Huan
Publication date: 2020
Research field: Informatics Engineering
Paper language: English





Hyperparameter optimisation is a crucial process in searching for the optimal machine learning model. The efficiency of finding the optimal hyperparameter settings has been a major concern in recent research, since the optimisation process can be time-consuming, especially when the objective functions are highly expensive to evaluate. In this paper, we introduce an intelligent evolutionary optimisation algorithm which applies machine learning techniques to the traditional evolutionary algorithm to accelerate the overall optimisation process of tuning machine learning models in classification problems. We demonstrate our Intelligent Evolutionary Optimisation (IEO) in a series of controlled experiments, comparing it with traditional evolutionary optimisation in hyperparameter tuning. The empirical study shows that our approach accelerates the optimisation speed by 30.40% on average and up to 77.06% in the best scenarios.
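The abstract does not spell out which machine learning technique IEO couples with the evolutionary loop, so the following is only a generic, hypothetical sketch of surrogate-assisted evolutionary hyperparameter tuning: a random-forest regressor (an assumption, not necessarily the authors' choice) learns from configurations evaluated so far and pre-screens offspring, so that only the most promising ones pay for the expensive objective.

```python
import numpy as np
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

X, y = load_iris(return_X_y=True)
rng = np.random.default_rng(0)
bounds = np.array([[1e-3, 1e3], [1e-4, 1e1]])  # search ranges for C and gamma

def evaluate(cfg):
    """Expensive objective: cross-validated accuracy of an SVM."""
    return cross_val_score(SVC(C=cfg[0], gamma=cfg[1]), X, y, cv=3).mean()

# Initial population, evaluated exactly.
pop = rng.uniform(bounds[:, 0], bounds[:, 1], size=(10, 2))
fit = np.array([evaluate(c) for c in pop])
seen_X, seen_y = list(pop), list(fit)

for _ in range(5):
    # Cheap surrogate fitted on every configuration evaluated so far.
    surrogate = RandomForestRegressor(n_estimators=100, random_state=0)
    surrogate.fit(np.array(seen_X), np.array(seen_y))

    # Generate offspring by Gaussian mutation of the current population.
    offspring = pop + rng.normal(scale=0.05, size=pop.shape) * (bounds[:, 1] - bounds[:, 0])
    offspring = np.clip(offspring, bounds[:, 0], bounds[:, 1])

    # Screen offspring with the surrogate; run the expensive evaluation
    # only for the most promising half.
    promising = np.argsort(surrogate.predict(offspring))[-5:]
    for idx in promising:
        score = evaluate(offspring[idx])
        seen_X.append(offspring[idx]); seen_y.append(score)
        worst = np.argmin(fit)
        if score > fit[worst]:          # replace the weakest individual
            pop[worst], fit[worst] = offspring[idx], score

print("best cross-validated accuracy:", fit.max())
```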




Read also

Modern machine learning algorithms crucially rely on several design decisions to achieve strong performance, making the problem of Hyperparameter Optimization (HPO) more important than ever. Here, we combine the advantages of the popular bandit-based HPO method Hyperband (HB) and the evolutionary search approach of Differential Evolution (DE) to yield a new HPO method which we call DEHB. Comprehensive results on a very broad range of HPO problems, as well as a wide range of tabular benchmarks from neural architecture search, demonstrate that DEHB achieves strong performance far more robustly than all previous HPO methods we are aware of, especially for high-dimensional problems with discrete input dimensions. For example, DEHB is up to 1000x faster than random search. It is also efficient in computational time, conceptually simple and easy to implement, positioning it well to become a new default HPO method.
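The toy sketch below is not the reference DEHB implementation; it only illustrates the combination the abstract describes, where Differential Evolution proposes the configurations that a successive-halving bracket (the core of Hyperband) promotes to larger budgets. The population size, budgets, and stand-in objective are assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
DIM, F, CR = 2, 0.5, 0.9             # dimension, DE scale factor, crossover rate

def objective(x, budget):
    """Cheap stand-in for a validation loss; a larger budget means less noise."""
    return float(np.sum((x - 0.3) ** 2) + rng.normal(scale=1.0 / budget))

def de_offspring(pop):
    """One DE/rand/1/bin child per parent, clipped to the unit box."""
    children = []
    for parent in pop:
        a, b, c = pop[rng.choice(len(pop), 3, replace=False)]
        mutant = a + F * (b - c)
        cross = rng.random(DIM) < CR
        children.append(np.clip(np.where(cross, mutant, parent), 0.0, 1.0))
    return np.array(children)

# One successive-halving bracket: start cheap and wide, keep the best third,
# and let DE propose the configurations promoted to the next budget.
pop = rng.random((16, DIM))
for budget in (1, 3, 9):
    scores = np.array([objective(x, budget) for x in pop])
    survivors = pop[np.argsort(scores)[: max(len(pop) // 3, 3)]]
    pop = de_offspring(survivors) if budget < 9 else survivors

best = pop[np.argmin([objective(x, 9) for x in pop])]
print("best configuration found:", best)
```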
In recent years, graph neural networks (GNNs) have gained increasing attention, as they possess an excellent capability for processing graph-related problems. In practice, hyperparameter optimisation (HPO) is critical for GNNs to achieve satisfactory results, but this process is costly because evaluating different hyperparameter settings requires excessively training many GNNs. Many approaches have been proposed for HPO, which aim to identify promising hyperparameters efficiently. In particular, the genetic algorithm (GA) for HPO has been explored, which treats the GNN as a black-box model of which only the outputs can be observed given a set of hyperparameters. However, because GNN models are sophisticated and evaluating hyperparameters on GNNs is expensive, GA requires advanced techniques to balance the exploration and exploitation of the search and to make the optimisation more effective given limited computational resources. Therefore, we propose a tree-structured mutation strategy for GA to alleviate this issue. We also review recent HPO work in which the tree-structure idea has room to develop, and we hope our approach can further improve these HPO methods in the future.
Graph representation of structured data can facilitate the extraction of stereoscopic features, and it has demonstrated excellent ability when working with deep learning systems, the so-called Graph Neural Networks (GNNs). Choosing a promising architecture for constructing GNNs can be cast as a hyperparameter optimisation problem, a very challenging task due to the size of the underlying search space and the high computational cost of evaluating candidate GNNs. To address this issue, this research presents a novel genetic algorithm with a hierarchical evaluation strategy (HESGA), which combines the full evaluation of GNNs with a fast evaluation approach. In the full evaluation, a GNN is represented by a set of hyperparameter values and trained on a specified dataset, and root mean square error (RMSE) is used to measure the quality of the GNN represented by that set of hyperparameter values (for regression problems). In the proposed fast evaluation process, training is interrupted at an early stage, and the difference in RMSE values between the starting and interrupted epochs is used as a fast score, which indicates the potential of the GNN under consideration. To coordinate both types of evaluation, the proposed hierarchical strategy uses the fast evaluation at a lower level to recommend candidates to a higher level, where the full evaluation acts as a final assessor to maintain a group of elite individuals. To validate the effectiveness of HESGA, we apply it to optimise two types of deep graph neural networks. The experimental results on three benchmark datasets demonstrate its advantages compared to Bayesian hyperparameter optimisation.
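The fast-score idea in this abstract is concrete enough to sketch. In the hedged example below, a small MLPRegressor stands in for a GNN, the "fast score" is the drop in validation RMSE over a handful of early epochs, and only the top-scoring candidates are passed up for full evaluation. The dataset, budgets, and hyperparameter choices are illustrative assumptions, not HESGA's actual configuration.

```python
import numpy as np
from sklearn.datasets import make_regression
from sklearn.metrics import mean_squared_error
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor

X, y = make_regression(n_samples=400, n_features=10, noise=5.0, random_state=0)
X_tr, X_va, y_tr, y_va = train_test_split(X, y, random_state=0)
rng = np.random.default_rng(0)

def rmse(model):
    return mean_squared_error(y_va, model.predict(X_va)) ** 0.5

def fast_score(cfg, early_epochs=5):
    """Fast evaluation: RMSE drop between the first and the interrupted epoch."""
    model = MLPRegressor(hidden_layer_sizes=(int(cfg),), max_iter=1,
                         warm_start=True, random_state=0)
    model.fit(X_tr, y_tr)
    start = rmse(model)
    for _ in range(early_epochs - 1):
        model.fit(X_tr, y_tr)          # one more epoch per call (warm_start)
    return start - rmse(model)          # a larger drop means a more promising model

def full_eval(cfg):
    """Full evaluation: train to convergence and report validation RMSE."""
    model = MLPRegressor(hidden_layer_sizes=(int(cfg),), max_iter=300, random_state=0)
    model.fit(X_tr, y_tr)
    return rmse(model)

# Lower level: fast-score a batch of candidate hidden-layer sizes.
candidates = rng.integers(8, 256, size=8)
recommended = sorted(candidates, key=fast_score, reverse=True)[:3]

# Higher level: the full evaluation acts as the final assessor for elites.
elites = sorted(recommended, key=full_eval)
print("elite hidden sizes (best first):", elites)
```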
Recently, the study of graph neural networks (GNNs) has attracted much attention and achieved promising performance in molecular property prediction. Most GNNs for molecular property prediction are based on the idea of learning representations for the nodes by aggregating the information of their neighbor nodes (e.g. atoms). These representations can then be passed to subsequent layers to deal with individual downstream tasks. The architectures of GNNs can therefore be considered as being composed of two core parts: graph-related layers and task-specific layers. For real-world molecular problems, hyperparameter optimisation for those layers is vital. Hyperparameter optimisation (HPO) becomes expensive in this situation because evaluating candidate solutions requires massive computational resources to train and validate models. Furthermore, a larger search space often makes HPO problems more challenging. In this research, we focus on the impact of selecting the two types of GNN hyperparameters, those belonging to graph-related layers and those of task-specific layers, on the performance of GNNs for molecular property prediction. In our experiments, we employed a state-of-the-art evolutionary algorithm (i.e., CMA-ES) for HPO. The results reveal that optimising the two types of hyperparameters separately can improve GNN performance, but optimising both types simultaneously leads to the largest improvements. Meanwhile, our study further confirms the importance of HPO for GNNs in molecular property prediction problems.
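As a minimal, hedged illustration of the simultaneous setting this abstract finds most effective, the sketch below runs CMA-ES (via the open-source `cma` package) over a single joint hyperparameter vector. The four-dimensional encoding of graph-related and task-specific hyperparameters and the stand-in objective are purely hypothetical.

```python
import cma                      # pip install cma
import numpy as np

def validation_error(x):
    """Stand-in for training and validating a GNN with hyperparameters x.
    Hypothetically, x[0:2] encode graph-related layers (depth, hidden width)
    and x[2:4] task-specific layers (MLP width, dropout), all normalised."""
    target = np.array([0.4, 0.6, 0.2, 0.8])          # made-up optimum
    return float(np.sum((np.asarray(x) - target) ** 2))

# Optimise all four hyperparameters jointly in a single CMA-ES search.
es = cma.CMAEvolutionStrategy([0.5] * 4, 0.3)
for _ in range(20):                                  # small budget for the sketch
    solutions = es.ask()                             # sample a population
    es.tell(solutions, [validation_error(s) for s in solutions])

print("best joint configuration:", es.result.xbest)
```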
Machine learning techniques lend themselves as promising decision-making and analytic tools in a wide range of applications. Different ML algorithms have various hyper-parameters, and in order to tailor an ML model to a specific application, a large number of hyper-parameters should be tuned. Tuning the hyper-parameters directly affects performance (accuracy and run-time). However, for large-scale search spaces, efficiently exploring the huge number of hyper-parameter combinations is computationally challenging, and existing automated hyper-parameter tuning techniques suffer from high time complexity. In this paper, we propose HyP-ABC, an automatic innovative hybrid hyper-parameter optimization algorithm using the modified artificial bee colony approach, to measure the classification accuracy of three ML algorithms, namely random forest, extreme gradient boosting, and support vector machine. Compared to state-of-the-art techniques, HyP-ABC is more efficient and has a limited number of parameters to be tuned, making it worthwhile for real-world hyper-parameter optimization problems. We further compare our proposed HyP-ABC algorithm with state-of-the-art techniques. To ensure the robustness of the proposed method, the algorithm considers a wide range of feasible hyper-parameter values and is tested using a real-world educational dataset.
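HyP-ABC's specific modifications are not described in the abstract, so the sketch below only shows a plain artificial-bee-colony loop tuning two random-forest hyperparameters (one of the three models the abstract mentions). The colony size, trial limit, and search ranges are assumptions, and the employed and onlooker phases are merged for brevity.

```python
import numpy as np
from sklearn.datasets import load_wine
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

X, y = load_wine(return_X_y=True)
rng = np.random.default_rng(2)
low, high = np.array([10.0, 2.0]), np.array([100.0, 20.0])  # n_estimators, max_depth

def accuracy(src):
    clf = RandomForestClassifier(n_estimators=int(src[0]), max_depth=int(src[1]),
                                 random_state=0)
    return cross_val_score(clf, X, y, cv=3).mean()

n_sources, limit = 6, 3
sources = rng.uniform(low, high, size=(n_sources, 2))       # initial food sources
scores = np.array([accuracy(s) for s in sources])
trials = np.zeros(n_sources, dtype=int)

for _ in range(5):                                           # small budget for the sketch
    # Employed/onlooker phase (merged for brevity): one local move per source.
    for i in range(n_sources):
        k, j = rng.integers(n_sources), rng.integers(2)
        candidate = sources[i].copy()
        candidate[j] += rng.uniform(-1, 1) * (sources[i, j] - sources[k, j])
        candidate = np.clip(candidate, low, high)
        new_score = accuracy(candidate)
        if new_score > scores[i]:                            # greedy replacement
            sources[i], scores[i], trials[i] = candidate, new_score, 0
        else:
            trials[i] += 1
    # Scout phase: re-initialise sources that stopped improving.
    for i in np.where(trials > limit)[0]:
        sources[i] = rng.uniform(low, high)
        scores[i], trials[i] = accuracy(sources[i]), 0

best = int(np.argmax(scores))
print("best (n_estimators, max_depth):", sources[best], "accuracy:", scores[best])
```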
