
An Online Prediction Approach Based on Incremental Support Vector Machine for Dynamic Multiobjective Optimization

Published by Min Jiang
Publication date: 2021
Research field: Informatics Engineering
Paper language: English





Real-world multiobjective optimization problems usually involve conflicting objectives that change over time, which requires the optimization algorithms to quickly track the Pareto optimal front (POF) when the environment changes. In recent years, evolutionary algorithms based on prediction models have been considered promising. However, most existing approaches only make predictions based on the linear correlation between a finite number of optimal solutions in two or three previous environments. These incomplete information extraction strategies may lead to low prediction accuracy in some instances. In this paper, a novel prediction algorithm based on incremental support vector machine (ISVM) is proposed, called ISVM-DMOEA. We treat the solving of dynamic multiobjective optimization problems (DMOPs) as an online learning process, using the continuously obtained optimal solutions to update an incremental support vector machine without discarding the solution information from earlier time steps. The ISVM is then used to filter random solutions and generate an initial population for the next moment. To overcome the obstacle of insufficient training samples, a synthetic minority oversampling strategy is applied before the ISVM is trained. The advantage of this approach is that the nonlinear correlation between solutions can be explored online by the ISVM, and the information contained in all historical optimal solutions can be exploited to a greater extent. The experimental results and comparisons with selected state-of-the-art algorithms demonstrate that the proposed algorithm can effectively tackle dynamic multiobjective optimization problems.
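A minimal sketch of the workflow described above, assuming scikit-learn's SGDClassifier with hinge loss as a stand-in for an incremental SVM and imbalanced-learn's SMOTE for the synthetic minority oversampling step; all function and variable names are illustrative, not the authors' code.

```python
import numpy as np
from sklearn.linear_model import SGDClassifier
from imblearn.over_sampling import SMOTE

isvm = SGDClassifier(loss="hinge")  # linear SVM that can be updated incrementally

def update_isvm(pos_solutions, dominated_solutions):
    """Incrementally update the classifier with the POS found in the current
    environment (label 1) and dominated solutions (label 0), keeping earlier knowledge."""
    X = np.vstack([pos_solutions, dominated_solutions])
    y = np.hstack([np.ones(len(pos_solutions)), np.zeros(len(dominated_solutions))])
    # Oversample the minority class first (assumes it has at least 6 samples).
    X_bal, y_bal = SMOTE(k_neighbors=5).fit_resample(X, y)
    isvm.partial_fit(X_bal, y_bal, classes=[0, 1])

def initial_population(pop_size, n_var, lower, upper, max_batches=50):
    """Filter random candidates through the ISVM to seed the next environment."""
    kept = []
    for _ in range(max_batches):
        cand = np.random.uniform(lower, upper, size=(pop_size, n_var))
        kept.extend(cand[isvm.predict(cand) == 1])  # keep predicted-optimal points
        if len(kept) >= pop_size:
            break
    # Fall back to random individuals if the classifier rejects too many points.
    while len(kept) < pop_size:
        kept.append(np.random.uniform(lower, upper, size=n_var))
    return np.array(kept[:pop_size])
```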




Read also

107 - Weizhen Hu, Min Jiang, Xing Gao 2019
The main feature of Dynamic Multi-objective Optimization Problems (DMOPs) is that the objective functions change with time or the environment. One promising approach for solving DMOPs is to reuse the obtained Pareto optimal set (POS) to train prediction models via machine learning. In this paper, we train an Incremental Support Vector Machine (ISVM) classifier with the past POS, and the candidate solutions of the DMOP at the next moment are then filtered through the trained ISVM classifier. A high-quality initial population is generated by the ISVM classifier, and a variety of population-based dynamic multi-objective optimization algorithms can benefit from this population. To verify this idea, we incorporate the proposed approach into three evolutionary algorithms: multi-objective particle swarm optimization (MOPSO), the Nondominated Sorting Genetic Algorithm II (NSGA-II), and the Regularity Model-based multi-objective estimation of distribution algorithm (RM-MEDA). Experiments with these algorithms show the effectiveness of the approach.
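The filtered candidates are meant to warm-start an ordinary population-based optimizer at the next time step. Below is a hedged sketch of that hand-off, assuming pymoo is available and that its NSGA2 accepts a NumPy array of design vectors through the sampling argument; the random seed_pop and the ZDT1 test problem are placeholders only.

```python
import numpy as np
from pymoo.algorithms.moo.nsga2 import NSGA2
from pymoo.optimize import minimize
from pymoo.problems import get_problem

problem = get_problem("zdt1")  # stand-in benchmark problem

# seed_pop would normally hold the ISVM-filtered candidates; random values are
# used here only so that the snippet runs on its own.
seed_pop = np.random.uniform(problem.xl, problem.xu, size=(100, problem.n_var))

algorithm = NSGA2(pop_size=100, sampling=seed_pop)  # warm-started initial population
res = minimize(problem, algorithm, ("n_gen", 50), verbose=False)
print(res.F.shape)  # objective vectors of the approximated POF at this moment
```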
Dynamic Multi-objective Optimization Problems (DMOPs) refer to optimization problems whose objective functions change with time. Solving DMOPs requires that the Pareto Optimal Set (POS) at different moments can be accurately found, which is very difficult due to the dynamics of the optimization problems. The POS obtained in the past can help us find the POS at the next moment more quickly and accurately. Therefore, in this paper we present a Support Vector Machine (SVM) based Dynamic Multi-Objective Evolutionary optimization Algorithm, called SVM-DMOEA. The algorithm uses the POS that has been obtained to train an SVM and then uses the trained SVM to classify the solutions of the dynamic optimization problem at the next moment, and thus it is able to generate an initial population consisting of individuals recognized by the trained SVM. The initial population can be fed into any population-based optimization algorithm, e.g., the Nondominated Sorting Genetic Algorithm II (NSGA-II), to obtain the POS at that moment. The experimental results show the validity of our proposed approach.
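A sketch of the SVM-DMOEA training step under the natural labeling assumption that nondominated solutions from the previous moment are positive examples and dominated ones are negative; helper names are illustrative and this is not the authors' implementation.

```python
import numpy as np
from sklearn.svm import SVC

def dominates(f_a, f_b):
    """True if objective vector f_a Pareto-dominates f_b (minimization)."""
    return np.all(f_a <= f_b) and np.any(f_a < f_b)

def label_by_dominance(X, F):
    """Label each solution 1 if no other solution dominates it, else 0."""
    y = np.ones(len(X), dtype=int)
    for i in range(len(X)):
        if any(dominates(F[j], F[i]) for j in range(len(X)) if j != i):
            y[i] = 0
    return y

# X: decision vectors evaluated at the previous time step, F: their objectives.
# svm = SVC(kernel="rbf").fit(X, label_by_dominance(X, F))
# Candidates with svm.predict(...) == 1 then form the initial population that is
# handed to NSGA-II (or any other population-based optimizer) for the next moment.
```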
Wasserstein Distributionally Robust Optimization (DRO) is concerned with finding decisions that perform well on data that are drawn from the worst-case probability distribution within a Wasserstein ball centered at a certain nominal distribution. In recent years, it has been shown that various DRO formulations of learning models admit tractable convex reformulations. However, most existing works propose to solve these convex reformulations by general-purpose solvers, which are not well-suited for tackling large-scale problems. In this paper, we focus on a family of Wasserstein distributionally robust support vector machine (DRSVM) problems and propose two novel epigraphical projection-based incremental algorithms to solve them. The updates in each iteration of these algorithms can be computed in a highly efficient manner. Moreover, we show that the DRSVM problems considered in this paper satisfy a Hölderian growth condition with explicitly determined growth exponents. Consequently, we are able to establish the convergence rates of the proposed incremental algorithms. Our numerical results indicate that the proposed methods are orders of magnitude faster than the state-of-the-art, and the performance gap grows considerably as the problem size increases.
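For intuition only, the sketch below uses a widely cited special case in which the Wasserstein DRSVM objective (features perturbed under an l2 transport cost, labels left unchanged) reduces to the empirical hinge loss plus an epsilon-scaled norm penalty on w, and minimizes it by plain subgradient descent; it is not the epigraphical projection-based incremental algorithm proposed in the paper, and the step-size schedule is an arbitrary choice.

```python
import numpy as np

def drsvm_objective(w, X, y, eps):
    """Worst-case hinge-loss risk in the l2-cost, feature-only-perturbation case."""
    hinge = np.maximum(0.0, 1.0 - y * (X @ w)).mean()
    return hinge + eps * np.linalg.norm(w)  # dual of the l2 norm is l2

def drsvm_subgradient_descent(X, y, eps=0.1, lr=0.01, n_iter=500):
    w = np.zeros(X.shape[1])
    for t in range(n_iter):
        margins = y * (X @ w)
        active = margins < 1.0  # samples with nonzero hinge loss
        g = -(y[active, None] * X[active]).sum(axis=0) / len(X)
        g += eps * w / (np.linalg.norm(w) + 1e-12)  # subgradient of the norm term
        w -= lr / np.sqrt(t + 1) * g
    return w
```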
188 - Weiyu Chen, Hisao Ishibuchi, 2021
Subset selection is an important component in evolutionary multiobjective optimization (EMO) algorithms. Clustering, as a classic method to group similar data points together, has been used for subset selection in some fields. However, clustering-based methods have not been evaluated in the context of subset selection from solution sets obtained by EMO algorithms. In this paper, we first review some classic clustering algorithms. We also point out that another popular subset selection method, i.e., inverted generational distance (IGD)-based subset selection, can be viewed as clustering. Then, we perform a comprehensive experimental study to evaluate the performance of various clustering algorithms in different scenarios. Experimental results are analyzed in detail, and some suggestions about the use of clustering algorithms for subset selection are derived. Additionally, we demonstrate that decision makers' preferences can be introduced into clustering-based subset selection.
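A minimal sketch of clustering-based subset selection in the spirit described above: cluster the objective vectors with k-means and keep, for each cluster, the solution closest to its centre. The function is illustrative and not the exact procedures evaluated in the paper.

```python
import numpy as np
from sklearn.cluster import KMeans

def select_subset(F, k):
    """F: (n, m) array of objective vectors; returns indices of k selected solutions."""
    km = KMeans(n_clusters=k, n_init=10).fit(F)
    selected = []
    for c in range(k):
        members = np.where(km.labels_ == c)[0]
        dists = np.linalg.norm(F[members] - km.cluster_centers_[c], axis=1)
        selected.append(members[np.argmin(dists)])  # representative of cluster c
    return np.array(selected)
```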
In this paper, a data-analytical approach featuring support vector machines (SVM) is employed to train a predictive model over an experimental dataset, which consists of the most relevant studies for two-phase flow pattern prediction. The database for this study consists of flow patterns, or flow regimes, in gas-liquid two-phase flow. The term flow pattern refers to the geometrical configuration of the gas and liquid phases in the pipe. When gas and liquid flow simultaneously in a pipe, the two phases can distribute themselves in a variety of flow configurations. Gas-liquid two-phase flow occurs ubiquitously in major industrial fields: the petroleum, chemical, nuclear, and geothermal industries. The flow configurations differ from each other in the spatial distribution of the interface, resulting in different flow characteristics. Experimental results obtained by applying the presented methodology to different combinations of flow patterns demonstrate that the proposed approach outperforms state-of-the-art alternatives, achieving 97% correct classification. The results suggest that machine learning could be used as an effective tool for automatic detection and classification of gas-liquid flow patterns.
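A hedged sketch of the kind of SVM classification pipeline described above; the file name, feature columns (superficial gas and liquid velocities, pipe diameter, inclination), and hyperparameters are assumptions for illustration, not the paper's actual database or settings.

```python
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
from sklearn.metrics import accuracy_score

df = pd.read_csv("two_phase_flow.csv")  # hypothetical file name
X = df[["v_sg", "v_sl", "pipe_diameter", "inclination"]]  # assumed feature columns
y = df["flow_pattern"]  # e.g. bubbly, slug, annular, stratified

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, stratify=y, random_state=0)
model = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=10, gamma="scale"))
model.fit(X_tr, y_tr)
print("test accuracy:", accuracy_score(y_te, model.predict(X_te)))
```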