Aggregation functions largely determine the convergence and diversity performance of decomposition-based multi-objective evolutionary algorithms. Nevertheless, the traditional Tchebycheff function does not consider the matching relationship between weight vectors and candidate solutions. In this paper, the concept of matching degree is proposed, which employs the vectorial angles between weight vectors and candidate solutions. Based on the matching degree, a new modified Tchebycheff aggregation function is proposed, which integrates the matching degree into the Tchebycheff aggregation function while retaining the latter's functionality. Based on the proposed decomposition approach, a new multiobjective optimization algorithm, the decomposition-based multi-objective state transition algorithm, is proposed. Experimental results show that the proposed algorithm is highly competitive with other state-of-the-art multiobjective optimization algorithms.
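To make the idea concrete, here is a minimal Python sketch of how an angle-based matching degree could enter the aggregation. The paper's exact formula is not reproduced here: `matching_degree`, `modified_tchebycheff`, and the penalty weight `theta` are illustrative names, and the cosine-angle definition and additive penalty are assumptions.

```python
import numpy as np

def tchebycheff(f, w, z_star):
    """Classical Tchebycheff aggregation: g(x | w, z*) = max_i w_i * |f_i(x) - z*_i|."""
    return np.max(w * np.abs(f - z_star))

def matching_degree(f, w, z_star):
    """Vectorial angle between the weight vector and the translated
    objective vector F(x) - z* (illustrative definition)."""
    d = f - z_star
    c = d @ w / (np.linalg.norm(d) * np.linalg.norm(w) + 1e-12)
    return np.arccos(np.clip(c, -1.0, 1.0))

def modified_tchebycheff(f, w, z_star, theta=5.0):
    """Hypothetical modified aggregation: penalize a large angle,
    i.e. a poor match between weight vector and candidate solution."""
    return tchebycheff(f, w, z_star) + theta * matching_degree(f, w, z_star)

# usage:
# f, w, z = np.array([0.4, 0.7]), np.array([0.5, 0.5]), np.zeros(2)
# print(modified_tchebycheff(f, w, z))
```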
In this paper, a novel multiagent-based state transition optimization algorithm with a linear convergence rate, named MASTA, is constructed. It first generates an initial population randomly and uniformly. Then, it applies the basic state transition algorithm (STA) to the population to generate a new population. After that, it computes the fitness values of all individuals and identifies the best individuals in the new population. It then performs an effective communication operation and updates the population. Through this iterative process, the best solution is found. Experimental results on common benchmark functions, together with comparisons against several state-of-the-art optimization algorithms, show that the proposed MASTA algorithm achieves superior or comparable performance.
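The following Python sketch outlines the iteration described above. It uses only STA's rotation operator as the population move (STA also has translation, expansion, and axesion operators) and a simple drift-toward-the-best step as a stand-in for the paper's communication operation; both simplifications, and the name `masta`, are assumptions for illustration.

```python
import numpy as np

def rotation(x, alpha=1.0):
    """Basic STA rotation operator: x' = x + alpha * R x / (n ||x||),
    with R a random matrix with entries in [-1, 1]."""
    n = len(x)
    R = np.random.uniform(-1.0, 1.0, size=(n, n))
    return x + alpha * (R @ x) / (n * np.linalg.norm(x) + 1e-12)

def masta(objective, dim, pop_size=20, iters=200, bound=5.0):
    """Sketch of the MASTA loop: uniform random initialization, one STA
    move per agent with greedy acceptance, then a communication step
    that drifts agents toward the current best agent."""
    pop = np.random.uniform(-bound, bound, size=(pop_size, dim))
    fit = np.apply_along_axis(objective, 1, pop)
    for _ in range(iters):
        for i in range(pop_size):
            cand = rotation(pop[i])
            f_cand = objective(cand)
            if f_cand < fit[i]:  # greedy acceptance of the STA move
                pop[i], fit[i] = cand, f_cand
        best = pop[np.argmin(fit)].copy()
        pop += 0.5 * np.random.rand(pop_size, 1) * (best - pop)  # communication
        fit = np.apply_along_axis(objective, 1, pop)
    return pop[np.argmin(fit)]

# usage: masta(lambda x: np.sum(x**2), dim=10)
```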
In this article, we develop a gradient-based algorithm for the solution of multiobjective optimization problems with uncertainties. To this end, an additional condition on the descent direction is derived to account for inaccuracies in the gradients; it is then incorporated into a subdivision algorithm for the computation of global solutions to multiobjective optimization problems. Convergence to a superset of the Pareto set is proved, and an upper bound on the maximal distance to the set of substationary points is given. Besides its applicability to problems with uncertainties, the algorithm is developed with the intention of combining it with model order reduction techniques in order to efficiently solve PDE-constrained multiobjective optimization problems.
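For intuition, the sketch below computes a Fliege-Svaiter-style common descent direction (the negated minimum-norm element of the convex hull of the gradients) and then checks a tightened descent test under a gradient error bound `eps`. The test `g_i^T d + eps ||d|| <= -||d||^2` is a hedged reading of "an additional condition for the descent direction", not the paper's exact condition.

```python
import numpy as np
from scipy.optimize import minimize

def common_descent_direction(grads, eps=0.0):
    """d = -g*, where g* is the minimum-norm element of the convex hull
    of the (possibly inexact) objective gradients. Returns None if the
    tightened descent test fails for the given error bound eps."""
    G = np.asarray(grads, dtype=float)                 # shape (m, n)
    m = G.shape[0]
    obj = lambda lam: 0.5 * np.dot(lam @ G, lam @ G)   # ||sum_i lam_i g_i||^2 / 2
    cons = ({'type': 'eq', 'fun': lambda lam: lam.sum() - 1.0},)
    res = minimize(obj, np.full(m, 1.0 / m),
                   bounds=[(0.0, 1.0)] * m, constraints=cons)
    d = -(res.x @ G)
    nd = np.linalg.norm(d)
    if all(g @ d + eps * nd <= -nd**2 + 1e-10 for g in G):
        return d
    return None  # no direction certified to descend for all objectives
```

With `eps = 0` this reduces to the classical multiobjective steepest descent test, which `d = -g*` satisfies by construction.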
Recently, a deep reinforcement learning method was proposed to solve multiobjective optimization problems. In this method, the multiobjective optimization problem is decomposed into a number of single-objective optimization subproblems, and all the subproblems are optimized in a collaborative manner. Each subproblem is modeled with a pointer network, and the model is trained with reinforcement learning. However, when the pointer network extracts the features of an instance, it ignores the underlying structural information of the input nodes. This paper therefore proposes a multiobjective deep reinforcement learning method that uses decomposition and an attention model to solve multiobjective optimization problems. In our method, each subproblem is solved by an attention model, which can exploit the structural features as well as the node features of the input nodes. Experimental results on the multiobjective travelling salesman problem show that the proposed algorithm achieves better performance than the previous method.
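The decomposition step can be illustrated independently of the neural model. The sketch below scores a tour under one scalarized subproblem of a bi-objective TSP; the weighted-sum form `lam * f_a + (1 - lam) * f_b` is an assumption (a Tchebycheff scalarization could be used instead), and one model is trained per weight.

```python
import numpy as np

def subproblem_cost(tour, dist_a, dist_b, lam):
    """Weighted-sum scalarization of one decomposed subproblem for a
    bi-objective TSP. dist_a and dist_b are the two edge-cost matrices
    (e.g. distance and time); lam in [0, 1] identifies the subproblem."""
    idx = np.asarray(tour)
    nxt = np.roll(idx, -1)               # close the tour back to the start
    f_a = dist_a[idx, nxt].sum()         # first objective along the tour
    f_b = dist_b[idx, nxt].sum()         # second objective along the tour
    return lam * f_a + (1.0 - lam) * f_b

# usage with two random cost matrices and the identity tour:
# n = 5; da, db = np.random.rand(n, n), np.random.rand(n, n)
# print(subproblem_cost(range(n), da, db, lam=0.3))
```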
Sparse optimization is a central problem in machine learning and computer vision. However, this problem is inherently NP-hard and thus difficult to solve in general. Combinatorial search methods find the globally optimal solution but are confined to small problems, while coordinate descent methods are efficient but often suffer from poor local minima. This paper considers a new block decomposition algorithm that combines the effectiveness of combinatorial search methods with the efficiency of coordinate descent methods. Specifically, we use a random and/or a greedy strategy to select a subset of coordinates as the working set, and then perform a global combinatorial search over the working set based on the original objective function. We show that our method finds stronger stationary points than Amir Beck et al.'s coordinate-wise optimization method. In addition, we establish the convergence rate of our algorithm. Our experiments on sparse-regularized and sparsity-constrained least squares optimization problems demonstrate that our method achieves state-of-the-art accuracy; for example, it generally outperforms the well-known greedy pursuit method.
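The following Python sketch shows the core loop for the L0-regularized least squares case, min ||Ax - b||^2 + lam ||x||_0: select a random working set of `k` coordinates, then solve that block exactly by enumerating all 2^k support patterns (the combinatorial search), holding the remaining coordinates fixed. The random selection rule, block size, and the name `block_l0_least_squares` are illustrative; the paper also considers a greedy working-set strategy.

```python
import itertools
import numpy as np

def block_l0_least_squares(A, b, lam, k=3, iters=50, seed=None):
    """Block decomposition sketch for min ||Ax - b||^2 + lam * ||x||_0."""
    rng = np.random.default_rng(seed)
    n = A.shape[1]
    x = np.zeros(n)
    for _ in range(iters):
        B = rng.choice(n, size=k, replace=False)       # random working set
        r = b - A @ x + A[:, B] @ x[B]                 # residual without block B
        best_val, best_xB = np.inf, np.zeros(k)
        for pattern in itertools.product([False, True], repeat=k):
            pos = [i for i in range(k) if pattern[i]]  # candidate support in B
            xB = np.zeros(k)
            if pos:
                sol, *_ = np.linalg.lstsq(A[:, B[pos]], r, rcond=None)
                xB[pos] = sol                          # exact fit on the support
            val = np.sum((r - A[:, B] @ xB) ** 2) + lam * len(pos)
            if val < best_val:
                best_val, best_xB = val, xB
        x[B] = best_xB                                 # block solved globally
    return x
```

Each block subproblem costs 2^k least-squares solves, which is why the working set must stay small; this is the trade-off between combinatorial search and coordinate descent that the abstract describes.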
We study the convergence of inexact descent algorithms (employing general step sizes) for multiobjective optimization on general Riemannian manifolds (without curvature constraints). Under the assumption of local convexity/quasi-convexity, local/global convergence results are established. On the other hand, without the local convexity/quasi-convexity assumption, but under a Kurdyka-Łojasiewicz-like condition, local/global linear convergence results are presented, which seem new even in the Euclidean setting and sharply improve the corresponding results in [24] when the multiobjective optimization problem reduces to the scalar case. Finally, for the special case in which the inexact descent algorithm employs the Armijo rule, our results sharply improve or extend the corresponding ones in [3,2,38].
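As a reference point for the Armijo-rule special case, here is a minimal Euclidean sketch of the multiobjective Armijo backtracking test (in the Riemannian setting, `x + t * d` would be replaced by a retraction along `d`); the function and parameter names are illustrative.

```python
import numpy as np

def armijo_step(fs, grad_fs, x, d, beta=0.5, sigma=1e-4, max_halvings=50):
    """Multiobjective Armijo backtracking: shrink t until every objective
    satisfies f_i(x + t d) <= f_i(x) + sigma * t * grad f_i(x)^T d."""
    slopes = [g(x) @ d for g in grad_fs]   # directional derivatives at x
    t = 1.0
    for _ in range(max_halvings):
        if all(f(x + t * d) <= f(x) + sigma * t * s
               for f, s in zip(fs, slopes)):
            return t
        t *= beta
    return t

# usage: two convex objectives sharing the descent direction d = -x
# fs = [lambda x: x @ x, lambda x: (x - 1) @ (x - 1)]
# grads = [lambda x: 2 * x, lambda x: 2 * (x - 1)]
# print(armijo_step(fs, grads, x=np.array([2.0, 2.0]), d=np.array([-2.0, -2.0])))
```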