
Multiagent based state transition algorithm for global optimization

Published by: Xiaojun Zhou
Publication date: 2021
Research field: Informatics Engineering
Paper language: English
Author: Xiaojun Zhou





In this paper, a novel multiagent-based state transition optimization algorithm with a linear convergence rate, named MASTA, is constructed. It first generates an initial population randomly and uniformly. Then, it applies the basic state transition algorithm (STA) to the population to generate a new population. After that, it computes the fitness values of all individuals and finds the best individuals in the new population. Moreover, it performs an effective communication operation and updates the population. Through this iterative process, the best solution is found. Experimental results on common benchmark functions and comparisons with several state-of-the-art optimization algorithms show that the proposed MASTA algorithm achieves superior or comparable performance.
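To make the iterative structure concrete, the minimal sketch below mirrors the loop described in the abstract (initialize, transform, evaluate, communicate, update). The basic STA's state transformation operators and the paper's communication operation are not reproduced here; `sta_step` and the communication step are simplified placeholders that only illustrate the control flow.

```python
import numpy as np

def masta(objective, dim, pop_size=20, iters=200, bounds=(-10.0, 10.0), seed=0):
    """Hedged sketch of the MASTA loop described in the abstract.

    `sta_step` and the communication step are illustrative placeholders,
    not the authors' operators.
    """
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    pop = rng.uniform(lo, hi, size=(pop_size, dim))   # random, uniform initial population
    fitness = np.apply_along_axis(objective, 1, pop)
    best = pop[np.argmin(fitness)].copy()

    def sta_step(x):
        # Placeholder state transition: random perturbation biased toward the best state.
        return np.clip(x + rng.standard_normal(dim) + 0.1 * (best - x), lo, hi)

    for _ in range(iters):
        # Apply the (placeholder) STA operator to every agent to form a new population.
        new_pop = np.array([sta_step(x) for x in pop])
        new_fit = np.apply_along_axis(objective, 1, new_pop)

        # Evaluate fitness and keep whichever state is better for each agent.
        improved = new_fit < fitness
        pop[improved], fitness[improved] = new_pop[improved], new_fit[improved]
        best = pop[np.argmin(fitness)].copy()

        # Placeholder communication: pull one randomly chosen agent toward the current best.
        k = rng.integers(pop_size)
        pop[k] = np.clip(0.5 * (pop[k] + best), lo, hi)
        fitness[k] = objective(pop[k])

    i = int(np.argmin(fitness))
    return pop[i].copy(), float(fitness[i])

# Example on the sphere benchmark function
x_best, f_best = masta(lambda x: float(np.sum(x ** 2)), dim=5)
print(x_best, f_best)
```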




Read also

Tao Qian, Lei Dai, Liming Zhang (2021)
A gradient-free deterministic method is developed to solve global optimization problems for Lipschitz continuous functions defined on arbitrary path-wise connected compact sets in Euclidean spaces. The method can be regarded as granular sieving with synchronous analysis in both the domain and range of the objective function. With a straightforward mathematical formulation applicable to both univariate and multivariate objective functions, the global minimum value and all the global minimizers are located through two decreasing sequences of compact sets in, respectively, the domain and range spaces. The algorithm is easy to implement with moderate computational cost. The method is tested against extensive benchmark functions in the literature. The experimental results show remarkable effectiveness and applicability of the algorithm.
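The abstract does not spell out the procedure, but the sieving idea for a Lipschitz function can be illustrated with a minimal one-dimensional sketch: domain cells whose Lipschitz lower bound cannot contain the global minimum are discarded, and the surviving cells are refined, producing decreasing sequences of sets in the domain and range. The known Lipschitz constant `L`, the bisection refinement, and the function `granular_sieve` are illustrative assumptions, not the paper's formulation.

```python
import numpy as np

def granular_sieve(f, a, b, L, tol=1e-4):
    """Hedged 1-D sketch of the granular-sieving idea for a Lipschitz function."""
    cells = [(a, b)]                 # current compact subset of the domain (a union of intervals)
    radius = (b - a) / 2.0           # half-width of every cell in the current round
    while radius > tol:
        centers = np.array([(lo + hi) / 2.0 for lo, hi in cells])
        values = np.array([f(c) for c in centers])
        upper = values.min()         # an upper bound on the global minimum value
        # Discard a cell if even its most optimistic value f(c) - L * radius exceeds `upper`.
        keep = values - L * radius <= upper
        next_cells = []
        for (lo, hi), ok in zip(cells, keep):
            if ok:                   # split surviving cells; the domain sets shrink monotonically
                mid = (lo + hi) / 2.0
                next_cells += [(lo, mid), (mid, hi)]
        cells, radius = next_cells, radius / 2.0
    centers = [(lo + hi) / 2.0 for lo, hi in cells]
    values = [f(c) for c in centers]
    i = int(np.argmin(values))
    return centers[i], values[i], cells   # all global minimizers lie inside the surviving cells

# Example: f(x) = sin(x) + 0.1 x on [-10, 10], whose Lipschitz constant is at most 1.1
x_min, f_min, surviving = granular_sieve(lambda x: np.sin(x) + 0.1 * x, -10.0, 10.0, L=1.1)
print(x_min, f_min, len(surviving))
```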
Aggregation functions largely determine the convergence and diversity performance of multi-objective evolutionary algorithms in decomposition methods. Nevertheless, the traditional Tchebycheff function does not consider the matching relationship between the weight vectors and candidate solutions. In this paper, the concept of matching degree is proposed, which employs the vectorial angles between weight vectors and candidate solutions. Based on the matching degree, a new modified Tchebycheff aggregation function is proposed, which integrates the matching degree into the Tchebycheff aggregation function. Moreover, the proposed decomposition method has the same functionality as the Tchebycheff aggregation function. Based on the proposed decomposition approach, a new multiobjective optimization algorithm, named the decomposition-based multi-objective state transition algorithm, is proposed. Experimental results show that the proposed algorithm is highly competitive with other state-of-the-art multiobjective optimization algorithms.
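As a rough illustration of the ingredients named above, the sketch below computes the classical Tchebycheff value and an angle-based matching degree, and combines them with a penalty weight `theta`. The combination formula and `theta` are assumptions for illustration; the paper's modified Tchebycheff function may differ.

```python
import numpy as np

def matching_degree(fx, w, z_star):
    """Cosine of the angle between the translated objective vector and the weight vector;
    larger values mean the solution is better aligned with the weight direction."""
    d = fx - z_star
    return float(np.dot(d, w) / (np.linalg.norm(d) * np.linalg.norm(w) + 1e-12))

def tchebycheff(fx, w, z_star):
    """Classical Tchebycheff aggregation used in decomposition-based algorithms."""
    return float(np.max(w * np.abs(fx - z_star)))

def modified_tchebycheff(fx, w, z_star, theta=5.0):
    """Hedged sketch: one plausible way to fold the matching degree into the Tchebycheff
    value, penalizing solutions poorly aligned with the weight vector."""
    angle = np.arccos(np.clip(matching_degree(fx, w, z_star), -1.0, 1.0))
    return tchebycheff(fx, w, z_star) + theta * angle

# Example: two candidate objective vectors evaluated for the weight vector w = (0.5, 0.5)
z = np.zeros(2)
w = np.array([0.5, 0.5])
for fx in (np.array([1.0, 1.0]), np.array([0.2, 1.8])):
    print(fx, tchebycheff(fx, w, z), modified_tchebycheff(fx, w, z))
```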
This paper addresses a distributed optimization problem in a communication network where nodes are active sporadically. Each active node applies some learning method to control its action to maximize the global utility function, which is defined as the sum of the local utility functions of the active nodes. We deal with a stochastic optimization problem in which the utility functions are disturbed by some non-additive stochastic process. We consider a more challenging situation where the learning method has to be performed based only on a scalar approximation of the utility function, rather than its closed-form expression, so that the typical gradient descent method cannot be applied. This setting is quite realistic when the network is affected by some stochastic and time-varying process and each node cannot have full knowledge of the network states. We propose a distributed optimization algorithm and prove its almost sure convergence to the optimum. The convergence rate is also derived under the additional assumption that the objective function is strongly concave. Numerical results are presented to justify our claims.
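The paper's distributed algorithm is not reproduced here, but the "scalar feedback only" setting can be illustrated with a generic two-point zeroth-order estimator that a node could run locally on noisy scalar utility evaluations. The function `zeroth_order_ascent` and all of its parameters are illustrative assumptions, not the authors' method.

```python
import numpy as np

def zeroth_order_ascent(utility, x0, steps=500, mu=0.05, lr=0.02, seed=0):
    """Hedged illustration of learning from scalar utility feedback only:
    a two-point random-direction gradient estimator followed by an ascent step."""
    rng = np.random.default_rng(seed)
    x = np.array(x0, dtype=float)
    for _ in range(steps):
        u = rng.standard_normal(x.shape)
        u /= np.linalg.norm(u)
        # Only scalar (possibly noisy) evaluations of the utility are used.
        g = (utility(x + mu * u) - utility(x - mu * u)) / (2.0 * mu) * u
        x += lr * g                  # ascent step toward higher utility
    return x

# Example: noisy concave utility with its maximum at (1, -2)
target = np.array([1.0, -2.0])
noisy_u = lambda a: -np.sum((a - target) ** 2) + 0.01 * np.random.randn()
print(zeroth_order_ascent(noisy_u, np.zeros(2)))
```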
Multiple-input multiple-output (MIMO) detection is a fundamental problem in wireless communications and is strongly NP-hard in general. Massive MIMO has been recognized as a key technology in fifth-generation (5G) and beyond communication networks, which on the one hand can significantly improve communication performance, and on the other hand poses new challenges in solving the corresponding optimization problems due to the large problem size. While various efficient algorithms, such as semidefinite relaxation (SDR) based approaches, have been proposed for solving the small-scale MIMO detection problem, they are not suitable for the large-scale MIMO detection problem due to their high computational complexities. In this paper, we propose an efficient sparse quadratic programming (SQP) relaxation based algorithm for solving the large-scale MIMO detection problem. In particular, we first reformulate the MIMO detection problem as an SQP problem. By dropping the sparse constraint, the resulting relaxation problem shares the same global minimizer with the SQP problem. In sharp contrast to the SDRs for the MIMO detection problem, our relaxation does not contain any (positive semidefinite) matrix variable, and the numbers of variables and constraints in our relaxation are significantly smaller than those in the SDRs, which makes it particularly suitable for large-scale problems. We then propose a projected Newton based quadratic penalty method to solve the relaxation problem, which is guaranteed to converge to the vector of transmitted signals under reasonable conditions. Extensive numerical experiments show that, when applied to large-scale problems, the proposed algorithm achieves better detection performance than a recently proposed generalized power method.
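The SQP relaxation and the projected-Newton quadratic-penalty solver are not given in the abstract. As a hedged illustration of the underlying detection problem, and of why vector-variable relaxations scale better than SDR, the sketch below minimizes the least-squares objective over the box [-1, 1]^n with projected gradient descent and rounds the result to BPSK symbols. This is a stand-in baseline under assumed problem sizes, not the paper's algorithm.

```python
import numpy as np

def mimo_detect_box_relaxation(H, y, iters=200, lr=None):
    """Hedged sketch: detect BPSK symbols (x_i in {-1, +1}) by minimizing
    0.5 * ||y - Hx||^2 over the box [-1, 1]^n with projected gradient descent,
    then rounding. Not the paper's SQP relaxation or projected-Newton method."""
    n = H.shape[1]
    if lr is None:
        lr = 1.0 / (np.linalg.norm(H, 2) ** 2)    # step size from the gradient's Lipschitz constant
    x = np.zeros(n)
    for _ in range(iters):
        grad = H.T @ (H @ x - y)                  # gradient of the least-squares objective
        x = np.clip(x - lr * grad, -1.0, 1.0)     # projection onto the box [-1, 1]^n
    return np.sign(x)                             # round back to the BPSK constellation

# Example: an assumed 128 x 64 real-valued channel with BPSK symbols and small noise
rng = np.random.default_rng(1)
H = rng.standard_normal((128, 64)) / 8.0
x_true = rng.choice([-1.0, 1.0], size=64)
y = H @ x_true + 0.01 * rng.standard_normal(128)
x_hat = mimo_detect_box_relaxation(H, y)
print("symbol errors:", int(np.sum(x_hat != x_true)))
```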
This technical note proposes decentralized partial-consensus optimization with inequality constraints, and a continuous-time algorithm based on multiple interconnected recurrent neural networks (RNNs) is derived to solve the resulting optimization problems. First, the partial-consensus matrix, originating from the Laplacian matrix, is constructed to handle the partial-consensus constraints. In addition, using non-smooth analysis and a Lyapunov-based technique, the convergence of the designed algorithm is guaranteed. Finally, the effectiveness of the obtained results is demonstrated through several examples.