A gradient-free deterministic method is developed to solve global optimization problems for Lipschitz continuous functions defined on arbitrary path-connected compact sets in Euclidean spaces. The method can be regarded as granular sieving with synchronous analysis in both the domain and the range of the objective function. With a straightforward mathematical formulation applicable to both univariate and multivariate objective functions, the global minimum value and all global minimizers are located through two decreasing sequences of compact sets in the domain and range spaces, respectively. The algorithm is easy to implement at moderate computational cost. The method is tested on an extensive set of benchmark functions from the literature, and the experimental results show the remarkable effectiveness and applicability of the algorithm.
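As a rough illustration of the sieving idea, the one-dimensional sketch below prunes subintervals whose Lipschitz lower bound cannot beat the best value found so far and bisects the survivors. This is a generic branch-and-bound scheme in the same spirit as the abstract, not the paper's exact construction; the function name, the tolerance, and the assumption of a known Lipschitz constant `L` are all illustrative.

```python
def lipschitz_sieve(f, a, b, L, tol=1e-6):
    """Sieve [a, b] for the global minimum of an L-Lipschitz function f.

    Keeps only subintervals whose Lipschitz lower bound
    (midpoint value minus L * half-width) can still beat the
    best value seen, then bisects the survivors.
    """
    intervals = [(a, b)]
    best_x, best_f = a, f(a)
    while intervals:
        survivors = []
        for lo, hi in intervals:
            mid = 0.5 * (lo + hi)
            fm = f(mid)
            if fm < best_f:
                best_x, best_f = mid, fm
            # Lipschitz lower bound on this subinterval; prune if it
            # cannot improve on the incumbent, stop refining at tol.
            if fm - L * 0.5 * (hi - lo) <= best_f and hi - lo > tol:
                survivors.append((lo, mid))
                survivors.append((mid, hi))
        intervals = survivors
    return best_x, best_f
```

Because the lower bound is valid on every subinterval, the interval containing a global minimizer is never discarded, which mirrors the shrinking domain/range sets described in the abstract.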
In this paper, a novel multiagent-based state transition optimization algorithm with a linear convergence rate, named MASTA, is constructed. It first generates an initial population randomly and uniformly. It then applies the basic state transition algorithm (STA) to the population to generate a new population, computes the fitness values of all individuals, and identifies the best individual in the new population. Next, it performs an effective communication operation and updates the population. Iterating this process yields the best solution found. Experimental results on common benchmark functions and comparisons with several state-of-the-art optimization algorithms show that the proposed MASTA algorithm achieves superior or at least comparable performance.
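The iterative loop described above (initialize, transform, evaluate, communicate) can be sketched schematically. The shrinking Gaussian perturbation below is a simplified stand-in for STA's transformation operators, and the halving-toward-best communication rule is an illustrative choice, not MASTA's actual operator; all parameter values are assumptions.

```python
import random

def masta_sketch(f, dim, pop_size=20, iters=200, alpha=1.0, seed=0):
    """Schematic multiagent loop: random initial population, a
    perturbation step standing in for the basic STA operators, then a
    communication step that pulls agents toward the best individual.
    Operator details are illustrative, not MASTA's."""
    rng = random.Random(seed)
    pop = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(pop_size)]
    best = min(pop, key=f)
    for t in range(iters):
        step = alpha * 0.95 ** t                           # shrinking radius
        new_pop = []
        for x in pop:
            cand = [xi + rng.gauss(0, step) for xi in x]   # STA-like move
            new_pop.append(cand if f(cand) < f(x) else x)  # greedy acceptance
        best = min(new_pop + [best], key=f)
        # communication: each agent mixes its state with the current best
        pop = [[0.5 * (xi + bi) for xi, bi in zip(x, best)]
               for x in new_pop]
    return best, f(best)
```

On a smooth test function such as the sphere function, this loop contracts the population around the incumbent while the perturbation radius decays.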
We consider the zeroth-order optimization problem in the huge-scale setting, where the dimension of the problem is so large that performing even basic vector operations on the decision variables is infeasible. In this paper, we propose a novel algorithm, coined ZO-BCD, that exhibits favorable overall query complexity and a much smaller per-iteration computational complexity. In addition, we discuss how the memory footprint of ZO-BCD can be reduced even further by the clever use of circulant measurement matrices. As an application of our new method, we propose crafting adversarial attacks on neural-network-based classifiers in a wavelet domain, which can result in problem dimensions of over 1.7 million. In particular, we show that crafting adversarial examples against audio classifiers in a wavelet domain can achieve a state-of-the-art attack success rate of 97.9%.
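A minimal sketch of the block-coordinate zeroth-order idea: at each step a random coordinate block is chosen and its partial gradient is estimated purely from function queries via forward differences, so only that block is ever touched. The sparse sampling and circulant measurement matrices that give the paper its memory savings are omitted, and all names and step sizes here are illustrative.

```python
import random

def zo_bcd_sketch(f, x0, block_size=2, iters=300, mu=1e-4, lr=0.2, seed=0):
    """Illustrative zeroth-order block coordinate descent: estimate the
    partial gradient of a random block by forward finite differences
    (function queries only), then update only that block."""
    rng = random.Random(seed)
    x = list(x0)
    n = len(x)
    for _ in range(iters):
        block = rng.sample(range(n), block_size)
        fx = f(x)
        grad = {}
        for i in block:                  # query-only gradient estimate
            x_pert = list(x)
            x_pert[i] += mu
            grad[i] = (f(x_pert) - fx) / mu
        for i in block:                  # update only the chosen block
            x[i] -= lr * grad[i]
    return x
```

Each iteration costs `block_size + 1` function queries and touches only `block_size` coordinates, which is the source of the small per-iteration complexity highlighted in the abstract.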
Sparse optimization is a central problem in machine learning and computer vision. However, this problem is inherently NP-hard and thus difficult to solve in general. Combinatorial search methods find the global optimum but are confined to small problems, while coordinate descent methods are efficient but often suffer from poor local minima. This paper considers a new block decomposition algorithm that combines the effectiveness of combinatorial search methods with the efficiency of coordinate descent methods. Specifically, we use a random and/or a greedy strategy to select a subset of coordinates as the working set, and then perform a global combinatorial search over the working set based on the original objective function. We show that our method finds stronger stationary points than the coordinate-wise optimization method of Beck et al. In addition, we establish the convergence rate of our algorithm. Our experiments on sparse-regularized and sparsity-constrained least squares problems demonstrate that our method achieves state-of-the-art accuracy; for example, it generally outperforms the well-known greedy pursuit method.
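The working-set step can be illustrated for the sparse-regularized objective f(x) = ||Ax - b||^2 + lam * nnz(x): fix all coordinates outside a small working set, enumerate every on/off support pattern inside it, and solve the resulting small least-squares problem for each pattern. This is a hedged sketch of the idea, not the paper's exact procedure (in particular the paper's selection strategies and convergence machinery are omitted), and it assumes NumPy is available.

```python
import itertools
import numpy as np

def block_combinatorial_step(A, b, x, lam, work_set):
    """One working-set step for f(x) = ||Ax - b||^2 + lam * nnz(x):
    enumerate all support patterns inside work_set, solve the small
    least-squares problem for each, and keep the best candidate."""
    def obj(v):
        return np.sum((A @ v - b) ** 2) + lam * np.count_nonzero(v)

    best_x, best_f = x.copy(), obj(x)
    # residual with the working-set coordinates zeroed out
    mask = np.isin(np.arange(len(x)), work_set)
    r = b - A @ np.where(mask, 0.0, x)
    patterns = itertools.chain.from_iterable(
        itertools.combinations(work_set, k)
        for k in range(len(work_set) + 1))
    for S in patterns:
        cand = x.copy()
        cand[list(work_set)] = 0.0
        if S:
            sol, *_ = np.linalg.lstsq(A[:, list(S)], r, rcond=None)
            cand[list(S)] = sol
        if obj(cand) < best_f:
            best_x, best_f = cand.copy(), obj(cand)
    return best_x, best_f
```

With a working set of size k the step solves 2^k tiny least-squares problems, which is exactly the trade-off the abstract describes: exhaustive search, but only over a small block at a time.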
The paper proves convergence to global optima for a class of distributed algorithms for nonconvex optimization in network-based multi-agent settings. Agents are permitted to communicate over a time-varying undirected graph. Each agent is assumed to possess a local objective function (smooth, but possibly nonconvex), and the paper considers algorithms for optimizing the sum of these functions. A distributed algorithm of the consensus+innovations type is proposed that relies on first-order information at the agent level. Under appropriate conditions on network connectivity and the cost objective, convergence to the set of global optima is achieved via an annealing-type approach, with decaying Gaussian noise independently added to each agent's update step. It is shown that the proposed algorithm converges in probability to the set of global minima of the sum function.
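The shape of a consensus+innovations update with annealing can be sketched for scalar agent states: a consensus term pulling toward graph neighbors, a local gradient "innovation" term, and additive Gaussian noise with decaying variance. The step-size and noise schedules below are illustrative choices only, not the paper's precise conditions for convergence.

```python
import random

def consensus_annealing_sketch(grads, neighbors, x0, iters=2000, seed=0):
    """Schematic consensus+innovations iteration with annealing noise
    for scalar agent states.  grads[i] is agent i's local gradient,
    neighbors[i] its (undirected) neighbor list."""
    rng = random.Random(seed)
    x = list(x0)
    for k in range(1, iters + 1):
        alpha = 1.0 / (k + 10)        # innovation (gradient) step size
        beta = 1.0 / (k + 10) ** 0.5  # consensus weight, decays slower
        gamma = 1.0 / (k + 10)        # decaying annealing-noise level
        x = [x[i]
             + beta * sum(x[j] - x[i] for j in neighbors[i])  # consensus
             - alpha * grads[i](x[i])                         # innovation
             + gamma * rng.gauss(0.0, 1.0)                    # annealing
             for i in range(len(x))]
    return x
```

With quadratic local costs f_i(x) = (x - c_i)^2 on a path graph, the agents drift toward the minimizer of the sum, the mean of the c_i, while the injected noise fades.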
In this paper, the optimization problem of the supervised distance preserving projection (SDPP) for data dimension reduction (DR) is considered, which is equivalent to a rank-constrained least squares semidefinite programming problem (RCLSSDP). To overcome the difficulties caused by the rank constraint, a difference-of-convex (DC) regularization strategy is employed, which transforms the RCLSSDP into a series of least squares semidefinite programs with DC regularization (DCLSSDP). An inexact proximal DC algorithm with a sieving strategy (s-iPDCA) is proposed for solving the DCLSSDP, whose subproblems are solved by the accelerated block coordinate descent (ABCD) method. Convergence analysis shows that the sequence generated by s-iPDCA globally converges to a stationary point of the corresponding DC problem. To demonstrate the efficiency of the proposed algorithm for solving the RCLSSDP, s-iPDCA is compared with the classical proximal DC algorithm (PDCA) and the PDCA with extrapolation (PDCAe) on DR experiments with the COIL-20 database; the results show that s-iPDCA outperforms both in solving efficiency. Moreover, DR experiments for face recognition on the ORL and YaleB databases demonstrate that the rank-constrained kernel SDPP (RCKSDPP) is effective and competitive in recognition accuracy compared with kernel semidefinite SDPP (KSSDPP) and kernel principal component analysis (KPCA).
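The core DC step underlying algorithms of this family is simple: write f = g - h with g and h convex, linearize the concave part -h at the current iterate, and minimize the resulting convex majorant. The sketch below shows only this basic DCA iteration; s-iPDCA adds a proximal term, inexact subproblem solves via ABCD, and the sieving strategy on top of it, none of which are reproduced here.

```python
def dca(g_argmin_linear, h_subgrad, x0, iters=50):
    """Basic DC algorithm for f = g - h (g, h convex, scalar sketch):
    take y in the subdifferential of h at x, then minimize the convex
    majorant g(x) - y * x exactly via the supplied oracle."""
    x = x0
    for _ in range(iters):
        y = h_subgrad(x)         # subgradient of the subtracted part
        x = g_argmin_linear(y)   # argmin_x g(x) - y * x
    return x
```

For example, for f(x) = x^2 - |x| (so g(x) = x^2, h(x) = |x|), the subproblem argmin_x x^2 - y*x has the closed form y/2, and the iteration lands on a stationary point of f at x = 1/2 or x = -1/2 depending on the starting sign.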