
Many machine learning algorithms minimize a regularized risk, and stochastic optimization is widely used for this task. When working with massive data, it is desirable to perform stochastic optimization in parallel. Unfortunately, many existing stochastic optimization algorithms cannot be parallelized efficiently. In this paper we show that one can rewrite the regularized risk minimization problem as an equivalent saddle-point problem, and propose an efficient distributed stochastic optimization (DSO) algorithm. We prove the algorithm's rate of convergence; remarkably, our analysis shows that the algorithm scales almost linearly with the number of processors. We also verify with empirical evaluations that the proposed algorithm is competitive with other parallel, general-purpose stochastic and batch optimization algorithms for regularized risk minimization.
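For concreteness, the kind of saddle-point reformulation referred to here can be illustrated, under a standard linear-model setup (a sketch of the general idea, not necessarily the exact form used in the paper), by dualizing the loss $\ell$ through its Fenchel conjugate $\ell^{\star}$:
$$ \min_{w} \; \lambda\,\Omega(w) + \frac{1}{n}\sum_{i=1}^{n} \ell(\langle w, x_i\rangle, y_i) \;=\; \min_{w}\,\max_{\alpha\in\mathbb{R}^{n}} \; \lambda\,\Omega(w) + \frac{1}{n}\sum_{i=1}^{n}\bigl(\alpha_i\,\langle w, x_i\rangle - \ell^{\star}(\alpha_i, y_i)\bigr), $$
which holds whenever $\ell$ is closed and convex in its first argument. Because the coupling term $\alpha_i\,\langle w, x_i\rangle$ decomposes over (example, coordinate) pairs, stochastic primal-dual updates on such a formulation can be partitioned across processors.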
Structured output prediction is an important machine learning problem both in theory and practice, and the max-margin Markov network (M$^3$N) is an effective approach. All state-of-the-art algorithms for optimizing M$^3$N objectives take at least $O(1/\epsilon)$ iterations to find an $\epsilon$-accurate solution. Recent results in structured optimization suggest that faster rates are possible by exploiting the structure of the objective function. Towards this end \citet{Nesterov05} proposed an excessive gap reduction technique based on Euclidean projections which converges in $O(1/\sqrt{\epsilon})$ iterations on strongly convex functions. Unfortunately, when applied to M$^3$Ns this approach does not admit graphical model factorization which, as in many existing algorithms, is crucial for keeping the cost per iteration tractable. In this paper, we present a new excessive gap reduction technique based on Bregman projections which admits graphical model factorization naturally, and converges in $O(1/\sqrt{\epsilon})$ iterations. Compared with existing algorithms, the convergence rate of our method has better dependence on $\epsilon$ and other parameters of the problem; moreover, our method can be easily kernelized.
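For reference, the M$^3$N objective in question is the regularized structured hinge loss (written here in generic notation, with joint feature map $\phi$ and label loss $\Delta$, which may differ slightly from the paper's):
$$ \min_{w}\; \frac{\lambda}{2}\|w\|^{2} + \frac{1}{n}\sum_{i=1}^{n}\max_{y}\Bigl(\Delta(y_i, y) - \bigl\langle w,\, \phi(x_i, y_i) - \phi(x_i, y)\bigr\rangle\Bigr). $$
The inner maximization ranges over exponentially many structured labels $y$, so any practical method must exploit graphical model factorization (e.g. dynamic programming over cliques) to evaluate it; this is why preserving that factorization under the smoothing step matters.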
Given $n$ points in a $d$-dimensional Euclidean space, the Minimum Enclosing Ball (MEB) problem is to find the ball with the smallest radius which contains all $n$ points. We give an $O(nd\mathcal{Q}/\sqrt{\epsilon})$ approximation algorithm for producing an enclosing ball whose radius is at most $\epsilon$ away from the optimum (where $\mathcal{Q}$ is an upper bound on the norm of the points). This improves existing results using \emph{coresets}, which yield an $O(nd/\epsilon)$ greedy algorithm. Finding the Minimum Enclosing Convex Polytope (MECP) is a related problem wherein a convex polytope of a fixed shape is given and the aim is to find the smallest magnification of the polytope which encloses the given points. For this problem we present an $O(mnd\mathcal{Q}/\epsilon)$ approximation algorithm, where $m$ is the number of faces of the polytope. Our algorithms borrow heavily from convex duality and recently developed techniques in non-smooth optimization, and are in contrast with existing methods which rely on geometric arguments. In particular, we specialize the excessive gap framework of \citet{Nesterov05a} to obtain our results.
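To illustrate the duality at play, a standard derivation (sketched here; the paper's smoothed formulation may differ) writes the MEB problem as $\min_{c}\max_{i}\|x_i - c\|^{2}$, lifts the inner maximum to the probability simplex $\Delta_n$, and minimizes over $c$ in closed form to obtain the concave dual
$$ \max_{\alpha\in\Delta_n}\; \sum_{i=1}^{n}\alpha_i\|x_i\|^{2} - \Bigl\|\sum_{i=1}^{n}\alpha_i x_i\Bigr\|^{2}, $$
with the optimal center recovered as $c^{\star} = \sum_i \alpha_i^{\star} x_i$. The greedy coreset method can be viewed as a Frank-Wolfe-type scheme on this dual, while smoothing the non-smooth inner maximum is where excessive-gap-style techniques apply.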
