
Convex optimization on Banach Spaces

Posted by Vladimir Temlyakov
Publication date: 2014
Research field: Mathematical statistics
Paper language: English





Greedy algorithms which use only function evaluations are applied to convex optimization in a general Banach space $X$. Along with algorithms that use exact evaluations, algorithms with approximate evaluations are treated. A priori upper bounds for the convergence rate of the proposed algorithms are given. These bounds depend on the smoothness of the objective function and the sparsity or compressibility (with respect to a given dictionary) of a point in $X$ where the minimum is attained.
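To make the "function evaluations only" flavor of these methods concrete, here is a minimal, hypothetical sketch of a zeroth-order greedy loop: at each iteration it tries every dictionary element with a small grid of coefficients and keeps the combination that lowers the objective the most. It works in finite dimensions with a finite dictionary; the function name, the coefficient grid, and the coordinate dictionary are illustrative assumptions, not the algorithms analyzed in the paper.

```python
import numpy as np

def zeroth_order_greedy(E, dictionary, n_iter=50, step_grid=(1.0, 0.5, 0.25, 0.1, 0.05)):
    """Toy greedy minimization of a convex function E using only function
    evaluations: at each iteration, try every dictionary element with a few
    candidate coefficients and keep the best improvement.
    (Hypothetical illustration, not the algorithms analyzed in the paper.)"""
    x = np.zeros(dictionary.shape[1])   # start at 0; dictionary rows are elements
    for _ in range(n_iter):
        best_val, best_x = E(x), x
        for g in dictionary:            # search over dictionary elements
            for c in step_grid:         # and over a small grid of coefficients
                for s in (+c, -c):
                    cand = x + s * g
                    val = E(cand)       # only function evaluations are used
                    if val < best_val:
                        best_val, best_x = val, cand
        if best_val >= E(x):            # no improving element: stop
            break
        x = best_x
    return x

# Toy example: least squares whose minimum is attained at a 2-sparse point
rng = np.random.default_rng(0)
A = rng.standard_normal((20, 10))
x_star = np.zeros(10); x_star[[2, 7]] = [1.5, -2.0]
b = A @ x_star
E = lambda x: np.sum((A @ x - b) ** 2)
D = np.eye(10)                          # coordinate dictionary
x_hat = zeroth_order_greedy(E, D)
print(E(x_hat))
```

The bounds in the paper depend on the smoothness of the objective and on how compressible the minimizer is with respect to the dictionary; the toy problem above is set up so that the minimum is attained at a 2-sparse point.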




Read also

215 - V.N. Temlyakov 2012
This paper is a follow-up to the authors' previous paper on convex optimization. In that paper we began the process of adjusting greedy-type algorithms from nonlinear approximation for finding sparse solutions of convex optimization problems. There we modified the three greedy algorithms most popular in nonlinear approximation in Banach spaces -- the Weak Chebyshev Greedy Algorithm, the Weak Greedy Algorithm with Free Relaxation, and the Weak Relaxed Greedy Algorithm -- for solving convex optimization problems. We continue to study sparse approximate solutions to convex optimization problems. It is known that in many engineering applications researchers are interested in an approximate solution of an optimization problem as a linear combination of elements from a given system of elements. There is an increasing interest in building such sparse approximate solutions using different greedy-type algorithms. In this paper we concentrate on greedy algorithms that provide expansions, which means that the approximant at the $m$th iteration is equal to the approximant from the previous ($(m-1)$th) iteration plus one element from the dictionary with an appropriate coefficient. The problem of greedy expansions of elements of a Banach space is well studied in nonlinear approximation theory. At first glance, the setting of the problem of expanding a given element and the setting of the expansion problem in optimization look very different. However, it turns out that the same technique can be used for solving both problems. We show how the technique developed in nonlinear approximation theory, in particular the greedy expansions technique, can be adjusted for finding a sparse solution of an optimization problem, given by an expansion with respect to a given dictionary.
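As a rough illustration of the expansion structure described above (one dictionary element added per iteration, with a coefficient), here is a hypothetical sketch for a smooth convex objective in finite dimensions: the element is chosen greedily by the size of its inner product with the gradient, and the coefficient by a simple one-dimensional search. The paper's Weak-type algorithms and their coefficient rules differ in detail; everything named here is an assumption for illustration.

```python
import numpy as np

def greedy_expansion(E, grad_E, dictionary, n_iter=100):
    """Sketch of a greedy expansion for minimizing a smooth convex E:
    x_m = x_{m-1} + c_m * g_m, where g_m is the dictionary element most
    correlated with the gradient at x_{m-1}, and c_m comes from a crude
    1-D search.  Illustrative only; not the paper's exact algorithms."""
    x = np.zeros(dictionary.shape[1])
    for _ in range(n_iter):
        grad = grad_E(x)
        # greedy selection: element with the largest |<grad, g>|
        scores = dictionary @ grad
        g = dictionary[np.argmax(np.abs(scores))]
        # one-dimensional search over the coefficient c_m
        cs = np.linspace(-2.0, 2.0, 81)
        vals = [E(x + c * g) for c in cs]
        x = x + cs[int(np.argmin(vals))] * g   # add exactly one element per step
    return x
```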
219 - Piotr Mikusinski 2014
The purpose of this article is to present the construction and basic properties of the general Bochner integral. The approach presented here is based on ideas from the book The Bochner Integral by J. Mikusinski, where the integral is presented for functions defined on $\mathbb{R}^N$. In this article we present a more general and simplified construction of the Bochner integral on abstract measure spaces. An extension of the construction to functions with values in a locally convex space is also considered.
Recent work has shown how to embed differentiable optimization problems (that is, problems whose solutions can be backpropagated through) as layers within deep learning architectures. This method provides a useful inductive bias for certain problems, but existing software for differentiable optimization layers is rigid and difficult to apply to new settings. In this paper, we propose an approach to differentiating through disciplined convex programs, a subclass of convex optimization problems used by domain-specific languages (DSLs) for convex optimization. We introduce disciplined parametrized programming, a subset of disciplined convex programming, and we show that every disciplined parametrized program can be represented as the composition of an affine map from parameters to problem data, a solver, and an affine map from the solver's solution to a solution of the original problem (a new form we refer to as affine-solver-affine form). We then demonstrate how to efficiently differentiate through each of these components, allowing for end-to-end analytical differentiation through the entire convex program. We implement our methodology in version 1.1 of CVXPY, a popular Python-embedded DSL for convex optimization, and additionally implement differentiable layers for disciplined convex programs in PyTorch and TensorFlow 2.0. Our implementation significantly lowers the barrier to using convex optimization problems in differentiable programs. We present applications in linear machine learning models and in stochastic control, and we show that our layer is competitive (in execution time) with specialized differentiable solvers from past work.
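As a concrete illustration of such a layer, the sketch below follows the documented PyTorch interface of the cvxpylayers package, which, to the best of my knowledge, is the companion implementation of this approach: it builds a disciplined parametrized program in CVXPY, wraps it as a layer, and backpropagates through its solution. It assumes cvxpy, torch, and cvxpylayers are installed.

```python
import cvxpy as cp
import torch
from cvxpylayers.torch import CvxpyLayer

# A small disciplined parametrized program: non-negative L1 regression
n, m = 2, 3
x = cp.Variable(n)
A = cp.Parameter((m, n))
b = cp.Parameter(m)
constraints = [x >= 0]
objective = cp.Minimize(0.5 * cp.pnorm(A @ x - b, p=1))
problem = cp.Problem(objective, constraints)
assert problem.is_dpp()          # required for the affine-solver-affine form

# Wrap the problem as a differentiable PyTorch layer
cvxpylayer = CvxpyLayer(problem, parameters=[A, b], variables=[x])
A_tch = torch.randn(m, n, requires_grad=True)
b_tch = torch.randn(m, requires_grad=True)

# Forward pass: solve the convex program for the given parameter values
solution, = cvxpylayer(A_tch, b_tch)

# Backward pass: gradient of the sum of the solution w.r.t. A and b
solution.sum().backward()
```

Because the problem is DPP-compliant, it admits the affine-solver-affine decomposition, so gradients with respect to the parameters A and b are obtained through ordinary autograd.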
We show that for acylindrically hyperbolic groups $\Gamma$ (with no nontrivial finite normal subgroups) and an arbitrary unitary representation $\rho$ of $\Gamma$ in a (nonzero) uniformly convex Banach space, the vector space $H^2_b(\Gamma;\rho)$ is infinite dimensional. The result was known for the regular representations on $\ell^p(\Gamma)$ with $1<p<\infty$ by a different argument. But our result is new even for a non-abelian free group in this great generality for representations, and the case of acylindrically hyperbolic groups also follows as an application.
79 - Jiaming Xu, Kuang Xu, Dana Yang 2021
Online convex optimization is a framework in which a learner sequentially queries an external data source in order to arrive at the optimal solution of a convex function. The paradigm has gained significant popularity recently thanks to its scalability in large-scale optimization and machine learning. The repeated interactions, however, expose the learner to privacy risks from an eavesdropping adversary that observes the submitted queries. In this paper, we study how to optimally obfuscate the learner's queries in first-order online convex optimization, so that the learned optimal value is provably difficult for the eavesdropping adversary to estimate. We consider two formulations of learner privacy: a Bayesian formulation in which the convex function is drawn randomly, and a minimax formulation in which the function is fixed and the adversary's probability of error is measured with respect to a minimax criterion. We show that, if the learner wants to ensure that the probability of accurate prediction by the adversary is kept below $1/L$, then the overhead in query complexity is additive in $L$ in the minimax formulation, but multiplicative in $L$ in the Bayesian formulation. Compared to existing learner-private sequential learning models with binary feedback, our results apply to the significantly richer family of general convex functions with full-gradient feedback. Our proofs are largely enabled by tools from the theory of Dirichlet processes, as well as more sophisticated lines of analysis aimed at measuring the amount of information leakage under a full-gradient oracle.
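For orientation, the sketch below shows only the query model under discussion: a learner running projected gradient descent against a full-gradient oracle, where every submitted query point is visible to an eavesdropper. It deliberately contains no obfuscation mechanism; the obfuscation strategies and the $1/L$ trade-offs are the subject of the paper itself, and all names here are illustrative assumptions.

```python
import numpy as np

def first_order_query_loop(grad_oracle, x0, n_queries=100, step=0.1, radius=1.0):
    """Minimal sketch of first-order convex optimization as a query process:
    the learner repeatedly submits a point and receives a full gradient from
    an external oracle.  Every submitted point is what the adversary observes;
    no query obfuscation is performed here."""
    x = np.asarray(x0, dtype=float)
    queries = []
    for _ in range(n_queries):
        queries.append(x.copy())        # visible to the eavesdropping adversary
        g = grad_oracle(x)              # full-gradient feedback from the oracle
        x = x - step * g                # gradient step
        nrm = np.linalg.norm(x)
        if nrm > radius:                # project back onto the feasible ball
            x = x * (radius / nrm)
    return x, queries
```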