Greedy algorithms that use only function evaluations are applied to convex optimization in a general Banach space $X$. Along with algorithms that use exact evaluations, algorithms with approximate evaluations are treated. A priori upper bounds for the convergence rate of the proposed algorithms are given. These bounds depend on the smoothness of the objective function and on the sparsity or compressibility (with respect to a given dictionary) of a point in $X$ where the minimum is attained.
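As a concrete illustration of an evaluation-only greedy scheme, here is a minimal finite-dimensional sketch ($\mathbb{R}^n$ standing in for the Banach space $X$): at each step the algorithm evaluates the objective along every dictionary element over a small grid of step sizes and keeps the candidate that lowers it the most. The dictionary, objective, and step grid below are illustrative assumptions, not the paper's algorithms.

    import numpy as np

    def greedy_minimize(f, dictionary, n_iter=50):
        """Minimize convex f using only function evaluations.

        dictionary: array of shape (num_atoms, dim), one atom per row.
        Each iteration evaluates f along every atom over a dyadic step
        grid (both signs) and keeps the best improving candidate.
        """
        steps = [2.0 ** (-k) for k in range(10)]
        x = np.zeros(dictionary.shape[1])
        fx = f(x)
        for _ in range(n_iter):
            best = None
            for g in dictionary:
                for s in steps:
                    for cand in (x + s * g, x - s * g):
                        fc = f(cand)
                        if fc < fx:
                            fx, best = fc, cand
            if best is None:      # no evaluation improved f; stop
                return x, fx
            x = best
        return x, fx

    # Example: least squares with the coordinate dictionary.
    rng = np.random.default_rng(0)
    A, b = rng.normal(size=(20, 8)), rng.normal(size=20)
    x, fx = greedy_minimize(lambda x: np.sum((A @ x - b) ** 2), np.eye(8))

With the coordinate dictionary, each step changes a single coordinate, so the iterates remain sparse for as long as only a few atoms have been selected.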
This paper is a follow-up to the authors' previous paper on convex optimization. In that paper we began adapting greedy-type algorithms from nonlinear approximation to the problem of finding sparse solutions of convex optimization problems. We modif…
The purpose of this article is to present the construction and basic properties of the general Bochner integral. The approach presented here is based on ideas from the book The Bochner Integral by J. Mikusinski, where the integral is presented for…
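For orientation, the series-based definition at the core of Mikusinski's approach can be stated as follows (for functions with values in a Banach space $E$ and a measure $\mu$; the notation here is ours, not necessarily the article's). A function $f$ is Bochner integrable if there exist simple functions $f_n$ such that

$$ \sum_{n=1}^{\infty}\int \|f_n\|\,d\mu < \infty \qquad\text{and}\qquad f(x)=\sum_{n=1}^{\infty} f_n(x) \ \text{ whenever } \ \sum_{n=1}^{\infty}\|f_n(x)\| < \infty, $$

in which case one defines $\int f\,d\mu = \sum_{n=1}^{\infty}\int f_n\,d\mu$; the value is independent of the chosen sequence $(f_n)$, which makes the definition consistent.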
Recent work has shown how to embed differentiable optimization problems (that is, problems whose solutions can be backpropagated through) as layers within deep learning architectures. This method provides a useful inductive bias for certain problems…
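A minimal sketch of the idea (not the paper's method): take an inner problem with a closed-form solution, the unconstrained quadratic $\min_x \frac{1}{2}x^\top Q x - p^\top x$, whose minimizer $x^* = Q^{-1}p$ is computed with a differentiable linear solve, so gradients reach both the layer's parameters and its input. The class and parameter names below are hypothetical.

    import torch

    class QuadraticArgminLayer(torch.nn.Module):
        """Forward pass solves argmin_x 0.5*x^T Q x - p^T x.

        The minimizer x* = Q^{-1} p is computed with torch.linalg.solve,
        which autograd differentiates, so a downstream loss can be
        backpropagated through the inner optimization problem.
        """
        def __init__(self, dim):
            super().__init__()
            # Parametrize Q = L L^T + eps*I so it stays positive definite.
            self.L = torch.nn.Parameter(torch.eye(dim))
            self.eps = 1e-3

        def forward(self, p):
            dim = p.shape[-1]
            Q = self.L @ self.L.T + self.eps * torch.eye(dim)
            return torch.linalg.solve(Q, p)

    layer = QuadraticArgminLayer(dim=3)
    p = torch.randn(3, requires_grad=True)
    loss = layer(p).pow(2).sum()   # downstream loss on the inner solution
    loss.backward()                # gradients flow to p and layer.L

Real constrained problems require implicit differentiation of the optimality (KKT) conditions rather than a closed form, but the interface is the same: the layer maps parameters to the solution of an optimization problem.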
We show that for acylindrically hyperbolic groups $\Gamma$ (with no nontrivial finite normal subgroups) and an arbitrary unitary representation $\rho$ of $\Gamma$ in a (nonzero) uniformly convex Banach space, the vector space $H^2_b(\Gamma;\rho)$ is infinite…
Online convex optimization is a framework in which a learner sequentially queries an external data source in order to arrive at the optimal solution of a convex function. The paradigm has recently gained significant popularity thanks to its scalability…
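The canonical algorithm in this framework is online (projected) gradient descent; below is a minimal sketch over an $\ell_2$ ball, where at round $t$ the learner plays $x_t$, receives the gradient of that round's convex loss at $x_t$, and updates with step size $\eta_t \propto 1/\sqrt{t}$, which yields $O(\sqrt{T})$ regret for bounded gradients. The loss stream in the example is an illustrative assumption.

    import numpy as np

    def online_gradient_descent(grad_stream, dim, radius=1.0):
        """Online projected gradient descent over the L2 ball of given radius.

        grad_stream yields, per round, a function returning the gradient
        of that round's convex loss at the current iterate.
        """
        x = np.zeros(dim)
        for t, grad in enumerate(grad_stream, start=1):
            x = x - (radius / np.sqrt(t)) * grad(x)
            norm = np.linalg.norm(x)
            if norm > radius:          # project back onto the ball
                x *= radius / norm
            yield x

    # Example: quadratic losses f_t(x) = ||x - z_t||^2 revealed one by one.
    rng = np.random.default_rng(0)
    stream = ((lambda x, z=rng.normal(size=2): 2.0 * (x - z)) for _ in range(100))
    iterates = list(online_gradient_descent(stream, dim=2))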