Lipschitz and Comparator-Norm Adaptivity in Online Learning


Abstract (English)

We study Online Convex Optimization in the unbounded setting, where neither predictions nor gradients are constrained. The goal is to adapt simultaneously to the sequence of gradients and to the comparator. We first develop parameter-free and scale-free algorithms for a simplified setting with hints. We present t
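To make the scale-free notion concrete, the following is a minimal sketch of a standard scale-free baseline (online gradient descent with AdaGrad-style step sizes on a bounded interval). This is illustrative only and is not the algorithm developed in the paper; the one-dimensional interval domain and the `diameter` parameter are assumptions made for the example.

```python
import math

def scale_free_ogd(grads, diameter=1.0):
    """Scale-free online gradient descent on the interval [-diameter, diameter].

    Step size eta_t = diameter / sqrt(sum of squared gradients seen so far),
    so the produced iterates are invariant to rescaling every gradient by
    the same constant. This is a textbook baseline, NOT the paper's method.
    """
    w = 0.0          # current prediction
    sum_sq = 0.0     # running sum of squared gradients
    iterates = []
    for g in grads:
        iterates.append(w)          # play w, then observe gradient g
        sum_sq += g * g
        if sum_sq > 0:
            eta = diameter / math.sqrt(sum_sq)
            # gradient step, projected back onto the interval
            w = max(-diameter, min(diameter, w - eta * g))
    return iterates
```

Scale-freeness here means that feeding in gradients multiplied by any constant c > 0 leaves the iterates unchanged, since eta_t shrinks by exactly 1/c.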
