We investigate the problem of online learning, which has gained significant attention in recent years due to its applicability in a wide range of fields, from machine learning to game theory. Specifically, we study the online optimization of mixable loss functions in a dynamic environment. We introduce online mixture schemes that asymptotically achieve the performance of the best dynamic estimation sequence of the switching oracle with optimal regret redundancies. The best dynamic estimation sequence that we compete against is selected in hindsight with full observation of the loss functions and is allowed to select different optimal estimations in different time intervals (segments). We propose two mixtures in our work. First, we propose a tractable polynomial-time algorithm that achieves the optimal redundancy of the intractable brute-force approach. Second, we propose an efficient logarithmic-time algorithm that achieves the optimal redundancy up to a constant multiplicative gap. Our results are guaranteed to hold in a strong deterministic sense, in an individual-sequence manner.
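For intuition, the sketch below shows a generic Fixed-Share style mixture under the 1-mixable logarithmic loss, which competes against switching sequences of experts of the kind the abstract's "switching oracle" selects. This is only an illustrative baseline, not the algorithms proposed in the paper; the function name `fixed_share_log_loss`, the switching rate `alpha`, and the `expert_probs` interface are hypothetical choices for this example.

```python
import numpy as np

def fixed_share_log_loss(expert_probs, alpha=0.01):
    """Illustrative Fixed-Share mixture under logarithmic loss.

    expert_probs: (T, K) array; expert_probs[t, k] is the probability
                  expert k assigned to the outcome that actually occurred
                  at round t (log loss is 1-mixable, so weighted averaging
                  of probabilities is a valid aggregation step).
    alpha:        switching rate of the Fixed-Share prior, which lets the
                  mixture track a competitor that changes experts across
                  time segments.
    Returns the cumulative log loss of the mixture.
    """
    T, K = expert_probs.shape
    w = np.full(K, 1.0 / K)            # uniform prior over experts
    total_loss = 0.0
    for t in range(T):
        # Mixture prediction and its log loss at round t.
        p_mix = float(np.dot(w, expert_probs[t]))
        total_loss += -np.log(p_mix)
        # Multiplicative (Bayesian) weight update, then Fixed-Share
        # mixing so that recently poor experts can recover after a switch.
        w = w * expert_probs[t]
        w /= w.sum()
        w = (1.0 - alpha) * w + alpha / K
    return total_loss
```

Such a baseline runs in O(K) time per round; the paper's contribution concerns mixtures whose redundancy against the segment-wise optimal competitor is optimal (polynomial-time scheme) or optimal up to a constant factor (logarithmic-time scheme).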