
The best of both worlds: stochastic and adversarial episodic MDPs with unknown transition

Posted by: Tiancheng Jin
Publication date: 2021
Research field: Informatics engineering
Paper language: English




We consider the best-of-both-worlds problem for learning an episodic Markov Decision Process through $T$ episodes, with the goal of achieving $\widetilde{\mathcal{O}}(\sqrt{T})$ regret when the losses are adversarial and simultaneously $\mathcal{O}(\text{polylog}(T))$ regret when the losses are (almost) stochastic. Recent work by [Jin and Luo, 2020] achieves this goal when the fixed transition is known, and leaves the case of unknown transition as a major open question. In this work, we resolve this open problem by using the same Follow-the-Regularized-Leader (FTRL) framework together with a set of new techniques. Specifically, we first propose a loss-shifting trick in the FTRL analysis, which greatly simplifies the approach of [Jin and Luo, 2020] and already improves their results for the known transition case. Then, we extend this idea to the unknown transition case and develop a novel analysis which upper bounds the transition estimation error by (a fraction of) the regret itself in the stochastic setting, a key property to ensure $\mathcal{O}(\text{polylog}(T))$ regret.
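The phrase "(a fraction of) the regret itself" points to a self-bounding argument, and a short worked illustration may help. The inequality below is our own simplified stand-in (the single constant $c$ absorbs gap-dependent and horizon-dependent factors; the paper's actual bound is more refined): suppose the analysis yields

\[ \mathrm{Reg}(T) \;\le\; \sqrt{c\,\log(T)\,\mathrm{Reg}(T)} \;+\; c\,\log(T). \]

Setting $y = \sqrt{\mathrm{Reg}(T)}$ turns this into the quadratic $y^2 - \sqrt{c\log(T)}\,y - c\log(T) \le 0$, whose positive root gives $y \le \frac{1+\sqrt{5}}{2}\sqrt{c\log(T)}$, hence $\mathrm{Reg}(T) \le 3\,c\log(T)$. This is why bounding the transition estimation error by a fraction of the regret itself is enough to retain the $\mathcal{O}(\text{polylog}(T))$ stochastic guarantee.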




Read also

Tiancheng Jin, Haipeng Luo (2020)
This work studies the problem of learning episodic Markov Decision Processes with known transition and bandit feedback. We develop the first algorithm with a "best-of-both-worlds" guarantee: it achieves $\mathcal{O}(\log T)$ regret when the losses are stochastic, and simultaneously enjoys worst-case robustness with $\widetilde{\mathcal{O}}(\sqrt{T})$ regret even when the losses are adversarial, where $T$ is the number of episodes. More generally, it achieves $\widetilde{\mathcal{O}}(\sqrt{C})$ regret in an intermediate setting where the losses are corrupted by a total amount of $C$. Our algorithm is based on the Follow-the-Regularized-Leader method from Zimin and Neu (2013), with a novel hybrid regularizer inspired by recent works of Zimmert et al. (2019a, 2019b) for the special case of multi-armed bandits. Crucially, our regularizer admits a non-diagonal Hessian with a highly complicated inverse. Analyzing such a regularizer and deriving a particular self-bounding regret guarantee is our key technical contribution and might be of independent interest.
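For intuition about the FTRL machinery behind such guarantees, here is a minimal Python sketch of the multi-armed-bandit special case credited above to Zimmert et al.: FTRL with a $\frac{1}{2}$-Tsallis-entropy regularizer and importance-weighted loss estimates. This is a sketch under our own assumptions (the learning-rate schedule and the toy loss sequence are ours), not the paper's algorithm, whose hybrid regularizer acts on occupancy measures and has a non-diagonal Hessian:

import numpy as np

rng = np.random.default_rng(0)

def ftrl_weights(L, eta):
    # Solve argmin_p <L, p> - (1/eta) * sum(sqrt(p)) over the simplex.
    # Stationarity gives p_i = (2 * eta * (L_i + lam))**(-2); binary-search
    # the multiplier lam so that sum(p) == 1 (the sum is decreasing in lam).
    lo = -L.min() + 1e-12                        # sum(p) -> inf as lam -> -min(L)
    hi = -L.min() + np.sqrt(len(L)) / (2 * eta)  # here sum(p) <= 1 already
    for _ in range(100):
        lam = (lo + hi) / 2
        if np.sum((2 * eta * (L + lam)) ** -2.0) > 1.0:
            lo = lam
        else:
            hi = lam
    return (2 * eta * (L + lam)) ** -2.0

K, T = 5, 10_000
losses = rng.uniform(size=(T, K))   # toy stochastic losses; adversarial works too
L_hat = np.zeros(K)                 # cumulative importance-weighted loss estimates
for t in range(1, T + 1):
    p = ftrl_weights(L_hat, eta=1.0 / np.sqrt(t))
    p /= p.sum()                    # guard against residual numerical drift
    arm = rng.choice(K, p=p)        # bandit feedback: only the chosen arm's loss
    L_hat[arm] += losses[t - 1, arm] / p[arm]

The importance-weighted estimator keeps the cumulative estimates unbiased under bandit feedback, which is what allows a single update rule to be analyzed in both the stochastic and the adversarial regime.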
We consider the problem of learning in episodic finite-horizon Markov decision processes with an unknown transition function, bandit feedback, and adversarial losses. We propose an efficient algorithm that achieves $\widetilde{\mathcal{O}}(L|X|\sqrt{|A|T})$ regret with high probability, where $L$ is the horizon, $|X|$ is the number of states, $|A|$ is the number of actions, and $T$ is the number of episodes. To the best of our knowledge, our algorithm is the first to ensure $\widetilde{\mathcal{O}}(\sqrt{T})$ regret in this challenging setting; in fact it achieves the same regret bound as (Rosenberg & Mansour, 2019a), which considers an easier setting with full-information feedback. Our key technical contributions are two-fold: a tighter confidence set for the transition function, and an optimistic loss estimator that is inversely weighted by an upper occupancy bound.
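The second contribution is easy to state in code. Below is a hedged sketch of an inversely-weighted optimistic estimator in the spirit described above (the variable names and the implicit-exploration parameter gamma are our illustration; the confidence-set construction that produces the upper occupancy bound is the substantive part and is not reproduced here):

import numpy as np

def uob_loss_estimate(loss, visited, upper_occ, gamma):
    # lhat(x, a) = l(x, a) * 1{(x, a) visited} / (u(x, a) + gamma), where
    # u(x, a) upper-bounds the true occupancy q(x, a) over every transition
    # function in the confidence set. Dividing by an upper bound (instead of
    # the unknown q) biases the estimate downward, i.e. optimistically, and
    # gamma adds implicit exploration that controls the estimator's variance.
    return loss * visited / (upper_occ + gamma)

# Toy usage on a 3-state x 2-action slice of one episode:
loss      = np.array([[0.2, 0.9], [0.5, 0.1], [0.7, 0.3]])
visited   = np.array([[1, 0], [0, 0], [0, 1]])     # pairs traversed this episode
upper_occ = np.array([[0.5, 0.3], [0.2, 0.2], [0.4, 0.4]])
print(uob_loss_estimate(loss, visited, upper_occ, gamma=0.01))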
Interpretability techniques aim to provide the rationale behind a model's decision, typically by explaining either an individual prediction (local explanation, e.g. "why is this patient diagnosed with this condition?") or a class of predictions (global explanation, e.g. "why are patients diagnosed with this condition in general?"). While there are many methods focused on either one, few frameworks can provide both local and global explanations in a consistent manner. In this work, we combine two powerful existing techniques, one local (Integrated Gradients, IG) and one global (Testing with Concept Activation Vectors, TCAV), to provide local and global concept-based explanations. We first validate our idea using two synthetic datasets with a known ground truth, and further demonstrate with a benchmark natural image dataset. We test our method with various concepts, target classes, model architectures, and IG baselines. We show that our method improves global explanations over TCAV when compared to ground truth, and provides useful insights. We hope our work provides a step towards building bridges between many existing local and global methods to get the best of both worlds.
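To make the combination concrete, here is a minimal numpy sketch of one plausible way to join the two ingredients: compute Integrated Gradients over the activations of a layer of interest and project the attribution onto a concept activation vector. The toy linear model, the function names, and the projection step are our assumptions, not necessarily the paper's exact construction:

import numpy as np

def integrated_gradients(grad_fn, x, baseline, steps=64):
    # IG = (x - baseline) * average gradient along the straight-line path
    # from baseline to x (a Riemann-sum approximation of the path integral).
    alphas = np.linspace(0.0, 1.0, steps)
    avg_grad = np.mean([grad_fn(baseline + a * (x - baseline)) for a in alphas], axis=0)
    return (x - baseline) * avg_grad

def concept_score(attributions, cav):
    # Project the attributions onto a unit-norm concept activation vector:
    # a positive score says the concept pushed this prediction up.
    cav = cav / np.linalg.norm(cav)
    return float(attributions @ cav)

# Toy linear head f(h) = w . h on layer activations h; its gradient is w.
w = np.array([1.0, -2.0, 0.5])
grad_fn = lambda h: w
h, h0 = np.array([0.3, 0.1, 0.8]), np.zeros(3)
cav = np.array([1.0, 1.0, 0.0])   # direction separating examples of a concept
print(concept_score(integrated_gradients(grad_fn, h, h0), cav))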
We consider the problem of fair allocation of indivisible items among $n$ agents with additive valuations, when agents have equal entitlements to the goods, and there are no transfers. Best-of-Both-Worlds (BoBW) fairness mechanisms aim to give all agents both an ex-ante guarantee (such as getting the proportional share in expectation) and an ex-post guarantee. Prior BoBW results have focused on ex-post guarantees based on the "up to one item" paradigm, such as envy-free up to one item (EF1). In this work we attempt to give every agent a high value ex-post, and specifically, a constant fraction of his maximin share (MMS). The "up to one item" paradigm fails to give such a guarantee, and it is not difficult to present examples in which previous BoBW mechanisms give agents only a $\frac{1}{n}$ fraction of their MMS. Our main result is a deterministic polynomial-time algorithm that computes a distribution over allocations that is ex-ante proportional and, ex-post, every allocation gives every agent at least his proportional share up to one item and, more importantly, at least half of his MMS. Moreover, this last ex-post guarantee holds even with respect to a more demanding notion of a share, introduced in this paper, that we refer to as the truncated proportional share (TPS). Our guarantees are nearly best possible, in the sense that one cannot guarantee agents more than their proportional share ex-ante, and one cannot guarantee agents more than an $\frac{n}{2n-1}$ fraction of their TPS ex-post.
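Since the guarantees above revolve around the maximin share, a tiny brute-force computation may help fix the definition. This is exponential-time and purely illustrative (the paper's algorithm is polynomial-time, and the instance below is ours):

from itertools import product

def maximin_share(values, n):
    # Agent i's MMS: partition all goods into n bundles, valued by i's own
    # additive valuation, so as to maximize the value of the worst bundle.
    best = 0
    for assignment in product(range(n), repeat=len(values)):
        bundles = [0] * n
        for good, bundle in enumerate(assignment):
            bundles[bundle] += values[good]
        best = max(best, min(bundles))
    return best

# n = 2 and goods worth 3, 3, 2, 2: the best split is {3, 2} / {3, 2},
# so the MMS equals 5, which here coincides with the proportional share 10/2.
print(maximin_share([3, 3, 2, 2], n=2))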
The past year has witnessed rapid advances in sequence-to-sequence (seq2seq) modeling for Machine Translation (MT). The classic RNN-based approaches to MT were first out-performed by the convolutional seq2seq model, which was then out-performed by the more recent Transformer model. Each of these new approaches consists of a fundamental architecture accompanied by a set of modeling and training techniques that are in principle applicable to other seq2seq architectures. In this paper, we tease apart the new architectures and their accompanying techniques in two ways. First, we identify several key modeling and training techniques, and apply them to the RNN architecture, yielding a new RNMT+ model that outperforms all three fundamental architectures on the benchmark WMT14 English-to-French and English-to-German tasks. Second, we analyze the properties of each fundamental seq2seq architecture and devise new hybrid architectures intended to combine their strengths. Our hybrid models obtain further improvements, outperforming the RNMT+ model on both benchmark datasets.
