
Mechanisms for a No-Regret Agent: Beyond the Common Prior

Published by Modibo Camara
Publication date: 2020
Paper language: English





A rich class of mechanism design problems can be understood as incomplete-information games between a principal who commits to a policy and an agent who responds, with payoffs determined by an unknown state of the world. Traditionally, these models require strong and often-impractical assumptions about beliefs (a common prior over the state). In this paper, we dispense with the common prior. Instead, we consider a repeated interaction where both the principal and the agent may learn over time from the state history. We reformulate mechanism design as a reinforcement learning problem and develop mechanisms that attain natural benchmarks without any assumptions on the state-generating process. Our results make use of novel behavioral assumptions for the agent -- centered around counterfactual internal regret -- that capture the spirit of rationality without relying on beliefs.
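The behavioral assumption mentioned above builds on internal regret. As background only, the rough sketch below computes the standard internal (swap) regret of a play history; the helper name internal_regret and the random-payoff example are hypothetical and not taken from the paper, whose counterfactual notion is more involved.

```python
import numpy as np

def internal_regret(actions, payoffs):
    """Internal (swap) regret of a play history.

    actions : length-T integer array of the actions actually played.
    payoffs : T x K array; payoffs[t, k] is the (counterfactual) payoff
              action k would have earned in round t.
    Returns the largest hindsight gain from replacing every round in which
    some action a was played by a single alternative action b.
    """
    T, K = payoffs.shape
    realized = payoffs[np.arange(T), actions]
    worst = 0.0
    for a in range(K):
        mask = actions == a
        if not mask.any():
            continue
        # Gain from rerouting every play of a to each alternative b.
        gains = payoffs[mask].sum(axis=0) - realized[mask].sum()
        worst = max(worst, gains.max())
    return worst

# Example: 3 actions, random payoffs, uniformly random play.
rng = np.random.default_rng(0)
T, K = 1000, 3
payoffs = rng.random((T, K))
actions = rng.integers(0, K, size=T)
print(internal_regret(actions, payoffs))
```

External regret would instead compare the realized payoff against the single best fixed action in hindsight; the swap benchmark is stronger because it may reroute each action to a different alternative.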




Read also

In the early $20^{th}$ century, Pigou observed that imposing a marginal-cost tax on the usage of a public good induces a socially efficient level of use as an equilibrium. Unfortunately, such a Pigouvian tax may also induce other, socially inefficient, equilibria. We observe that this social inefficiency may be unbounded, and study whether alternative tax structures may lead to milder losses in the worst case, i.e. to a lower price of anarchy. We show that no tax structure leads to bounded losses in the worst case. However, we do find a tax scheme that has a lower price of anarchy than the Pigouvian tax, obtaining tight lower and upper bounds in terms of a crucial parameter that we identify. We generalize our results to various scenarios, each of which offers an alternative to the use of a public road by private cars, such as ride sharing or using a bus or a train.
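As a toy numerical illustration of a Pigouvian (marginal-cost) tax -- not an example from the paper -- the sketch below computes the socially optimal usage of a congestible good with heterogeneous user values, and the tax equal to the marginal external cost at that optimum; the values, the linear congestion cost, and the helper social_optimum are all made up.

```python
import numpy as np

def social_optimum(values, cong):
    """Socially optimal number of users of a congestible good.

    values : users' values for using the good.
    cong   : per-user congestion cost as a function of the number of users.
    Welfare from serving the top n users is sum(values[:n]) - n * cong(n).
    """
    values = np.sort(values)[::-1]
    welfare = [values[:n].sum() - n * cong(n) for n in range(len(values) + 1)]
    return int(np.argmax(welfare))

# Hypothetical example: five users, linear congestion cost c(n) = n.
values = np.array([10.0, 8.0, 6.0, 4.0, 2.0])
cong = lambda n: float(n)
n_star = social_optimum(values, cong)

# Pigouvian tax = marginal external cost at the optimum: the n-th user
# raises each of the other n - 1 users' costs by c(n) - c(n-1).
tax = (n_star - 1) * (cong(n_star) - cong(n_star - 1))
print("optimal usage:", n_star, "  Pigouvian tax:", tax)
```

In this toy instance the optimum is three users and the tax is 2; facing congestion cost plus tax, only the three highest-value users want to enter, so efficient use is an equilibrium, though, as the abstract stresses, a Pigouvian tax can in general also support inefficient equilibria.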
Yingkai Li, 2021
We consider the model of a data broker selling information to a single agent to maximize his revenue. The agent has a private valuation for the additional information, and upon receiving the signal from the data broker, the agent can conduct her own experiment to refine her posterior belief over the states at an additional cost. In this paper, we show that in the optimal mechanism, the agent has no incentive to acquire any additional costly information in equilibrium. Still, the ability to acquire additional information distorts the incentives of the agent and reduces the optimal revenue of the data broker. In addition, we show that under the separable valuation assumption there is no distortion at the top; posting a deterministic price for fully revealing the states is optimal when the prior distribution is sufficiently informative or the cost of acquiring additional information is sufficiently high, and is approximately optimal when the type distribution satisfies the monotone hazard rate condition.
We study online learning in repeated first-price auctions with censored feedback, where a bidder, observing only the winning bid at the end of each auction, learns to adaptively bid in order to maximize her cumulative payoff. To achieve this goal, the bidder faces a challenging dilemma: if she wins the bid--the only way to achieve positive payoffs--then she is not able to observe the highest bid of the other bidders, which we assume is drawn i.i.d. from an unknown distribution. This dilemma, despite being reminiscent of the exploration-exploitation trade-off in contextual bandits, cannot be addressed directly by the existing UCB or Thompson sampling algorithms in that literature, mainly because, contrary to the standard bandit setting, when a positive reward is obtained here, nothing about the environment can be learned. In this paper, by exploiting the structural properties of first-price auctions, we develop the first learning algorithm that achieves an $O(\sqrt{T}\log^2 T)$ regret bound when the bidder's private values are stochastically generated. We do so by providing an algorithm for a general class of problems, which we call monotone group contextual bandits, for which the same regret bound is established under stochastically generated contexts. Further, by a novel lower bound argument, we characterize an $\Omega(T^{2/3})$ lower bound for the case where the contexts are adversarially generated, thus highlighting the impact of the context generation mechanism on the fundamental learning limit. Despite this, we further exploit the structure of first-price auctions and develop a learning algorithm that operates sample-efficiently (and computationally efficiently) in the presence of adversarially generated private values. We establish an $O(\sqrt{T}\log^3 T)$ regret bound for this algorithm, hence providing a complete characterization of optimal learning guarantees for this problem.
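The censoring described above can be mimicked with a small simulation. The greedy plug-in bidder below is only an illustration of the feedback structure, not the paper's monotone group contextual bandit algorithm; the Beta distribution of competing bids, the bid grid, and the exploration rate are arbitrary assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
T = 5000
value = 1.0                       # the bidder's private value (fixed for illustration)
grid = np.linspace(0.0, 1.0, 51)  # discretized bid levels

wins = np.zeros_like(grid)        # rounds in which we know this bid would have won
known = np.zeros_like(grid)       # rounds in which this bid's outcome is determined
total = 0.0

for t in range(T):
    m = rng.beta(2, 5)            # highest competing bid, from a distribution unknown to the bidder
    if t < 50 or rng.random() < 0.05:
        b = rng.choice(grid)      # a little forced exploration
    else:
        p_hat = np.where(known > 0, wins / np.maximum(known, 1.0), 0.5)
        b = grid[np.argmax((value - grid) * p_hat)]

    if b >= m:
        total += value - b
        # Win: we only learn that m <= b, so only bids >= b have a determined outcome.
        wins[grid >= b] += 1
        known[grid >= b] += 1
    else:
        # Loss: the winning bid m is observed exactly, so every bid's outcome is determined.
        wins[grid >= m] += 1
        known += 1

print("average payoff:", total / T)
```

Note the asymmetry the abstract points to: a lost auction reveals the competing bid exactly, while a won auction reveals only an upper-censored observation, so winning (the profitable outcome) is also the least informative.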
Consider a player that in each round $t$ out of $T$ rounds chooses an action and observes the incurred cost after a delay of $d_{t}$ rounds. The cost functions and the delay sequence are chosen by an adversary. We show that even if the players' algorithms lose their no-regret property due to too-large delays, the expected discounted ergodic distribution of play converges to the set of coarse correlated equilibria (CCE) if the algorithms have no discounted-regret. For a zero-sum game, we show that no discounted-regret is sufficient for the discounted ergodic average of play to converge to the set of Nash equilibria. We prove that the FKM algorithm with $n$ dimensions achieves a regret of $O\left(nT^{\frac{3}{4}}+\sqrt{n}T^{\frac{1}{3}}D^{\frac{1}{3}}\right)$ and the EXP3 algorithm with $K$ arms achieves a regret of $O\left(\sqrt{\ln K\left(KT+D\right)}\right)$ even when $D=\sum_{t=1}^{T}d_{t}$ and $T$ are unknown. These bounds use a novel doubling trick that provably retains the regret bounds that hold when $D$ and $T$ are known. Using these bounds, we show that EXP3 and FKM have no discounted-regret even for $d_{t}=O\left(t\log t\right)$. Therefore, the CCE of a finite or convex unknown game can be approximated even when only delayed bandit feedback is available via simulation.
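As a rough sketch of running an EXP3-style algorithm when bandit feedback arrives with delays -- a simplification, not the paper's algorithm, and with a fixed learning rate rather than the doubling trick used there -- one can buffer the importance-weighted updates until the corresponding feedback arrives:

```python
import numpy as np

def exp3_delayed(losses, delays, eta):
    """EXP3-style play with delayed bandit feedback (illustrative only).

    losses : T x K array of losses in [0, 1] (adversarial in general).
    delays : length-T integer array; round-t feedback arrives at round t + delays[t].
    eta    : a fixed learning rate; the paper instead tunes it via a doubling
             trick that works when D and T are unknown.
    """
    T, K = losses.shape
    rng = np.random.default_rng(0)
    log_w = np.zeros(K)            # log-weights over the K arms
    pending = {}                   # arrival round -> list of (arm, prob, loss)
    total = 0.0
    for t in range(T):
        p = np.exp(log_w - log_w.max())
        p /= p.sum()
        arm = rng.choice(K, p=p)
        total += losses[t, arm]
        pending.setdefault(t + int(delays[t]), []).append((arm, p[arm], losses[t, arm]))
        # Apply whatever feedback has arrived by the end of this round.
        for a, prob, loss in pending.pop(t, []):
            log_w[a] -= eta * loss / prob   # importance-weighted loss estimate
    return total / T

# Example: K = 5 arms, delays d_t growing slowly with t (made-up sequences).
T, K = 2000, 5
rng = np.random.default_rng(2)
losses = rng.random((T, K))
delays = np.minimum(np.arange(T) // 50, 25)
print("average loss:", exp3_delayed(losses, delays, eta=0.05))
```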
The theory of reinforcement learning has focused on two fundamental problems: achieving low regret, and identifying $\epsilon$-optimal policies. While a simple reduction allows one to apply a low-regret algorithm to obtain an $\epsilon$-optimal policy and achieve the worst-case optimal rate, it is unknown whether low-regret algorithms can obtain the instance-optimal rate for policy identification. We show that this is not possible -- there exists a fundamental tradeoff between achieving low regret and identifying an $\epsilon$-optimal policy at the instance-optimal rate. Motivated by our negative finding, we propose a new measure of instance-dependent sample complexity for PAC tabular reinforcement learning which explicitly accounts for the attainable state visitation distributions in the underlying MDP. We then propose and analyze a novel, planning-based algorithm which attains this sample complexity -- yielding a complexity which scales with the suboptimality gaps and the ``reachability'' of a state. We show that our algorithm is nearly minimax optimal, and demonstrate on several examples that our instance-dependent sample complexity offers significant improvements over worst-case bounds.