
Bayesian Agency: Linear versus Tractable Contracts

Posted by: Matteo Castiglioni
Publication date: 2021
Research field: Informatics Engineering
Paper language: English





We study principal-agent problems in which a principal commits to an outcome-dependent payment scheme (a.k.a. contract) so as to induce an agent to take a costly, unobservable action. We relax the assumption that the principal perfectly knows the agent by considering a Bayesian setting where the agent's type is unknown and randomly selected according to a given probability distribution, which is known to the principal. Each agent's type is characterized by her own action costs and action-outcome distributions. In the literature on non-Bayesian principal-agent problems, considerable attention has been devoted to linear contracts, which are simple, pure-commission payment schemes that still provide nice approximation guarantees with respect to principal-optimal (possibly non-linear) contracts. While in non-Bayesian settings an optimal contract can be computed efficiently, this is no longer the case for our Bayesian principal-agent problems. This further motivates our focus on linear contracts, which can be optimized efficiently given their single-parameter nature. Our goal is to analyze the properties of linear contracts in Bayesian settings, in terms of approximation guarantees with respect to optimal contracts and general tractable contracts (i.e., efficiently-computable ones). First, we study the approximation guarantees of linear contracts with respect to optimal ones, showing that the former suffer from a multiplicative loss linear in the number of agent types. Nevertheless, we prove that linear contracts can still provide a constant multiplicative approximation $\rho$ of the optimal principal's expected utility, though at the expense of an exponentially-small additive loss $2^{-\Omega(\rho)}$. Then, we switch to tractable contracts, showing that, surprisingly, linear contracts perform well among them.
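To make the single-parameter structure concrete, here is a minimal sketch (not the paper's algorithm) of optimizing a linear contract in a Bayesian instance by grid search: each agent type best-responds to the commission alpha, and the principal keeps the (1 - alpha) share of the realized reward. The random instance, variable names, and tie-breaking rule are illustrative assumptions.

```python
import numpy as np

# Illustrative Bayesian principal-agent instance (all data is made up).
rng = np.random.default_rng(0)
n_types, n_actions, n_outcomes = 3, 4, 5

rewards = rng.uniform(0.0, 1.0, size=n_outcomes)            # principal's reward per outcome
type_probs = rng.dirichlet(np.ones(n_types))                 # prior over agent types
costs = rng.uniform(0.0, 0.3, size=(n_types, n_actions))     # action costs, per type
costs[:, 0] = 0.0                                            # a zero-cost opt-out action
out_dist = rng.dirichlet(np.ones(n_outcomes),                # action-outcome distributions,
                         size=(n_types, n_actions))          # per type

def principal_utility(alpha):
    """Expected principal utility of the linear contract paying alpha * reward."""
    total = 0.0
    for t in range(n_types):
        exp_reward = out_dist[t] @ rewards                   # expected reward of each action
        agent_utils = alpha * exp_reward - costs[t]
        a = int(np.argmax(agent_utils))                      # type t's best response
        if agent_utils[a] >= 0:                              # individual rationality
            total += type_probs[t] * (1.0 - alpha) * exp_reward[a]
    return total

# Single-parameter structure: a fine grid search over alpha in [0, 1] suffices here.
grid = np.linspace(0.0, 1.0, 1001)
best_alpha = max(grid, key=principal_utility)
print(f"best commission alpha = {best_alpha:.3f}, "
      f"principal utility = {principal_utility(best_alpha):.4f}")
```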




Read also

We consider the classic principal-agent model of contract theory, in which a principal designs an outcome-dependent compensation scheme to incentivize an agent to take a costly and unobservable action. When all of the model parameters, including the full distribution over principal rewards resulting from each agent action, are known to the designer, an optimal contract can in principle be computed by linear programming. In addition to their demanding informational requirements, such optimal contracts are often complex and unintuitive, and do not resemble contracts used in practice. This paper examines contract theory through the theoretical computer science lens, with the goal of developing novel theory to explain and justify the prevalence of relatively simple contracts, such as linear (pure-commission) contracts. First, we consider the case where the principal knows only the first moment of each action's reward distribution, and we prove that linear contracts are guaranteed to be worst-case optimal, ranging over all reward distributions consistent with the given moments. Second, we study linear contracts from a worst-case approximation perspective, and prove several tight parameterized approximation bounds.
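For the fully-known-parameter case mentioned above, the textbook linear program minimizes the expected payment needed to induce each candidate action, subject to incentive-compatibility and limited-liability constraints, and then selects the action that is best for the principal. The following sketch solves that LP with scipy.optimize.linprog on a toy instance; the numbers and names are illustrative assumptions, not taken from the paper.

```python
import numpy as np
from scipy.optimize import linprog

# Toy known-parameter instance (illustrative data).
rewards = np.array([0.0, 0.5, 1.0])                  # principal reward per outcome
costs = np.array([0.0, 0.1, 0.25])                   # cost of each agent action
dist = np.array([[0.8, 0.1, 0.1],                    # outcome distribution of each action
                 [0.4, 0.3, 0.3],
                 [0.1, 0.3, 0.6]])
n_actions, n_outcomes = dist.shape

best = (-np.inf, None, None)
for a_star in range(n_actions):
    # Minimize expected payment for inducing a_star, subject to incentive
    # compatibility against every other action and limited liability (t >= 0).
    c = dist[a_star]                                  # objective: E[payment | a_star]
    A_ub = dist - dist[a_star]                        # E[t|a] - E[t|a*] <= c_a - c_a*
    b_ub = costs - costs[a_star]
    mask = np.arange(n_actions) != a_star
    res = linprog(c, A_ub=A_ub[mask], b_ub=b_ub[mask], bounds=(0, None))
    if res.success:
        utility = dist[a_star] @ rewards - res.fun    # expected reward minus payment
        if utility > best[0]:
            best = (utility, a_star, res.x)

utility, a_star, payments = best
print(f"induce action {a_star}, payments per outcome = {np.round(payments, 3)}, "
      f"principal utility = {utility:.3f}")
```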
Reinforcement learning (RL) has gained increasing interest since the demonstration that it was able to reach human performance on video game benchmarks using deep Q-learning (DQN). The current consensus for training neural networks on such complex environments is to rely on gradient-based optimization. Although alternative Bayesian deep learning methods exist, most of them still rely on gradient-based optimization, and they typically do not scale on benchmarks such as the Atari game environment. Moreover, none of these approaches allows performing analytical inference for the weights and biases defining the neural network. In this paper, we present how we can adapt the temporal-difference Q-learning framework to make it compatible with tractable approximate Gaussian inference (TAGI), which allows learning the parameters of a neural network using a closed-form analytical method. Throughout the experiments with on- and off-policy reinforcement learning approaches, we demonstrate that TAGI can reach a performance comparable to backpropagation-trained networks while using fewer hyperparameters, and without relying on gradient-based optimization.
Joris Mulder (2021)
A Bayes factor is proposed for testing whether the effect of a key predictor variable on the dependent variable is linear or nonlinear, possibly while controlling for certain covariates. The test can be used (i) when one is interested in quantifying the relative evidence in the data for a linear versus a nonlinear relationship and (ii) to quantify the evidence in the data in favor of a linear relationship (useful when building linear models based on transformed variables). Under the nonlinear model, a Gaussian process prior is employed using a parameterization similar to Zellner's $g$ prior, resulting in a scale-invariant test. Moreover, a Bayes factor is proposed for one-sided testing of whether the nonlinear effect is consistently positive, consistently negative, or neither. Applications are provided from various fields, including social network research and education.
In sponsored search, advertisement (abbreviated ad) slots are usually sold by a search engine to an advertiser through an auction mechanism in which advertisers bid on keywords. In theory, auction mechanisms have many desirable economic properties. However, keyword auctions have a number of limitations, including: the uncertainty in payment prices for advertisers; the volatility in the search engine's revenue; and the weak loyalty between advertiser and search engine. In this paper we propose a special ad option that alleviates these problems. In our proposal, an advertiser can purchase an option from a search engine in advance by paying an upfront fee, known as the option price. He then has the right, but no obligation, to purchase clicks from a pre-specified set of keywords at fixed costs-per-click (CPCs), for a specified number of clicks in a specified period of time. The proposed option is closely related to a special exotic option in finance that contains multiple underlying assets (multi-keyword) and is also multi-exercisable (multi-click). This novel structure has many benefits: advertisers can face reduced uncertainty in advertising; the search engine can improve advertiser loyalty as well as obtain a stable and increased expected revenue over time. Since the proposed ad option can be implemented in conjunction with the existing keyword auctions, the option price and corresponding fixed CPCs must be set such that there is no arbitrage between the two markets. Option pricing methods are discussed, and our experimental results validate the development. Compared to keyword auctions, a search engine can obtain an increased expected revenue by selling an ad option.
In this paper, we propose an analytical method for performing tractable approximate Gaussian inference (TAGI) in Bayesian neural networks. The method enables the analytical Gaussian inference of the posterior mean vector and diagonal covariance matrix for weights and biases. The proposed method has a computational complexity of $\mathcal{O}(n)$ with respect to the number of parameters $n$, and the tests performed on regression and classification benchmarks confirm that, for the same network architecture, it matches the performance of existing methods relying on gradient backpropagation.