We study principal-agent problems in which a principal commits to an outcome-dependent payment scheme (a.k.a. contract) so as to induce an agent to take a costly, unobservable action. We relax the assumption that the principal perfectly knows the agent by considering a Bayesian setting where the agent's type is unknown and randomly selected according to a given probability distribution, which is known to the principal. Each agent's type is characterized by her own action costs and action-outcome distributions. In the literature on non-Bayesian principal-agent problems, considerable attention has been devoted to linear contracts, which are simple, pure-commission payment schemes that still provide good approximation guarantees with respect to principal-optimal (possibly non-linear) contracts. While in non-Bayesian settings an optimal contract can be computed efficiently, this is no longer the case for our Bayesian principal-agent problems. This further motivates our focus on linear contracts, which can be optimized efficiently given their single-parameter nature. Our goal is to analyze the properties of linear contracts in Bayesian settings, in terms of approximation guarantees with respect to optimal contracts and general tractable contracts (i.e., efficiently-computable ones). First, we study the approximation guarantees of linear contracts with respect to optimal ones, showing that the former suffer a multiplicative loss linear in the number of agent types. Nevertheless, we prove that linear contracts can still provide a constant multiplicative approximation $\rho$ of the principal's optimal expected utility, though at the expense of an exponentially small additive loss $2^{-\Omega(\rho)}$. Then, we switch to tractable contracts, showing that, surprisingly, linear contracts perform well among them.
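To make the single-parameter structure concrete, the following is a minimal sketch (not the paper's algorithm) of how a principal-optimal linear contract can be computed in a Bayesian instance with finitely many types: the agent's best response to a commission parameter alpha changes only at finitely many breakpoints where two actions tie, so it suffices to evaluate the principal's expected utility at those candidates. The data format, function names, and toy numbers below are illustrative assumptions.

from itertools import combinations

def best_response_reward(actions, alpha):
    # Agent of a given type picks the action maximizing alpha * reward - cost;
    # ties are broken in favor of the principal (higher expected reward).
    best_u = max(alpha * r - c for c, r in actions)
    return max(r for c, r in actions if abs(alpha * r - c - best_u) < 1e-12)

def principal_utility(types, alpha):
    # Expected principal utility: (1 - alpha) * reward, averaged over types.
    return sum(p * (1 - alpha) * best_response_reward(actions, alpha)
               for p, actions in types)

def optimal_linear_contract(types):
    # The best response changes only where two actions of some type tie:
    # alpha = (c1 - c2) / (r1 - r2). The principal's utility is piecewise
    # linear in alpha between breakpoints, so checking them plus {0, 1} suffices.
    candidates = {0.0, 1.0}
    for _, actions in types:
        for (c1, r1), (c2, r2) in combinations(actions, 2):
            if r1 != r2:
                a = (c1 - c2) / (r1 - r2)
                if 0.0 <= a <= 1.0:
                    candidates.add(a)
    return max(candidates, key=lambda a: principal_utility(types, a))

# Toy instance (illustrative only): two equally likely types, each with a
# low-effort and a high-effort action, given as (cost, expected reward).
types = [
    (0.5, [(0.0, 1.0), (0.4, 2.0)]),
    (0.5, [(0.0, 0.5), (0.9, 3.0)]),
]
alpha = optimal_linear_contract(types)
# Prints the optimal commission and the principal's expected utility
# (about 0.4 and 1.5 on this instance).
print(alpha, principal_utility(types, alpha))

Since there are at most quadratically many breakpoints per type, this enumeration runs in polynomial time, which is the tractability property the abstract attributes to linear contracts.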
We consider the classic principal-agent model of contract theory, in which a principal designs an outcome-dependent compensation scheme to incentivize an agent to take a costly and unobservable action. When all of the model parameters---including the
Reinforcement learning (RL) has attracted increasing interest since it was demonstrated to reach human-level performance on video-game benchmarks using deep Q-learning (DQN). The current consensus for training neural networks on such complex environments
A Bayes factor is proposed for testing whether the effect of a key predictor variable on the dependent variable is linear or nonlinear, possibly while controlling for certain covariates. The test can be used (i) when one is interested in quantifying
In sponsored search, advertisement (abbreviated ad) slots are usually sold by a search engine to advertisers through an auction mechanism in which they bid on keywords. In theory, auction mechanisms have many desirable economic properties. H
In this paper, we propose an analytical method for performing tractable approximate Gaussian inference (TAGI) in Bayesian neural networks. The method enables the analytical Gaussian inference of the posterior mean vector and diagonal covariance matrix