
The Parity Ray Regularizer for Pacing in Auction Markets

Published by Andrea Celli
Publication date: 2021
Research field: Informatics Engineering
Paper language: English





Budget-management systems are one of the key components of modern auction markets. Internet advertising platforms typically offer advertisers the possibility to pace the rate at which their budget is depleted, through budget-pacing mechanisms. We focus on multiplicative pacing mechanisms in an online setting in which a bidder is repeatedly confronted with a series of advertising opportunities. After bids are collected, each item is allocated through a single-item, second-price auction. If there were no budgetary constraints, bidding truthfully would be an optimal choice for the advertiser. However, since their budget is limited, the advertiser may want to shade their bid downwards in order to preserve their budget for future opportunities, and to spread expenditures evenly over time. The literature on online pacing problems mostly focuses on the setting in which the bidder optimizes an additive separable objective, such as the total click-through rate or the revenue of the allocation. In many settings, however, bidders may also care about other objectives which are often non-separable, and therefore not amenable to traditional online learning techniques. Building on recent work, we study the frequent case in which advertisers seek to reach a certain distribution of impressions over a target population of users. We introduce a novel regularizer to achieve this desideratum, and show how to integrate it into an online mirror descent scheme attaining the optimal order of sub-linear regret compared to the optimal allocation in hindsight when inputs are drawn independently from an unknown distribution. Moreover, we show that our approach can easily be incorporated into standard existing pacing systems that are not usually built for this objective. The effectiveness of our algorithm in internet advertising applications is confirmed by numerical experiments on real-world data.
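To make the pacing setup concrete, the following minimal sketch implements a generic multiplicative pacing loop of the kind the abstract builds on: a single dual multiplier shades truthful bids in a sequence of second-price auctions and is updated by projected subgradient steps toward a per-round spend target. It does not implement the paper's parity-ray regularizer or its mirror-descent scheme; the uniform value and competition models, the budget, and the step size are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

T = 10_000          # number of auction rounds (assumed)
B = 1_000.0         # total budget (assumed)
rho = B / T         # target per-round spend
eta = 0.01          # step size for the multiplier update (assumed)
mu = 0.0            # pacing multiplier (dual variable)

spend = 0.0
value = 0.0
for t in range(T):
    v = rng.uniform(0.0, 1.0)        # bidder's value for the impression
    d = rng.uniform(0.0, 1.0)        # highest competing bid

    bid = v / (1.0 + mu)             # multiplicatively shaded bid
    if bid > d and spend + d <= B:   # win the second-price auction
        spend += d                   # pay the second price
        value += v
        z = d                        # realized expenditure this round
    else:
        z = 0.0

    # Projected subgradient step: spending above the target rate pushes
    # mu up (more shading); spending below pushes it back toward zero.
    mu = max(0.0, mu + eta * (z - rho))

print(f"value collected {value:.1f}, spend {spend:.1f} / {B:.1f}")
```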




Read also

In e-commerce advertising, it is crucial to jointly consider various performance metrics, e.g., user experience, advertiser utility, and platform revenue. Traditional auction mechanisms, such as GSP and VCG auctions, can be suboptimal due to their fixed allocation rules that optimize a single performance metric (e.g., revenue or social welfare). Recently, data-driven auctions, learned directly from auction outcomes to optimize multiple performance metrics, have attracted increasing research interest. However, the procedure of auction mechanisms involves various discrete calculation operations, making it challenging to be compatible with continuous optimization pipelines in machine learning. In this paper, we design Deep Neural Auctions (DNAs) to enable end-to-end auction learning by proposing a differentiable model to relax the discrete sorting operation, a key component in auctions. We optimize the performance metrics by developing deep models to efficiently extract contexts from auctions, providing rich features for auction design. We further integrate game-theoretic conditions within the model design to guarantee the stability of the auctions. DNAs have been successfully deployed in the e-commerce advertising system at Taobao. Experimental evaluation on both a large-scale data set and an online A/B test demonstrated that DNAs significantly outperform other mechanisms widely adopted in industry.
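The abstract does not specify which differentiable relaxation of sorting DNAs actually use; as one illustrative possibility, the sketch below applies the NeuralSort-style softmax relaxation (Grover et al., 2019), which replaces the hard permutation matrix with a row-stochastic approximation that becomes sharper as the temperature tau approaches zero. The function name and the toy bid vector are assumptions for illustration only.

```python
import numpy as np

def soft_sort(scores, tau=1.0):
    """Row-stochastic relaxation of the permutation matrix that sorts
    `scores` in decreasing order (NeuralSort-style); as tau -> 0 each
    row concentrates on the true rank position."""
    s = np.asarray(scores, dtype=float)
    n = s.size
    abs_diff_sums = np.abs(s[:, None] - s[None, :]).sum(axis=1)  # (A_s 1)_j
    i = np.arange(1, n + 1)[:, None]                             # row index
    logits = ((n + 1 - 2 * i) * s[None, :] - abs_diff_sums[None, :]) / tau
    logits -= logits.max(axis=1, keepdims=True)                  # numerical stability
    P = np.exp(logits)
    return P / P.sum(axis=1, keepdims=True)

bids = np.array([0.2, 0.9, 0.5])          # toy bid vector (assumed)
print(soft_sort(bids, tau=0.1) @ bids)    # approximately [0.9, 0.5, 0.2]
```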
A classical trading experiment consists of a set of unit-demand buyers and unit-supply sellers with identical items. Each agent's value or opportunity cost for the item is their private information, and preferences are quasi-linear. Trade between agents employs a double oral auction (DOA) in which both buyers and sellers call out bids or offers which an auctioneer recognizes. Transactions resulting from accepted bids and offers are recorded. This continues until there are no more acceptable bids or offers. Remarkably, the experiment consistently terminates in a Walrasian price. The main result of this paper is a mechanism in the spirit of the DOA that converges to a Walrasian equilibrium in a polynomial number of steps, thus providing a theoretical basis for the above-described empirical phenomenon. It is well known that computing a Walrasian equilibrium for this market corresponds to solving a maximum-weight bipartite matching problem. The uncoordinated but rational responses of agents thus solve, in a distributed fashion, a maximum-weight bipartite matching problem that is encoded by their private valuations. We show, furthermore, that every Walrasian equilibrium is reachable by some sequence of responses. This is in contrast to the well-known auction algorithms for this problem, which only allow one side to make offers and thus essentially choose an equilibrium that maximizes the surplus of the side making offers. Our results extend to the setting where not every agent pair is allowed to trade with each other.
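The decentralized bid-and-offer dynamics are the paper's actual contribution and are not reproduced here; the short sketch below only computes the centralized benchmark the abstract refers to, i.e., the maximum-weight bipartite matching between unit-demand buyers and unit-supply sellers, using an off-the-shelf assignment solver on made-up valuations and costs.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

rng = np.random.default_rng(1)
buyer_values = rng.uniform(0.0, 1.0, size=4)   # private values (assumed)
seller_costs = rng.uniform(0.0, 1.0, size=4)   # opportunity costs (assumed)

# Surplus of matching buyer i with seller j; pairs with negative surplus
# are clipped to zero (equivalent to not trading).
surplus = np.clip(buyer_values[:, None] - seller_costs[None, :], 0.0, None)

rows, cols = linear_sum_assignment(surplus, maximize=True)
print("efficient trades:", list(zip(rows.tolist(), cols.tolist())))
print("maximum total surplus:", surplus[rows, cols].sum())
```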
The dynamics of financial markets are driven by the interactions between participants, as well as the trading mechanisms and regulatory frameworks that govern these interactions. Decision-makers would rather not ignore the impact of other participants on these dynamics and should employ tools and models that take this into account. To this end, we demonstrate the efficacy of applying opponent modeling in a number of simulated market settings. While our simulations are simplified representations of actual market dynamics, they provide an idealized playground in which our techniques can be demonstrated and tested. We present this work with the aim that our techniques could be refined and, with some effort, scaled up to the full complexity of real-world market scenarios. We hope that the results presented encourage practitioners to adopt opponent-modeling methods and apply them in online systems, in order to enable not only reactive but also proactive decisions to be made.
Econometric inference allows an analyst to back out the values of agents in a mechanism from the rules of the mechanism and the bids of the agents. This paper gives an algorithm to solve the problem of inferring the values of agents in a dominant-strategy mechanism from the social choice function implemented by the mechanism and the per-unit prices paid by the agents (the agents' bids are not observed). For single-dimensional agents, this inference problem is a multi-dimensional inversion of the payment identity and is feasible only if the payment identity is uniquely invertible. The inversion is unique for single-unit proportional-weights social choice functions (common, for example, in bandwidth allocation), and its inverse can be found efficiently. This inversion is not unique for social choice functions that exhibit complementarities. Of independent interest, we extend a result of Rosen (1965), that the Nash equilibria of concave games are unique and pure, to an alternative notion of concavity based on Gale and Nikaido (1965).
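For a sense of the payment identity being inverted, the one-dimensional sketch below uses $p(v) = v\,x(v) - \int_0^v x(z)\,dz$ with a hypothetical monotone allocation rule, computes the implied per-unit price $p(v)/x(v)$, and recovers the value by bisection (the per-unit price is nondecreasing whenever the allocation rule is monotone). The allocation rule $x(v) = v^2$ and the search interval are illustrative assumptions; the paper's multi-dimensional inversion is substantially more involved.

```python
from scipy.integrate import quad

def x(v):
    return v ** 2                      # hypothetical monotone allocation rule on [0, 1]

def per_unit_price(v):
    # Payment identity: p(v) = v * x(v) - integral_0^v x(z) dz
    integral, _ = quad(x, 0.0, v)
    return v - integral / x(v)

def infer_value(price, lo=1e-6, hi=1.0, iters=60):
    # Bisection is valid because the per-unit price is nondecreasing in v.
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if per_unit_price(mid) < price:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

true_value = 0.7
observed_price = per_unit_price(true_value)   # equals 2 * true_value / 3 here
print(infer_value(observed_price))            # recovers ~0.7
```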
We study the limits of an information intermediary in Bayesian auctions. Formally, we consider the standard single-item auction, with a revenue-maximizing seller and $n$ buyers with independent private values; in addition, we now have an intermediary who knows the buyers' true values, and can map these to a public signal so as to try to increase buyer surplus. This model was proposed by Bergemann et al., who present a signaling scheme for the single-buyer setting that raises the optimal consumer surplus, by guaranteeing the item is always sold while ensuring the seller gets the same revenue as without signaling. Our work aims to understand how this result ports to the setting with multiple buyers. Our first result is an impossibility: we show that such a signaling scheme need not exist even for $n=2$ buyers with $2$-point valuation distributions. Indeed, no signaling scheme can always allocate the item to the highest-valued buyer while preserving any non-trivial fraction of the original consumer surplus; further, no signaling scheme can achieve consumer surplus better than a factor of $\frac{1}{2}$ compared to the maximum achievable. These results are existential (and not computational) impossibilities, and thus provide a sharp separation between the single- and multi-buyer settings. On the positive side, for discrete valuation distributions, we develop signaling schemes with good approximation guarantees for the consumer surplus compared to the maximum achievable, in settings where either the number of agents or the support size of valuations is small. Formally, for i.i.d. buyers, we present an $O(\min(\log n, K))$-approximation where $K$ is the support size of the valuations. Moreover, for general distributions, we present an $O(\min(n \log n, K^2))$-approximation. Our signaling schemes are conceptually simple and computable in polynomial (in $n$ and $K$) time.