
Spatio-Temporal Pricing for Ridesharing Platforms

Published by Hongyao Ma
Publication date: 2018
Research field: Informatics Engineering
Paper language: English





Ridesharing platforms match drivers and riders to trips, using dynamic prices to balance supply and demand. A challenge is to set prices that are appropriately smooth in space and time, so that drivers with the flexibility to decide how to work will nevertheless choose to accept their dispatched trips, rather than drive to another area or wait for higher prices or a better trip. In this work, we propose a complete information model that is simple yet rich enough to incorporate spatial imbalance and temporal variations in supply and demand -- conditions that lead to market failures in today's platforms. We introduce the Spatio-Temporal Pricing (STP) mechanism. The mechanism is incentive-aligned, in that it is a subgame-perfect equilibrium for drivers to always accept their trip dispatches. From any history onward, the equilibrium outcome of the STP mechanism is welfare-optimal, envy-free, individually rational, budget balanced, and core-selecting. We also prove the impossibility of achieving the same economic properties in a dominant-strategy equilibrium. Simulation results show that the STP mechanism achieves substantially higher social welfare and earning equity than a myopic mechanism.
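
As a rough illustration of the planning subproblem behind the welfare-optimality claim, the sketch below computes a one-period welfare-maximizing dispatch of drivers to trips as an assignment problem. It is not the STP mechanism itself; the driver positions, trip values, and pickup-cost model are invented for the example.

# Illustrative sketch only: a one-period welfare-maximizing dispatch of drivers
# to requested trips, posed as an assignment problem. This is NOT the STP
# mechanism; driver positions, trip values, and the pickup-cost model are
# invented for the example.
import numpy as np
from scipy.optimize import linear_sum_assignment

driver_pos  = np.array([0.0, 2.0, 5.0])   # assumed driver locations on a line
trip_origin = np.array([1.0, 4.0, 6.0])   # assumed pickup points
trip_value  = np.array([8.0, 6.0, 7.0])   # assumed rider values for the trips

# Surplus of dispatching driver i to trip j: trip value minus pickup cost.
surplus = trip_value[None, :] - np.abs(driver_pos[:, None] - trip_origin[None, :])

rows, cols = linear_sum_assignment(surplus, maximize=True)
for i, j in zip(rows, cols):
    print(f"driver {i} -> trip {j}  (surplus {surplus[i, j]:.1f})")
print("total welfare:", surplus[rows, cols].sum())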


Read also

An ever-important issue is protecting infrastructure and other valuable targets from a range of threats, from vandalism to theft to piracy to terrorism. The defender can rarely afford the resources needed for 100% protection. Thus, the key question is how to provide the best protection using the limited available resources. We study a practically important class of security games that is played out in space and time, with targets and patrols moving on a real line. A central open question here is whether the Nash equilibrium (i.e., the minimax strategy of the defender) can be computed in polynomial time. We resolve this question in the affirmative. Our algorithm runs in time polynomial in the input size, and only polylogarithmic in the number of possible patrol locations (M). Further, we provide a continuous extension in which patrol locations can take arbitrary real values. Prior work obtained polynomial-time algorithms only under a substantial assumption, e.g., a constant number of rounds. Further, all these algorithms have running times polynomial in M, which can be very large.
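
The paper's polylogarithmic-in-M algorithm is not reproduced here; as a minimal illustration of what "computing the defender's minimax strategy" means, the sketch below solves a tiny discretized zero-sum security game with the textbook linear program. The 3x3 payoff matrix (rows are patrol plans, columns are attacked targets, entries are catch probabilities) is made up.

# Illustrative sketch: the textbook minimax LP for the defender in a tiny
# zero-sum security game. The payoff matrix is an assumption; this is not
# the paper's algorithm.
import numpy as np
from scipy.optimize import linprog

A = np.array([[0.9, 0.3, 0.1],
              [0.2, 0.8, 0.3],
              [0.1, 0.4, 0.9]])   # A[i, j] = P(attack on target j is caught under patrol plan i)
n_strategies, n_targets = A.shape

# Variables: x_1..x_n (mixed patrol strategy) and v (guaranteed catch probability).
# Maximize v  s.t.  sum_i x_i * A[i, j] >= v for every target j,  sum_i x_i = 1.
c = np.zeros(n_strategies + 1)
c[-1] = -1.0                                                  # linprog minimizes, so minimize -v
A_ub = np.hstack([-A.T, np.ones((n_targets, 1))])             # v - A^T x <= 0
b_ub = np.zeros(n_targets)
A_eq = np.hstack([np.ones((1, n_strategies)), np.zeros((1, 1))])
b_eq = np.array([1.0])
bounds = [(0, 1)] * n_strategies + [(None, None)]

res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
print("defender mix:", np.round(res.x[:-1], 3), "game value:", round(res.x[-1], 3))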
This paper considers prior-independent mechanism design, in which a single mechanism is designed to achieve approximately optimal performance on every prior distribution from a given class. Most results in this literature focus on mechanisms with truthtelling equilibria, a.k.a., truthful mechanisms. Feng and Hartline (2018) introduce the revelation gap to quantify the loss of the restriction to truthful mechanisms. We solve a main open question left in Feng and Hartline (2018); namely, we identify a non-trivial revelation gap for revenue maximization. Our analysis focuses on the canonical problem of selling a single item to a single agent with only access to a single sample from the agent's valuation distribution. We identify the sample-bid mechanism (a simple non-truthful mechanism) and upper-bound its prior-independent approximation ratio by 1.835 (resp. 1.296) for regular (resp. MHR) distributions. We further prove that no truthful mechanism can achieve a prior-independent approximation ratio better than 1.957 (resp. 1.543) for regular (resp. MHR) distributions. Thus, a non-trivial revelation gap is shown, as the sample-bid mechanism outperforms the optimal prior-independent truthful mechanism. On the hardness side, we prove that no (possibly non-truthful) mechanism can achieve a prior-independent approximation ratio better than 1.073 even for uniform distributions.
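
The sample-bid mechanism itself is not reproduced here. As a hedged illustration of what a prior-independent approximation ratio measures, the Monte-Carlo sketch below evaluates the simpler "post the observed sample as the price" scheme against the optimal monopoly price, assuming an Exp(1) (MHR) value distribution; the distribution and sample counts are assumptions for the example only.

# Illustrative Monte-Carlo sketch of a prior-independent approximation ratio.
# This uses the simpler "post the observed sample as the price" scheme, not
# the paper's sample-bid mechanism; the Exp(1) distribution is an assumption.
import numpy as np

rng = np.random.default_rng(0)
n = 1_000_000
sample = rng.exponential(1.0, n)     # the single sample the seller observes
value = rng.exponential(1.0, n)      # the buyer's (independent) true value

# Revenue of posting the observed sample as a take-it-or-leave-it price.
sample_pricing_revenue = np.mean(np.where(value >= sample, sample, 0.0))

# Benchmark: the optimal monopoly price for Exp(1) is p* = 1, with revenue 1/e.
optimal_revenue = np.exp(-1.0)

print("approximation ratio ~", optimal_revenue / sample_pricing_revenue)  # ~1.47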
Given a batch of human computation tasks, a commonly ignored aspect is how the price (i.e., the reward paid to human workers) of these tasks must be set or varied in order to meet latency or cost constraints. Often, the price is set up-front and not modified, leading to either a much higher monetary cost than needed (if the price is set too high), or to a much larger latency than expected (if the price is set too low). Leveraging a pricing model from prior work, we develop algorithms to optimally set and then vary price over time in order to meet (a) a user-specified deadline while minimizing total monetary cost, or (b) a user-specified monetary budget constraint while minimizing total elapsed time. We leverage techniques from decision theory (specifically, Markov Decision Processes) for both these problems, and demonstrate that our techniques lead to up to 30% reduction in cost over schemes proposed in prior work. Furthermore, we develop techniques to speed up the computation, enabling users to leverage the price setting algorithms on-the-fly.
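
As a minimal illustration of the MDP approach to objective (a), the sketch below runs backward induction over the state (tasks remaining, steps to the deadline), choosing a per-task price at each step. The price menu, acceptance probabilities, and deadline penalty are assumptions, not the paper's pricing model.

# Illustrative backward-induction sketch of deadline-constrained pricing for a
# batch of crowdsourcing tasks. The acceptance-probability model, prices, and
# penalty for missing the deadline are assumptions, not the paper's model.
from math import comb

PRICES = [0.05, 0.10, 0.20]                 # assumed candidate per-task rewards
ACCEPT = {0.05: 0.2, 0.10: 0.5, 0.20: 0.8}  # assumed per-step pickup probability per task
TASKS, STEPS, PENALTY = 5, 4, 10.0          # 5 tasks, 4 time steps, penalty per unfinished task

# V[t][k] = minimal expected cost with k tasks left and t steps to the deadline.
V = [[0.0] * (TASKS + 1) for _ in range(STEPS + 1)]
policy = [[None] * (TASKS + 1) for _ in range(STEPS + 1)]
V[0] = [PENALTY * k for k in range(TASKS + 1)]   # unfinished tasks are penalized at the deadline

for t in range(1, STEPS + 1):
    for k in range(TASKS + 1):
        best = (0.0, None) if k == 0 else (float("inf"), None)
        for price in (PRICES if k else []):
            q, cost = ACCEPT[price], 0.0
            for done in range(k + 1):   # tasks completed this step ~ Binomial(k, q)
                prob = comb(k, done) * q**done * (1 - q) ** (k - done)
                cost += prob * (done * price + V[t - 1][k - done])
            best = min(best, (cost, price))
        V[t][k], policy[t][k] = best

print("expected total cost:", round(V[STEPS][TASKS], 3))
print("first-step price:", policy[STEPS][TASKS])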
Standard ad auction formats do not immediately extend to settings where multiple size configurations and layouts are available to advertisers. In these settings, the sale of web advertising space increasingly resembles a combinatorial auction with complementarities, where truthful auctions such as the Vickrey-Clarke-Groves (VCG) auction can yield unacceptably low revenue. We therefore study core-selecting auctions, which boost revenue by setting payments so that no group of agents, including the auctioneer, can jointly improve their utilities by switching to a different outcome. Our main result is a combinatorial algorithm that finds an approximate bidder-optimal core point with an almost-linear number of calls to the welfare maximization oracle. Our algorithm is faster than previously proposed heuristics in the literature and has theoretical guarantees. We conclude that core pricing is implementable even for very time-sensitive practical use cases such as real-time auctions for online advertising and can yield more revenue. We justify this claim experimentally using the Microsoft Bing Ad Auction data, through which we show our core pricing algorithm generates almost 26% more revenue than VCG on average, about 9% more revenue than other core pricing rules known in the literature, and almost matches the revenue of the standard Generalized Second Price (GSP) auction.
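
The classic two-item example below (standard in the core-selecting auction literature, not taken from the paper's data) shows why VCG revenue can collapse and what the core constraint demands; the short script computes the VCG payments by brute force.

# Worked toy example: items {A, B}; bidder 0 wants the bundle {A,B} for 10,
# bidders 1 and 2 want A and B alone for 10 each. VCG revenue collapses to 0,
# while the core forces total payments of at least bidder 0's losing bid of 10.
from itertools import chain, combinations

bids = {0: ({"A", "B"}, 10.0), 1: ({"A"}, 10.0), 2: ({"B"}, 10.0)}

def max_welfare(bidders):
    """Best total value over feasible (item-disjoint) sets of winning bids."""
    best = 0.0
    for subset in chain.from_iterable(combinations(bidders, r) for r in range(len(bidders) + 1)):
        items = [bids[i][0] for i in subset]
        if sum(len(s) for s in items) == len(set().union(*items, set())):   # disjoint bundles
            best = max(best, sum(bids[i][1] for i in subset))
    return best

winners = [1, 2]   # welfare-maximizing allocation, total value 20
for i in winners:
    others = [j for j in bids if j != i]
    vcg_payment = max_welfare(others) - (max_welfare(list(bids)) - bids[i][1])
    print(f"bidder {i} VCG payment: {vcg_payment}")   # 10 - 10 = 0 for both winners
# Core constraint: the winners' payments must total at least 10; otherwise the
# seller and bidder 0 could both do better by trading the bundle directly.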
A patient seller aims to sell a good to an impatient buyer (i.e., one who discounts utility over time). The buyer will remain in the market for a period of time $T$, and her private value is drawn from a publicly known distribution. What is the revenue-optimal pricing curve (sequence of (price, time) pairs) for the seller? Is randomization of help here? Is the revenue-optimal pricing curve computable in polynomial time? We answer these questions in this paper. We give an efficient algorithm for computing the revenue-optimal pricing curve. We show that pricing curves, which post a price at each point of time and let the buyer pick her utility-maximizing time to buy, are revenue-optimal among a much broader class of sequential lottery mechanisms: namely, mechanisms that allow the seller to post a menu of lotteries at each point of time cannot get any higher revenue than pricing curves. We also show that the even broader class of mechanisms that allow the menu of lotteries to be adaptively set can earn strictly higher revenue than that of pricing curves, and the revenue gap can be as big as the support size of the buyer's value distribution.
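
As a toy illustration of the pricing-curve problem, the sketch below brute-forces the revenue-maximizing two-period price sequence for a buyer with an assumed discount factor and a two-point value distribution; the horizon, price grid, and distribution are assumptions, and the paper's algorithm is more general and efficient.

# Illustrative brute-force sketch of a revenue-optimal pricing curve for a
# patient seller facing an impatient buyer. Horizon, discount factor, value
# distribution, and price grid are all assumptions chosen for the example.
from itertools import product

T = 2                                # buyer stays in the market for 2 periods
delta = 0.8                          # buyer's per-period discount factor
values = {4.0: 0.5, 10.0: 0.5}       # assumed two-point value distribution
grid = [0.5 * i for i in range(25)]  # candidate prices 0.0 .. 12.0

def revenue(curve):
    total = 0.0
    for v, prob in values.items():
        # Buyer picks the purchase time maximizing her discounted utility.
        utils = [(delta ** t) * (v - p) for t, p in enumerate(curve)]
        best_t = max(range(T), key=lambda t: utils[t])
        if utils[best_t] > 0:              # buys only if utility is positive
            total += prob * curve[best_t]  # seller is patient: revenue undiscounted
    return total

best_curve = max(product(grid, repeat=T), key=revenue)
print("optimal pricing curve:", best_curve, "expected revenue:", round(revenue(best_curve), 3))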