We consider a robust version of the revenue maximization problem, where a single seller wishes to sell $n$ items to a single unit-demand buyer. In this robust version, the seller knows the buyer's marginal value distribution for each item separately, but not the joint distribution, and prices the items to maximize revenue in the worst case over all compatible correlation structures. We devise a computationally efficient algorithm (polynomial in the support size of the marginals) that computes the worst-case joint distribution for any choice of item prices. Yet, in sharp contrast to the additive-buyer case (Carroll, 2017), we show that it is NP-hard to approximate the optimal choice of prices to within any factor better than $n^{1/2-\epsilon}$. For the special case of marginal distributions that satisfy the monotone hazard rate property, we show how to guarantee a constant fraction of the optimal worst-case revenue using item pricing; this pricing equates revenue across all possible correlations and can be computed efficiently.
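To make the pricing objective concrete, the following minimal Python sketch (our illustration, not the paper's algorithm; the function name and the index tie-breaking are assumptions) evaluates the seller's expected revenue under fixed item prices and one candidate joint distribution over value vectors, with the unit-demand buyer purchasing the item that maximizes value minus price.

```python
# Minimal illustration (not the paper's algorithm): expected revenue from a
# unit-demand buyer, given item prices and one candidate joint distribution
# over value vectors. Ties are broken by index, an assumption of this sketch.
def expected_revenue(prices, joint):
    """joint: dict mapping a value vector (one entry per item) to its probability."""
    revenue = 0.0
    for values, prob in joint.items():
        surpluses = [v - p for v, p in zip(values, prices)]
        best = max(range(len(prices)), key=lambda i: surpluses[i])
        if surpluses[best] >= 0:          # the buyer purchases only at non-negative surplus
            revenue += prob * prices[best]
    return revenue

# Two items; the worst-case correlation would be the joint distribution (with the
# given marginals) that minimizes this quantity.
print(expected_revenue([1.0, 2.0], {(1.5, 1.0): 0.5, (0.5, 3.0): 0.5}))
```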
A patient seller aims to sell a good to an impatient buyer (i.e., one who discounts utility over time). The buyer will remain in the market for a period of time $T$, and her private value is drawn from a publicly known distribution. What is the revenue-optimal pricing curve (sequence of (price, time) pairs) for the seller? Does randomization help? Is the revenue-optimal pricing curve computable in polynomial time? We answer these questions in this paper. We give an efficient algorithm for computing the revenue-optimal pricing curve. We show that pricing curves, which post a price at each point in time and let the buyer pick her utility-maximizing time to buy, are revenue-optimal among a much broader class of sequential lottery mechanisms: mechanisms that allow the seller to post a menu of lotteries at each point in time cannot obtain any higher revenue than pricing curves. We also show that the even broader class of mechanisms that allow the menu of lotteries to be set adaptively can earn strictly higher revenue than pricing curves, and the revenue gap can be as large as the support size of the buyer's value distribution.
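As a small illustration of the pricing-curve setting (our own sketch under assumed notation: geometric buyer discounting $\delta^t$, a discrete value distribution, and a patient seller who does not discount), the Python snippet below computes the buyer's utility-maximizing purchase time and the seller's resulting expected revenue.

```python
# Illustrative sketch only; discretized time, geometric discounting, and the
# variable names are assumptions of this example.
def buyer_purchase_time(value, curve, delta):
    """curve: posted prices p_0, ..., p_{T-1}; the buyer picks the time maximizing
    discounted surplus delta**t * (value - p_t), and buys only if it is positive."""
    best_t, best_u = None, 0.0
    for t, p in enumerate(curve):
        u = (delta ** t) * (value - p)
        if u > best_u:
            best_t, best_u = t, u
    return best_t  # None means the buyer never buys

def seller_revenue(curve, value_dist, delta):
    """value_dist: dict value -> probability. The seller collects p_t at the chosen time."""
    rev = 0.0
    for v, prob in value_dist.items():
        t = buyer_purchase_time(v, curve, delta)
        if t is not None:
            rev += prob * curve[t]
    return rev

print(seller_revenue([5.0, 3.0, 1.0], {6.0: 0.5, 2.0: 0.5}, delta=0.9))
```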
We study the power and limitations of posted prices in multi-unit markets, where agents arrive sequentially in an arbitrary order. We prove upper and lower bounds on the largest fraction of the optimal social welfare that can be guaranteed with posted prices, under a range of assumptions about the designer's information and the agents' valuations. Our results provide insights about the relative power of uniform and non-uniform prices, the relative difficulty of different valuation classes, and the implications of different informational assumptions. Among other results, we prove constant-factor guarantees for agents with (symmetric) subadditive valuations, even in an incomplete-information setting and with uniform prices.
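The following toy Python sketch (ours, for illustration only, not one of the paper's constructions) shows the basic posted-price dynamic in the simplest multi-unit case: identical units, a single uniform price, and unit-demand agents arriving in an arbitrary order.

```python
# Toy illustration: k identical units, one uniform posted price, unit-demand agents.
def posted_price_welfare(values_in_arrival_order, price, supply):
    """Each arriving agent buys one unit iff her value is at least the price and a
    unit remains; returns the achieved welfare (sum of served agents' values)."""
    welfare = 0.0
    for v in values_in_arrival_order:
        if supply > 0 and v >= price:
            welfare += v
            supply -= 1
    return welfare

values = [3.0, 10.0, 6.0, 1.0]
print(posted_price_welfare(values, price=5.0, supply=2))   # serves the agents valued 10 and 6
print(sum(sorted(values)[-2:]))                            # optimal welfare with 2 units
```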
Dynamic pricing is used to maximize the revenue of a firm over a finite-period planning horizon, given that the firm may not know the underlying demand curve a priori. In emerging markets, in particular, firms constantly adjust pricing strategies to collect adequate demand information, a process known as price experimentation. To date, few papers have investigated the pricing decision process in a competitive environment with unknown demand curves, conditions that make the analysis more complex. Asynchronous price updating can render the demand information gathered by price experimentation less informative or even inaccurate, since it is nearly impossible for firms to remain informed about the latest prices set by their competitors. Hence, firms may set prices using available, yet out-of-date, price information about competitors. In this paper, we design an algorithm to facilitate synchronized dynamic pricing, in which competing firms estimate their demand functions based on observations and adjust their pricing strategies in a prescribed manner; this process is called learning and earning elsewhere in the literature. The goal is for the pricing decisions, determined by estimated demand functions, to converge to the underlying equilibrium decisions. The main question we answer is whether such a mechanism of periodically synchronized price updates is optimal for all firms. Furthermore, we ask whether prices converge to a stable state and how much regret firms incur by employing such a data-driven pricing algorithm.
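The snippet below is a generic learning-and-earning loop, sketched under assumptions of our own (two firms, a symmetric linear demand model, least-squares estimation, and myopic best responses); it is meant only to convey the flavor of synchronized price experimentation and updating, not the algorithm proposed in the paper.

```python
# Generic sketch: each firm experiments with random prices, then in synchronized
# rounds refits an assumed linear demand d_i = a - b*p_i + c*p_j by least squares
# and best-responds to the competitor's last posted price.
import numpy as np

rng = np.random.default_rng(0)
a, b, c = 10.0, 2.0, 0.5                      # true demand parameters, unknown to the firms
prices = np.array([3.0, 4.0])
history = [[], []]                            # per-firm observations (own price, rival price, demand)

for t in range(60):
    if t < 10:                                # price-experimentation phase
        prices = rng.uniform(1.0, 5.0, size=2)
    demands = [a - b * prices[i] + c * prices[1 - i] + rng.normal(0, 0.1)
               for i in range(2)]
    for i in range(2):
        history[i].append((prices[i], prices[1 - i], demands[i]))
    if t >= 10:                               # synchronized estimate-and-update phase
        new_prices = prices.copy()
        for i in range(2):
            X = np.array([[1.0, -p, q] for p, q, _ in history[i]])
            y = np.array([d for _, _, d in history[i]])
            a_hat, b_hat, c_hat = np.linalg.lstsq(X, y, rcond=None)[0]
            # Best response maximizing estimated revenue p * (a_hat - b_hat*p + c_hat*p_rival).
            new_prices[i] = (a_hat + c_hat * prices[1 - i]) / (2 * b_hat)
        prices = new_prices                   # both firms switch prices at the same time

print(prices)  # approaches the equilibrium a / (2*b - c) of the true demand system
```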
We study online pricing algorithms for the Bayesian selection problem with production constraints and its generalization to the laminar matroid Bayesian online selection problem. Consider a firm producing (or receiving) multiple copies of different product types over time. The firm can offer the products to arriving buyers, where each buyer is interested in one product type and has a private valuation drawn independently from a possibly different but known distribution. Our goal is to find an adaptive pricing for serving the buyers that maximizes the expected social welfare (or revenue) subject to two constraints. First, at any time the total number of sold items of each type is no more than the number of items produced so far. Second, the total number of sold items does not exceed the total shipping capacity. This problem is a special case of the well-known matroid Bayesian online selection problem studied by [Kleinberg and Weinberg, 2012], in which the underlying matroid is laminar. We give the first Polynomial-Time Approximation Scheme (PTAS) for the above problem, as well as for its generalization to the laminar matroid Bayesian online selection problem when the depth of the laminar family is bounded by a constant. Our approach is based on rounding the solution of a hierarchy of linear programming relaxations that systematically strengthen the commonly used ex-ante linear programming formulation of these problems and approximate the optimum online solution to any degree of accuracy. Our rounding algorithm respects the relaxed constraints of higher levels of the laminar tree only in expectation, and exploits the negative dependence of the selection rule at lower levels to achieve the concentration required to guarantee feasibility with high probability.
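For concreteness, one common way to write the ex-ante relaxation for the two-level special case (per-type caps $k_j$ and a total shipping capacity $K$), in notation that is ours rather than the paper's and ignoring the time-varying production constraints for brevity: let $x_{t,v}$ be the ex-ante probability that buyer $t$ has value $v$ and is served, with $f_t$ the probability mass function of her value and $j(t)$ her product type. The welfare version reads
\[
\max \ \sum_{t}\sum_{v} v\, x_{t,v}
\quad \text{s.t.} \quad
0 \le x_{t,v} \le f_t(v)\ \ \forall t,v, \qquad
\sum_{t:\, j(t)=j}\sum_{v} x_{t,v} \le k_j\ \ \forall j, \qquad
\sum_{t}\sum_{v} x_{t,v} \le K .
\]
The hierarchy described above strengthens this basic relaxation level by level along the laminar tree.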
In 1979, Hylland and Zeckhauser \cite{hylland} gave a simple and general scheme for implementing a one-sided matching market using the power of a pricing mechanism. Their method has nice properties -- it is incentive compatible in the large and produces an allocation that is Pareto optimal -- and hence it provides an attractive, off-the-shelf method for running an application involving such a market. With matching markets becoming ever more prevalent and impactful, it is imperative to finally settle the computational complexity of this scheme. We present the following partial resolution: 1. A combinatorial, strongly polynomial time algorithm for the special case of $0/1$ utilities. 2. An example that has only irrational equilibria, hence proving that this problem is not in PPAD. Furthermore, its equilibria are disconnected, hence showing that the problem does not admit a convex programming formulation. 3. A proof of membership of the problem in the class FIXP. We leave open the (difficult) question of determining if the problem is FIXP-hard. Settling the status of the special case when utilities are in the set $\{0, \frac{1}{2}, 1\}$ appears to be even more difficult.
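For reference, the Hylland-Zeckhauser equilibrium can be stated (in commonly used notation, not necessarily the paper's) as follows: with $n$ agents and $n$ goods, a unit budget of artificial money per agent, prices $p_j \ge 0$, and a fractional allocation $x_{ij}$, each agent's bundle must maximize her expected utility among affordable lotteries,
\[
x_i \in \arg\max \Big\{ \sum_j u_{ij}\, y_{j} \;:\; \sum_j p_j\, y_{j} \le 1,\ \ \sum_j y_{j} = 1,\ \ y \ge 0 \Big\},
\]
with ties broken toward a minimum-cost such bundle, and the market must clear, i.e., $\sum_i x_{ij} = 1$ for every good $j$. The results above concern the complexity of computing such prices and allocations.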