
Multidimensional Dynamic Pricing for Welfare Maximization

Posted by: Zhiwei Steven Wu
Publication date: 2016
Research field: Information Engineering
Paper language: English





We study the problem of a seller dynamically pricing $d$ distinct types of indivisible goods, when faced with the online arrival of unit-demand buyers drawn independently from an unknown distribution. The goods are not in limited supply, but can only be produced at a limited rate and are costly to produce. The seller observes only the bundle of goods purchased each day, but nothing else about the buyer's valuation function. Our main result is a dynamic pricing algorithm for optimizing welfare (including the seller's cost of production) that runs in time, and a number of rounds, polynomial in $d$ and the approximation parameter. We are able to do this despite the fact that (i) the price-response function is not continuous, and even its fractional relaxation is a non-concave function of the prices, and (ii) the welfare is not observable to the seller. We derive this result as an application of a general technique for optimizing welfare over \emph{divisible} goods, which is of independent interest. When buyers have strongly concave, Hölder continuous valuation functions over $d$ divisible goods, we give a general polynomial-time dynamic pricing technique. We are able to apply this technique to the setting of unit-demand buyers despite the fact that in that setting the goods are not divisible, and the natural fractional relaxation of a unit-demand valuation is not strongly concave. In order to apply our general technique, we introduce a novel price randomization procedure which has the effect of implicitly inducing buyers to regularize their valuations with a strongly concave function. Finally, we also extend our results to a limited-supply setting in which the number of copies of each good cannot be replenished.
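As a rough illustration of the pricing loop described above, here is a minimal Python sketch, assuming a toy unit-demand buyer model, a hypothetical per-good production-rate target, and a simple tatonnement-style update over randomized posted prices. It is not the paper's algorithm, which uses a carefully designed price randomization procedure and a convergence analysis; all constants below are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 3        # number of good types
T = 5000     # number of rounds / buyers
eta = 0.05   # price step size (assumed)
sigma = 0.1  # scale of the random price perturbation (assumed)

def buyer_demand(prices):
    """Toy unit-demand buyer: buys the single good maximizing v_i - p_i,
    with fresh private values each round (unknown to the seller)."""
    values = rng.uniform(0.0, 1.0, size=d)
    utilities = values - prices
    best = int(np.argmax(utilities))
    bundle = np.zeros(d)
    if utilities[best] > 0:  # buy only at positive utility
        bundle[best] = 1.0
    return bundle

target_rate = np.full(d, 0.3)  # assumed production-rate cap per good
p = np.full(d, 0.5)            # initial posted prices

for t in range(T):
    posted = np.clip(p + rng.normal(0.0, sigma, size=d), 0.0, 1.0)
    bundle = buyer_demand(posted)  # the seller observes only this bundle
    # Tatonnement-style heuristic: raise the price of over-demanded
    # goods, lower it for under-demanded ones.
    p = np.clip(p + eta * (bundle - target_rate), 0.0, 1.0)

print("final base prices:", np.round(p, 3))
```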




Read also

In this paper we study the fundamental problems of maximizing a continuous non-monotone submodular function over the hypercube, both with and without coordinate-wise concavity. This family of optimization problems has several applications in machine learning, economics, and communication systems. Our main result is the first $\frac{1}{2}$-approximation algorithm for continuous submodular function maximization; this approximation factor of $\frac{1}{2}$ is the best possible for algorithms that only query the objective function at polynomially many points. For the special case of DR-submodular maximization, i.e. when the submodular function is also coordinate-wise concave along all coordinates, we provide a different $\frac{1}{2}$-approximation algorithm that runs in quasilinear time. Both of these results improve upon prior work [Bian et al., 2017; Soma and Yoshida, 2017]. Our first algorithm uses novel ideas such as reducing the guaranteed approximation problem to analyzing a zero-sum game for each coordinate, and incorporates the geometry of this zero-sum game to fix the value at this coordinate. Our second algorithm exploits coordinate-wise concavity to identify a monotone equilibrium condition sufficient for getting the required approximation guarantee, and hunts for the equilibrium point using binary search. We further run experiments to verify the performance of our proposed algorithms in related machine learning applications.
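To illustrate the binary-search idea in the DR-submodular case, here is a minimal Python sketch, assuming a toy coordinate-wise concave objective and a numerically estimated derivative: along each coordinate the derivative is non-increasing, so its zero crossing can be located by binary search. The paper's equilibrium condition and approximation guarantee are more subtle than this per-coordinate pass.

```python
import numpy as np

def partial_derivative(f, x, i, h=1e-5):
    """Central-difference estimate of df/dx_i, clipped to [0,1]^n."""
    e = np.zeros_like(x); e[i] = h
    return (f(np.clip(x + e, 0, 1)) - f(np.clip(x - e, 0, 1))) / (2 * h)

def coordinate_binary_search(f, x, i, tol=1e-6):
    """Binary search for the zero crossing of the (non-increasing)
    derivative along coordinate i, holding the other coordinates fixed."""
    lo, hi = 0.0, 1.0
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        x[i] = mid
        if partial_derivative(f, x, i) > 0:
            lo = mid   # derivative still positive: move right
        else:
            hi = mid   # derivative non-positive: move left
    x[i] = 0.5 * (lo + hi)
    return x

# Toy non-monotone, coordinate-wise concave submodular objective.
f = lambda x: np.sum(x) - np.sum(x * x) - 0.5 * x[0] * x[1]

x = np.full(3, 0.5)
for i in range(len(x)):
    x = coordinate_binary_search(f, x, i)
print("point found:", np.round(x, 4), "value:", round(float(f(x)), 4))
```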
The prevalence of e-commerce has made detailed customers' personal information readily accessible to retailers, and this information has been widely used in pricing decisions. When involving personalized information, how to protect its privacy becomes a critical issue in practice. In this paper, we consider a dynamic pricing problem over $T$ time periods with an \emph{unknown} demand function of posted price and personalized information. At each time $t$, the retailer observes an arriving customer's personal information and offers a price. The customer then makes the purchase decision, which will be utilized by the retailer to learn the underlying demand function. There is potentially a serious privacy concern during this process: a third-party agent might infer the personalized information and purchase decisions from the price changes of the pricing system. Using the fundamental framework of differential privacy from computer science, we develop a privacy-preserving dynamic pricing policy, which tries to maximize the retailer's revenue while avoiding leakage of individual customers' information and purchasing decisions. To this end, we first introduce a notion of \emph{anticipating} $(\varepsilon, \delta)$-differential privacy that is tailored to the dynamic pricing problem. Our policy achieves both the privacy guarantee and a performance guarantee in terms of regret. Roughly speaking, for $d$-dimensional personalized information, our algorithm achieves expected regret of order $\tilde{O}(\varepsilon^{-1} \sqrt{d^3 T})$ when the customers' information is adversarially chosen. For stochastic personalized information, the regret bound can be further improved to $\tilde{O}(\sqrt{d^2 T} + \varepsilon^{-2} d^2)$.
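As a sketch of the Laplace-noise building block that underlies such privacy-preserving policies (not the paper's anticipating $(\varepsilon, \delta)$-DP policy: the pricing rule, demand model, constants, and privacy accounting below are all assumptions), any statistic computed from individual purchase decisions is released only through calibrated noise:

```python
import numpy as np

rng = np.random.default_rng(1)
T, epsilon = 1000, 1.0  # rounds and an assumed privacy parameter
purchases = 0.0

for t in range(1, T + 1):
    # Release the running purchase count through Laplace noise (the
    # sensitivity of a count to one customer is 1). Composition across
    # rounds is ignored here; the paper's policy accounts for it.
    noisy_rate = (purchases + rng.laplace(0.0, 1.0 / epsilon)) / t
    price = float(np.clip(1.0 - noisy_rate, 0.1, 1.0))  # toy pricing rule
    value = rng.uniform(0.0, 1.0)       # customer's private valuation
    purchases += float(value >= price)  # purchase decision

print("final posted price:", round(price, 3))
```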
We study the design of multi-item mechanisms that maximize expected profit with respect to a distribution over buyers' values. In practice, a full description of the distribution is typically unavailable. Therefore, we study the setting where the designer only has samples from the distribution and the goal is to find a high-profit mechanism within a class of mechanisms. If the class is complex, a mechanism may have high average profit over the samples but low expected profit. This raises the question: how many samples are sufficient to ensure that a mechanism's average profit is close to its expected profit? To answer this question, we uncover structure shared by many pricing, auction, and lottery mechanisms: for any set of buyers' values, profit is piecewise linear in the mechanism's parameters. Using this structure, we prove new bounds for mechanism classes not yet studied in the sample-based mechanism design literature and match or improve over the best known guarantees for many classes. Finally, we provide tools for optimizing an important tradeoff: more complex mechanisms typically have higher average profit over the samples than simpler mechanisms, but more samples are required to ensure that average profit nearly matches expected profit.
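The piecewise-linear structure is easy to see in the simplest case of a single posted price: with $n$ sampled buyer values, average profit as a function of the price changes slope only at the samples, so the empirical optimum is attained at a sample point. A minimal Python sketch with synthetic values (the distribution and cost are assumptions):

```python
import numpy as np

rng = np.random.default_rng(2)
samples = rng.lognormal(mean=0.0, sigma=0.5, size=200)  # sampled values
cost = 0.2                                              # assumed unit cost

def avg_profit(price):
    """Average profit over the samples: buyers with value >= price buy."""
    return np.mean((price - cost) * (samples >= price))

# Profit is piecewise linear in the price, so it suffices to check the
# sampled values themselves as candidate prices.
best_price = max(samples, key=avg_profit)
print("empirical profit-optimal price:", round(float(best_price), 3),
      "avg profit:", round(float(avg_profit(best_price)), 3))
```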
Several behavioral, social, and public health interventions, such as suicide/HIV prevention or community preparedness against natural disasters, leverage social network information to maximize outreach. Algorithmic influence maximization techniques have been proposed to aid with the choice of peer leaders or influencers in such interventions. Yet, traditional algorithms for influence maximization have not been designed with these interventions in mind. As a result, they may disproportionately exclude minority communities from the benefits of the intervention. This has motivated research on fair influence maximization. Existing techniques come with two major drawbacks. First, they require committing to a single fairness measure. Second, these measures are typically imposed as strict constraints, leading to undesirable properties such as wastage of resources. To address these shortcomings, we provide a principled characterization of the properties that a fair influence maximization algorithm should satisfy. In particular, we propose a framework based on social welfare theory, wherein the cardinal utilities derived by each community are aggregated using the isoelastic social welfare functions. Under this framework, the trade-off between fairness and efficiency can be controlled by a single inequality aversion design parameter. We then show under what circumstances our proposed principles can be satisfied by a welfare function. The resulting optimization problem is monotone and submodular and can be solved efficiently with optimality guarantees. Our framework encompasses as special cases leximin and proportional fairness. Extensive experiments on synthetic and real world datasets including a case study on landslide risk management demonstrate the efficacy of the proposed framework.
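A minimal Python sketch of the welfare-aggregation idea, assuming a toy coverage-style influence model and hypothetical constants: each community $c$ derives a utility $u_c(S)$ from seed set $S$, and seeds are chosen greedily to maximize the isoelastic welfare $\sum_c u_c(S)^{1-a}/(1-a)$, with $a \ge 0$ the inequality-aversion parameter ($a = 0$ is utilitarian; $a \to 1$ recovers proportional fairness).

```python
import numpy as np

rng = np.random.default_rng(3)
n_nodes, n_comm, budget, a = 50, 3, 5, 0.8  # assumed sizes and aversion
community = rng.integers(0, n_comm, size=n_nodes)
# reach[v]: nodes influenced if v is seeded (toy stand-in for spread).
reach = [set(rng.choice(n_nodes, size=5, replace=False).tolist()) | {v}
         for v in range(n_nodes)]

def welfare(seeds):
    covered = set().union(*(reach[v] for v in seeds)) if seeds else set()
    utilities = np.array([sum(int(community[u]) == c for u in covered) + 1e-9
                          for c in range(n_comm)])
    return float(np.sum(utilities ** (1 - a)) / (1 - a))

# Standard greedy, which carries guarantees when the welfare objective
# is monotone and submodular (as the paper establishes in its setting).
seeds = set()
for _ in range(budget):
    best = max((v for v in range(n_nodes) if v not in seeds),
               key=lambda v: welfare(seeds | {v}))
    seeds.add(best)

print("chosen seed set:", sorted(seeds))
```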
We study the problem of matching agents who arrive at a marketplace over time and leave after $d$ time periods. Agents can only be matched while they are present in the marketplace. Each pair of agents can yield a different match value, and the planner's goal is to maximize the total value over a finite time horizon. We study matching algorithms that perform well over any sequence of arrivals when there is no a priori information about the match values or arrival times. Our main contribution is a 1/4-competitive algorithm. The algorithm randomly selects a subset of agents who will wait until right before their departure to get matched, and maintains a maximum-weight matching with respect to the other agents. The primal-dual analysis of the algorithm hinges on a careful comparison between the initial dual value associated with an agent when it first arrives, and the final value after $d$ time steps. It is also shown that no algorithm is 1/2-competitive. We extend the model to the case in which departure times are drawn i.i.d. from a distribution with non-decreasing hazard rate, and establish a 1/8-competitive algorithm in this setting. Finally, we show on real-world data that a modified version of our algorithm performs well in practice.
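A simplified Python sketch of the wait/match split, assuming random match weights and a greedy stand-in for the maximum-weight matching the paper maintains; the coin flip, window length, and values are all toy choices, and no competitive guarantee is claimed for this version.

```python
import random

random.seed(4)
d, T = 3, 20   # sojourn length and horizon (toy values)
weights = {}

def value(i, j):
    """Symmetric random match value, fixed on first query."""
    key = (min(i, j), max(i, j))
    if key not in weights:
        weights[key] = random.random()
    return weights[key]

present, matched = {}, {}  # agent -> (arrival, waits); agent -> partner
for t in range(T):
    # Departures: agents present for d periods leave now; a "waiting"
    # agent tries to match to its best available neighbor just before.
    for i in [a for a, (arr, _) in present.items() if t - arr >= d]:
        _, waits = present.pop(i)
        if waits and i not in matched:
            cands = [j for j in present if j not in matched]
            if cands:
                j = max(cands, key=lambda j: value(i, j))
                matched[i], matched[j] = j, i
    # Arrival of agent t: a coin flip decides whether it waits.
    waits = random.random() < 0.5
    present[t] = (t, waits)
    if not waits:
        # Greedy stand-in for the matching over non-waiting agents.
        cands = [j for j in present
                 if j != t and j not in matched and not present[j][1]]
        if cands:
            j = max(cands, key=lambda j: value(t, j))
            matched[t], matched[j] = j, t

print("pairs matched:", len(matched) // 2)
```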
