Dynamic pricing is used to maximize the revenue of a firm over a finite-period planning horizon, given that the firm may not know the underlying demand curve a priori. In emerging markets, in particular, firms constantly adjust pricing strategies to collect adequate demand information, a process known as price experimentation. To date, few papers have investigated the pricing decision process in a competitive environment with unknown demand curves, conditions that make analysis more complex. Asynchronous price updating can render the demand information gathered by price experimentation less informative or inaccurate, as it is nearly impossible for firms to remain informed about the latest prices set by competitors. Hence, firms may set prices using available but out-of-date competitor price information. In this paper, we design an algorithm to facilitate synchronized dynamic pricing, in which competing firms estimate their demand functions from observations and adjust their pricing strategies in a prescribed manner; this process is known in the literature as learning and earning. The goal is for the pricing decisions, determined by the estimated demand functions, to converge to the underlying equilibrium decisions. The main question that we answer is whether such a mechanism of periodically synchronized price updates is optimal for all firms. Furthermore, we ask whether prices converge to a stable state and how much regret firms incur by employing such a data-driven pricing algorithm.
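To make the learning-and-earning idea concrete, here is a minimal sketch (not the paper's algorithm) in which two firms with an assumed linear demand form repeatedly observe demand at the current synchronized prices, re-estimate their own demand curves by least squares, and post best-response prices against the competitor's last observed price. The demand parameters and the short experimentation phase are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed linear demand for firm i: d_i = a_i - b_i * p_i + c_i * p_j + noise
TRUE = {0: (100.0, 2.0, 0.5), 1: (120.0, 2.5, 0.6)}

def demand(i, p_own, p_other):
    a, b, c = TRUE[i]
    return a - b * p_own + c * p_other + rng.normal(0.0, 1.0)

prices = np.array([20.0, 20.0])
history = {0: [], 1: []}   # rows: [1, p_own, p_other, observed demand]

for t in range(200):
    # Synchronized round: both firms observe demand at the current prices,
    # re-estimate their demand curves, then post new prices simultaneously.
    observed = [demand(i, prices[i], prices[1 - i]) for i in (0, 1)]
    for i in (0, 1):
        history[i].append([1.0, prices[i], prices[1 - i], observed[i]])

    new_prices = prices.copy()
    for i in (0, 1):
        if t < 5:                                   # brief price-experimentation phase
            new_prices[i] = prices[i] + rng.normal(0.0, 2.0)
            continue
        H = np.array(history[i])
        coef = np.linalg.lstsq(H[:, :3], H[:, 3], rcond=None)[0]
        a_hat, b_hat, c_hat = coef[0], max(-coef[1], 1e-3), coef[2]
        # Best response to the competitor's last posted price:
        # argmax_p  p * (a_hat - b_hat * p + c_hat * p_other)
        new_prices[i] = (a_hat + c_hat * prices[1 - i]) / (2.0 * b_hat)
    prices = new_prices

print("prices after synchronized learning:", np.round(prices, 2))
```

Under this toy model the synchronized best responses settle near the Nash prices of the estimated demand system; the paper asks when such convergence can be guaranteed and at what regret.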
We consider a robust version of the revenue maximization problem, where a single seller wishes to sell $n$ items to a single unit-demand buyer. In this robust version, the seller knows the buyer's marginal value distribution for each item separately, but not the joint distribution, and prices the items to maximize revenue in the worst case over all compatible correlation structures. We devise a computationally efficient (polynomial in the support size of the marginals) algorithm that computes the worst-case joint distribution for any choice of item prices. Yet, in sharp contrast to the additive buyer case (Carroll, 2017), we show that it is NP-hard to approximate the optimal choice of prices to within any factor better than $n^{1/2-\epsilon}$. For the special case of marginal distributions that satisfy the monotone hazard rate property, we show how to guarantee a constant fraction of the optimal worst-case revenue using item pricing; this pricing equates revenue across all possible correlations and can be computed efficiently.
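For intuition, the following is a brute-force sketch of the inner problem for fixed prices: a linear program over the full product support that finds the revenue-minimizing joint distribution consistent with given marginals for a unit-demand buyer. Because it enumerates all value profiles it is exponential in $n$, so it is not the paper's polynomial-time algorithm; the marginals, prices, and tie-breaking rule are illustrative assumptions.

```python
import itertools
import numpy as np
from scipy.optimize import linprog

# Toy marginals (support values, probabilities) per item and fixed item prices -- assumptions
marginals = [
    ([1.0, 3.0], [0.5, 0.5]),
    ([2.0, 4.0], [0.6, 0.4]),
]
prices = [2.0, 3.0]

profiles = list(itertools.product(*[vals for vals, _ in marginals]))  # all value profiles

def revenue(profile):
    # unit-demand buyer purchases the utility-maximizing item if its utility is nonnegative
    utils = [v - p for v, p in zip(profile, prices)]
    best = max(range(len(prices)), key=lambda j: utils[j])
    return prices[best] if utils[best] >= 0 else 0.0

c = np.array([revenue(pr) for pr in profiles])     # objective: minimize expected revenue

# Equality constraints: the joint distribution must reproduce every marginal probability.
A_eq, b_eq = [], []
for j, (vals, probs) in enumerate(marginals):
    for v, q in zip(vals, probs):
        A_eq.append([1.0 if pr[j] == v else 0.0 for pr in profiles])
        b_eq.append(q)

res = linprog(c, A_eq=np.array(A_eq), b_eq=np.array(b_eq), bounds=(0.0, 1.0))
print("worst-case revenue at these prices:", round(res.fun, 4))
```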
This paper studies, by comparing three potential energy trading systems, the feasibility of integrating a community energy storage (CES) device with consumer-owned photovoltaic (PV) systems for demand-side management of a residential neighborhood area network. We consider a fully competitive CES operator in a non-cooperative Stackelberg game, a benevolent CES operator that has socially favorable regulations with competitive users, and a centralized cooperative CES operator that minimizes the total community energy cost. In the former two game-theoretic systems, the CES operator first maximizes its revenue by setting a price signal and trading energy with the grid. Then the users with PV panels, following the actions of the CES operator, play a non-cooperative repeated game to trade energy with the CES device and the grid so as to minimize their energy costs. The centralized CES operator cooperates with the users to minimize the total community energy cost without appropriate incentives. The non-cooperative Stackelberg game with the fully competitive CES operator has a unique Stackelberg equilibrium at which the CES operator maximizes revenue and users obtain unique Pareto-optimal Nash equilibrium CES energy trading strategies. Extensive simulations show that the fully competitive CES model gives the best trade-off between the operating objectives of the CES operator and the users.
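As a toy illustration of the Stackelberg structure (the operator leads with a CES price; users follow by reaching a Nash equilibrium), the sketch below uses an assumed congestion-style user cost and a grid search for the operator. It is not the paper's CES model or its equilibrium characterization; the grid price, congestion coefficient, and net loads are assumptions.

```python
import numpy as np

grid_price = 0.30                            # $/kWh from the grid (assumed)
alpha = 0.02                                 # assumed congestion/degradation coefficient
net_load = np.array([5.0, 8.0, 6.0, 4.0])    # users' net demand after PV (assumed, kWh)

def user_equilibrium(ces_price, iters=200):
    """Iterated best response: user i buys x_i kWh from the CES, the rest from the grid.
    cost_i = ces_price*x_i + grid_price*(net_i - x_i) + alpha*x_i*sum_j x_j."""
    x = np.zeros_like(net_load)
    for _ in range(iters):
        for i in range(len(x)):
            others = x.sum() - x[i]
            best = (grid_price - ces_price - alpha * others) / (2.0 * alpha)
            x[i] = np.clip(best, 0.0, net_load[i])
    return x

# Leader's problem: search over CES prices, anticipating the followers' equilibrium.
best_price, best_rev = None, -np.inf
for p in np.linspace(0.05, grid_price, 51):
    x = user_equilibrium(p)
    rev = p * x.sum()
    if rev > best_rev:
        best_price, best_rev = p, rev

print(f"Stackelberg CES price ~ {best_price:.3f}, operator revenue ~ {best_rev:.2f}")
```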
The prevalence of e-commerce has made detailed customers' personal information readily accessible to retailers, and this information has been widely used in pricing decisions. When personalized information is involved, how to protect its privacy becomes a critical issue in practice. In this paper, we consider a dynamic pricing problem over $T$ time periods with an \emph{unknown} demand function of posted price and personalized information. At each time $t$, the retailer observes an arriving customer's personal information and offers a price. The customer then makes the purchase decision, which will be utilized by the retailer to learn the underlying demand function. There is potentially a serious privacy concern during this process: a third-party agent might infer personal information and purchase decisions from the price changes of the pricing system. Using the fundamental framework of differential privacy from computer science, we develop a privacy-preserving dynamic pricing policy, which tries to maximize the retailer's revenue while avoiding leakage of individual customers' information and purchasing decisions. To this end, we first introduce a notion of \emph{anticipating} $(\varepsilon, \delta)$-differential privacy that is tailored to the dynamic pricing problem. Our policy achieves both the privacy guarantee and the performance guarantee in terms of regret. Roughly speaking, for $d$-dimensional personalized information, our algorithm achieves an expected regret of order $\tilde{O}(\varepsilon^{-1} \sqrt{d^3 T})$ when the customers' information is adversarially chosen. For stochastic personalized information, the regret bound can be further improved to $\tilde{O}(\sqrt{d^2 T} + \varepsilon^{-2} d^2)$.
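As a rough illustration of the privacy side, the sketch below applies the standard Laplace mechanism to the sufficient statistics of a linear purchase model and then prices from the privately fitted demand curve. This is a generic $\varepsilon$-differential-privacy sketch, not the paper's anticipating $(\varepsilon, \delta)$-differentially private policy; the synthetic data, clipping bounds, and demand form are assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic interactions (assumptions): personal features and purchase indicators in [0, 1]
n, d, epsilon = 5000, 3, 1.0
X = rng.uniform(0.0, 1.0, size=(n, d))               # clipped personal information
p = rng.uniform(0.2, 1.0, size=n)                     # posted prices (assumed scale)
buy = (rng.uniform(size=n) < 0.9 - p + 0.2 * X.mean(axis=1)).astype(float)

Z = np.column_stack([X, p, np.ones(n)])               # design matrix [features, price, 1]

# Laplace mechanism on the sufficient statistics Z'Z and Z'y: with all entries in [0, 1],
# one customer's contribution has L1 norm at most k^2 + k, so adding Laplace noise with
# scale sensitivity/epsilon makes the released statistics epsilon-differentially private.
k = Z.shape[1]
sensitivity = k * k + k
ZtZ = Z.T @ Z + rng.laplace(0.0, sensitivity / epsilon, size=(k, k))
Zty = Z.T @ buy + rng.laplace(0.0, sensitivity / epsilon, size=k)

theta = np.linalg.solve(ZtZ + 1e-3 * np.eye(k), Zty)  # privately fitted linear purchase model

# Revenue-maximizing price for an "average" customer under the fitted demand a - b * price
a = theta[k - 1] + theta[:d] @ np.full(d, 0.5)
b = max(-theta[d], 1e-3)
print("privately estimated price:", round(a / (2.0 * b), 3))
```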
Demand response (DR) is not only a crucial solution for demand-side management but also a vital means for the electricity market to maintain power grid reliability, sustainability, and stability. DR can enable consumers (e.g., data centers) to reduce their electricity consumption when the supply of electricity falls short. The consumers are rewarded under DR if they reduce or shift some of their energy usage during peak hours. Aiming to improve the efficiency of DR, in this paper we present MEDR, a mechanism for emergency DR in colocation data centers. First, we formalize the MEDR problem and propose a dynamic programming algorithm to solve the optimization version of the problem. We then design a deterministic mechanism to solve the MEDR problem. We show that our proposed mechanism is truthful. Next, we prove that our mechanism is an FPTAS, i.e., it approximates the optimum within a factor of $1 + \epsilon$ for any given $\epsilon > 0$, while its running time is polynomial in $n$ and $1/\epsilon$, where $n$ is the number of tenants in the data center. Furthermore, we give an auction system that uses the efficient FPTAS algorithm as its bidding decision program for DR in colocation data centers. Finally, we use a practical smart grid dataset to build a large number of simulation datasets for performance evaluation. By evaluating the approximation ratio of our mechanism, the non-negative utility of tenants, and the social cost of the colocation data center, the results demonstrate the effectiveness of our work.
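For intuition, the pseudo-polynomial dynamic program below sketches a covering problem of the kind an emergency-DR mechanism might solve: select tenants' offers (energy reduction, cost) so that the total reduction meets a required amount at minimum total cost. The paper's FPTAS additionally rounds/scales values to obtain the $(1+\epsilon)$ guarantee and pairs the algorithm with a truthful payment rule; the offer data here are illustrative.

```python
# Minimal DP sketch (assumed problem form, not the paper's exact formulation).
def min_cost_reduction(offers, R):
    """offers: list of (reduction in kWh, bid cost); R: required total reduction."""
    INF = float("inf")
    best = [INF] * (R + 1)          # best[x] = min cost to achieve at least x kWh of reduction
    best[0] = 0.0
    for r, c in offers:
        # iterate targets downward so each offer is used at most once (0/1 selection)
        for x in range(R, -1, -1):
            if best[x] < INF:
                nxt = min(x + r, R)
                if best[x] + c < best[nxt]:
                    best[nxt] = best[x] + c
    return best[R]

offers = [(3, 10.0), (5, 14.0), (2, 5.0), (4, 9.0)]   # toy (reduction, cost) bids
print(min_cost_reduction(offers, R=8))                 # -> 23.0 (offers of 5 and 4 kWh)
```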
We propose three different data-driven approaches for pricing European-style call options using supervised machine-learning algorithms. These approaches yield models that give a range of fair prices instead of a single price point. The performance of the models is tested on two stock market indices from the Indian equity market: NIFTY 50 and BANKNIFTY. Although neither historical nor implied volatility is used as an input, the results show that the trained models capture the option pricing mechanism better than, or comparably to, the Black-Scholes formula in all the experiments. Our choice of scale-free I/O allows us to train models using combined data of multiple different assets from a financial market. This not only allows the models to achieve far better generalization and predictive capability, but also alleviates the paucity of data, the primary limitation of using machine learning techniques. We also illustrate the performance of the trained models in the period leading up to the 2020 Stock Market Crash (Jan 2019 to April 2020).
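The sketch below illustrates the scale-free I/O idea with a generic regressor: inputs are dimensionless (moneyness $S/K$ and time to expiry) and the target is $C/K$, so data from different underlyings can be pooled. Synthetic Black-Scholes prices stand in for market option quotes purely to make the example runnable; this is not the paper's model, dataset, or choice of learner.

```python
import numpy as np
from scipy.stats import norm
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(2)

def bs_call(S, K, T, sigma, r=0.05):
    # Black-Scholes call price, used here only to generate stand-in "market" prices
    d1 = (np.log(S / K) + (r + 0.5 * sigma**2) * T) / (sigma * np.sqrt(T))
    d2 = d1 - sigma * np.sqrt(T)
    return S * norm.cdf(d1) - K * np.exp(-r * T) * norm.cdf(d2)

n = 20000
S = rng.uniform(80, 120, n)
K = rng.uniform(80, 120, n)
T = rng.uniform(0.05, 1.0, n)
sigma = rng.uniform(0.1, 0.4, n)              # hidden from the model, as in the abstract
C = bs_call(S, K, T, sigma)

X = np.column_stack([S / K, T])               # scale-free inputs, no volatility
y = C / K                                     # scale-free target

model = GradientBoostingRegressor(n_estimators=200, max_depth=4)
model.fit(X[:15000], y[:15000])
rmse = np.sqrt(np.mean((model.predict(X[15000:]) - y[15000:]) ** 2))
print("held-out RMSE (in units of strike):", round(rmse, 4))
```

Because the features and target are dimensionless, the same fitted model can in principle be applied to options on any underlying, which is what permits pooling data across assets.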