Online platforms collect rich information about participants and then share some of this information back with them to improve market outcomes. In this paper we study the following information disclosure problem in two-sided markets: If a platform wants to maximize revenue, which sellers should the platform allow to participate, and how much of its available information about participating sellers' quality should it share with buyers? We study this information disclosure problem in the context of two distinct two-sided market models: one in which the platform chooses prices and the sellers choose quantities (similar to ride-sharing), and one in which the sellers choose prices (similar to e-commerce). Our main results provide conditions under which simple information structures commonly observed in practice, such as banning certain sellers from the platform while not distinguishing between participating sellers, maximize the platform's revenue. An important innovation in our analysis is to transform the platform's information disclosure problem into a constrained price discrimination problem. We leverage this transformation to obtain our structural results.
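For concreteness, the sketch below compares revenue across simple information structures ("ban low-quality sellers and pool the rest" versus full disclosure) in a toy market; the quality distribution, buyer demand, and pricing rule are illustrative assumptions, not the paper's model.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy specification (assumed for illustration only): seller quality q ~ U[0,1];
# a unit mass of buyers with taste theta ~ U[0,1]; a buyer facing perceived quality m
# buys at price p iff theta * m >= p; the platform posts the revenue-maximizing price
# for each disclosed quality level.

def per_seller_revenue(m, prices=np.linspace(0.0, 1.0, 501)):
    """Max over p of p * Pr(theta * m >= p) for theta ~ U[0,1]."""
    demand = np.clip(1.0 - prices / max(m, 1e-12), 0.0, 1.0)
    return float(np.max(prices * demand))

qualities = rng.uniform(size=5_000)

def platform_revenue(threshold, pool=True):
    """Ban sellers with quality below `threshold`; optionally pool the rest."""
    admitted = qualities[qualities >= threshold]
    if admitted.size == 0:
        return 0.0
    if pool:  # buyers only learn "admitted", so perceived quality is the pooled mean
        return admitted.size * per_seller_revenue(admitted.mean())
    return sum(per_seller_revenue(q) for q in admitted)  # full disclosure

# With this linear toy specification, pooling happens to be (approximately)
# revenue-neutral; the paper's conditions concern when such simple structures
# are optimal more generally.
for t in (0.0, 0.3, 0.6):
    print(f"ban below {t:.1f}:  pooled {platform_revenue(t, True):8.1f}   "
          f"full disclosure {platform_revenue(t, False):8.1f}")
```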
Product cost heterogeneity across firms and customer loyalty models are two topics that have garnered limited attention in prior studies of competitive price discrimination. Costs are generally assumed negligible or equal across firms, and loyalty is modeled as an additive bias in customer valuations. We extend these treatments by considering cost asymmetry and a richer class of loyalty models in a game-theoretic model with two asymmetric firms. Here firms may incur different non-negligible product costs, and customers can have firm-specific loyalty levels. We characterize the effects of loyalty levels and the product cost difference on market outcomes such as prices, market shares, and profits. Our analysis and numerical simulations shed new light on the market equilibrium structures arising from the interplay between the product cost difference and loyalty levels.
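The sketch below illustrates the kind of equilibrium computation involved, assuming a logit-style demand with firm-specific loyalty bonuses and asymmetric costs; this specification and all parameter values are illustrative, not the paper's model.

```python
import numpy as np

# Two firms with different marginal costs c[i]; a fraction alpha of customers is
# loyal to firm 1 and the rest to firm 2; loyalty enters as a firm-specific utility
# bonus L[i] granted only to "their" firm (assumed logit-style demand).
c = np.array([0.20, 0.35])   # asymmetric product costs (assumption)
L = np.array([0.50, 0.30])   # firm-specific loyalty levels (assumption)
alpha = 0.6                  # share of customers loyal to firm 1
beta = 8.0                   # price sensitivity

def shares(p):
    """Market share of each firm, aggregated over the two loyalty segments."""
    s = np.zeros(2)
    for seg, weight in ((0, alpha), (1, 1 - alpha)):
        u = -beta * p
        u[seg] += L[seg]                      # loyalty bonus for the segment's firm
        w = np.exp(u - u.max())
        s += weight * w / w.sum()
    return s

def best_response(i, p, grid=np.linspace(0.0, 2.0, 801)):
    """Firm i's profit-maximizing price holding the rival's price fixed."""
    best_price, best_profit = p[i], -np.inf
    for cand in grid:
        q = p.copy(); q[i] = cand
        profit = (cand - c[i]) * shares(q)[i]
        if profit > best_profit:
            best_price, best_profit = cand, profit
    return best_price

p = np.array([1.0, 1.0])
for _ in range(50):          # iterate best responses to an (approximate) fixed point
    p_new = np.array([best_response(0, p), best_response(1, p)])
    if np.allclose(p_new, p, atol=1e-3):
        break
    p = p_new
print("equilibrium prices:", p, "   market shares:", shares(p))
```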
Motivated by the emergence of popular service-based two-sided markets where sellers can serve multiple buyers at the same time, we formulate and study the two-sided cost sharing problem. In two-sided cost sharing, sellers incur different costs for serving different subsets of buyers, and buyers have different values for being served by different sellers. Both buyers and sellers are self-interested agents whose values and costs are private information. We study the problem from the perspective of an intermediary platform that matches buyers to sellers and assigns prices and wages in an effort to maximize welfare (i.e., buyer values minus seller costs) subject to budget balance in an incentive-compatible manner. In our markets of interest, agents trade (often the same) services multiple times. Moreover, the value and cost of the same service differ with the context (e.g., location, urgency, weather conditions). In this framework, we design mechanisms that are efficient, ex-ante budget-balanced, ex-ante individually rational, dominant-strategy incentive compatible, and ex-ante in the core (a natural generalization of the core that we define here).
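As a point of reference, the following sketch computes only the efficiency benchmark (the welfare-maximizing assignment of buyers to sellers) by brute force on a made-up instance; the payments, budget balance, and incentive constraints handled by the mechanisms are not modeled here.

```python
from itertools import product

# Made-up instance: buyers' values for being served by each seller, and seller
# costs that depend on the subset of buyers served (assumed fixed + per-buyer cost).
buyers, sellers = ["b1", "b2", "b3"], ["s1", "s2"]
value = {("b1", "s1"): 5, ("b1", "s2"): 3,
         ("b2", "s1"): 4, ("b2", "s2"): 6,
         ("b3", "s1"): 2, ("b3", "s2"): 7}

def cost(seller, served):
    """Illustrative subset cost; any subset-dependent cost function could be used."""
    if not served:
        return 0
    fixed = {"s1": 3, "s2": 4}[seller]
    return fixed + 2 * len(served)

best_welfare, best_assignment = float("-inf"), None
# Each buyer is assigned to one seller or left unmatched (None).
for assignment in product(sellers + [None], repeat=len(buyers)):
    served = {s: [b for b, a in zip(buyers, assignment) if a == s] for s in sellers}
    welfare = sum(value[(b, a)] for b, a in zip(buyers, assignment) if a is not None)
    welfare -= sum(cost(s, frozenset(served[s])) for s in sellers)
    if welfare > best_welfare:
        best_welfare, best_assignment = welfare, dict(zip(buyers, assignment))

print("efficient assignment:", best_assignment, "   welfare:", best_welfare)
```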
We analyze statistical discrimination in hiring markets using a multi-armed bandit model. Myopic firms face workers arriving with heterogeneous observable characteristics. The association between workers' skills and characteristics is unknown ex ante; thus, firms need to learn it. Laissez-faire causes perpetual underestimation: minority workers are rarely hired, and therefore the underestimation of their skills tends to persist. Even a slight population-ratio imbalance frequently produces perpetual underestimation. We propose two policy solutions: a novel subsidy rule (the hybrid mechanism) and the Rooney Rule. Our results indicate that temporary affirmative-action policies effectively mitigate discrimination caused by insufficient data.
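The following is a minimal simulation sketch of the laissez-faire (myopic hiring) dynamic described above, illustrating how a pessimistic minority estimate can persist once hiring stops generating data about that group; the Gaussian skill model and all parameter values are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def run(T=2_000, pool=10, p_min=0.3, noise=0.5):
    true_mean = {"maj": 0.0, "min": 0.0}      # groups are equally skilled ex ante
    est, n = {}, {}
    # Initial data is proportional to group size, so the minority estimate is noisier.
    for g, n0 in (("maj", 20), ("min", 3)):
        samples = rng.normal(true_mean[g], noise, n0)
        est[g], n[g] = samples.mean(), n0
    minority_hires = 0
    for _ in range(T):
        k_min = rng.binomial(pool, p_min)     # minority candidates in this round's pool
        present = [g for g, ok in (("maj", k_min < pool), ("min", k_min > 0)) if ok]
        g = max(present, key=lambda x: est[x])  # myopic: hire from the better-estimated group
        outcome = rng.normal(true_mean[g], noise)
        n[g] += 1
        est[g] += (outcome - est[g]) / n[g]   # learning happens only through hires
        minority_hires += (g == "min")
    return minority_hires, est["min"]

results = [run() for _ in range(200)]
never = np.mean([h == 0 for h, _ in results])
print(f"minority never hired in {never:.0%} of runs; "
      f"average final minority estimate {np.mean([e for _, e in results]):+.3f} (true 0.0)")
```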
Two-sided matching platforms provide users with menus of match recommendations. To maximize the number of realized matches between the two sides (referred to here as customers and suppliers), the platform must balance the inherent tension between recommending more potential suppliers to each customer and avoiding potential collisions. We introduce a stylized model to study this trade-off. The platform offers each customer a menu of suppliers, and customers choose, simultaneously and independently, either a supplier from their menu or to remain unmatched. Suppliers then see the set of customers that have selected them, and choose either to match with one of these customers or to remain unmatched. A match occurs if a customer and a supplier choose each other (in sequence). Agents' choices are probabilistic, proportional to the public scores of the agents in their menu and to a score associated with remaining unmatched. The platform's problem is to construct menus for customers so as to maximize the expected number of matches. This problem is shown to be strongly NP-hard via a reduction from 3-Partition. We provide an efficient algorithm that achieves a constant-factor approximation to the expected number of matches.
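The sketch below estimates, by Monte Carlo, the expected number of matches for a given set of menus under the two-stage, score-proportional choice model described above; the instance (scores and menus) is made up, and the menu-optimization step itself is not included.

```python
import numpy as np

rng = np.random.default_rng(0)

cust_scores = np.array([1.0, 2.0, 1.5])        # public scores of customers
supp_scores = np.array([1.0, 1.0, 3.0, 0.5])   # public scores of suppliers
outside = 1.0                                   # score of remaining unmatched
menus = [[0, 2], [2, 3], [0, 1, 2]]             # supplier indices offered to each customer

def pick(scores):
    """Pick one index proportionally to scores, or None for 'remain unmatched'."""
    probs = np.append(scores, outside)
    probs = probs / probs.sum()
    choice = rng.choice(len(probs), p=probs)
    return None if choice == len(scores) else choice

def simulate_once():
    # Stage 1: customers simultaneously pick a supplier from their menu (or opt out).
    chosen = []
    for menu in menus:
        idx = pick(supp_scores[menu]) if menu else None
        chosen.append(menu[idx] if idx is not None else None)
    # Stage 2: each supplier picks among the customers who selected it (or opts out);
    # a match occurs exactly when a supplier picks one of its selectors.
    matches = 0
    for s in range(len(supp_scores)):
        selectors = [c for c, ch in enumerate(chosen) if ch == s]
        if selectors and pick(cust_scores[selectors]) is not None:
            matches += 1
    return matches

print("expected matches ≈", np.mean([simulate_once() for _ in range(10_000)]))
```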
The root-cause diagnostics of product quality defects in multistage manufacturing processes often requires the joint identification of crucial stages and process variables. To meet this requirement, this paper proposes a novel penalized matrix regression methodology for two-dimensional variable selection. The method regresses a scalar response variable against a matrix-based predictor using a generalized linear model. The unknown regression coefficient matrix is decomposed as a product of two factor matrices. The rows of the first factor matrix and the columns of the second factor matrix are simultaneously penalized to encourage sparsity. To estimate the parameters, we develop a block coordinate proximal descent (BCPD) optimization algorithm, which cyclically solves two convex sub-optimization problems. We prove that the BCPD algorithm always converges to a critical point from any initialization. We also prove that each sub-optimization problem has a closed-form solution if the response variable follows a distribution whose (negative) log-likelihood function has a Lipschitz continuous gradient. A simulation study and a dataset from a real-world application are used to validate the effectiveness of the proposed method.
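The following is a simplified sketch of alternating proximal-gradient updates for the factorized, row/column-penalized matrix regression in the Gaussian case, with group soft-thresholding on the rows of one factor and the columns of the other; the rank, penalty weight, and step-size choices are illustrative, not the paper's implementation.

```python
import numpy as np

rng = np.random.default_rng(0)
n, p, q, r = 200, 12, 10, 3
X = rng.normal(size=(n, p, q))                 # matrix-valued predictors

# Ground-truth coefficients with only a few active rows (stages) and columns (variables).
B_true = np.zeros((p, q)); B_true[:3, :4] = rng.normal(size=(3, 4))
y = np.einsum("npq,pq->n", X, B_true) + 0.1 * rng.normal(size=n)

def group_soft_threshold(M, tau, axis):
    """Prox of the group penalty: shrink whole rows (axis=1) or columns (axis=0)."""
    norms = np.linalg.norm(M, axis=axis, keepdims=True)
    return M * np.clip(1.0 - tau / np.maximum(norms, 1e-12), 0.0, None)

lam = 0.05                                               # penalty weight (illustrative)
sigma_X2 = np.linalg.norm(X.reshape(n, -1), 2) ** 2 / n  # smoothness of the squared loss
U = rng.normal(size=(p, r)); V = rng.normal(size=(r, q)) # coefficient factors: B = U V

for _ in range(1000):
    # Proximal gradient step on U with V fixed (rows of U are group-penalized) ...
    resid = np.einsum("npq,pq->n", X, U @ V) - y
    grad_B = np.einsum("n,npq->pq", resid, X) / n
    step = 1.0 / (sigma_X2 * np.linalg.norm(V, 2) ** 2 + 1e-12)
    U = group_soft_threshold(U - step * grad_B @ V.T, step * lam, axis=1)
    # ... then on V with U fixed (columns of V are group-penalized).
    resid = np.einsum("npq,pq->n", X, U @ V) - y
    grad_B = np.einsum("n,npq->pq", resid, X) / n
    step = 1.0 / (sigma_X2 * np.linalg.norm(U, 2) ** 2 + 1e-12)
    V = group_soft_threshold(V - step * U.T @ grad_B, step * lam, axis=0)

B_hat = U @ V
print("row norms of B_hat:", np.round(np.linalg.norm(B_hat, axis=1), 2))
print("col norms of B_hat:", np.round(np.linalg.norm(B_hat, axis=0), 2))
```

Rows and columns of the estimated coefficient matrix with (near-)zero norm correspond to stages and process variables screened out of the model.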