This paper studies matching markets in the presence of middlemen. In our framework, a buyer-seller pair may either trade directly or use the services of a middleman, and a middleman may serve multiple buyer-seller pairs. Direct trade between a buyer and a seller is costlier than a trade mediated by a middleman. For each such market, we examine an associated cooperative game with transferable utility. First, we show that an optimal matching for a matching market with middlemen can be obtained from the two-sided assignment market in which each buyer-seller pair may use the mediation service of the middlemen free of charge and thereby attain the maximum surplus. Second, we prove that the core of a matching market with middlemen is always non-empty. Third, we show the existence of a buyer-optimal core allocation and a seller-optimal core allocation. In general, however, the core does not contain a middleman-optimal allocation. Finally, we establish the coincidence between the core and the set of competitive equilibrium payoff vectors.
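For reference, the core non-emptiness claim above uses the standard solution concept for transferable-utility games. In generic notation (not taken from the paper), for a game $(N, v)$ with player set $N$ and characteristic function $v$, the core is

```latex
C(v) \;=\; \Big\{\, x \in \mathbb{R}^{N} \;:\; \sum_{i \in N} x_i = v(N), \quad
\sum_{i \in S} x_i \,\ge\, v(S) \;\; \text{for all } S \subseteq N \,\Big\}.
```

Non-emptiness means some division of the total surplus $v(N)$ exists from which no coalition, e.g. a buyer-seller pair with or without a middleman, can profitably deviate.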
When pre-processing observational data via matching, we seek to approximate each unit with maximally similar peers that had the alternative treatment status, essentially replicating a randomized block design. However, as one considers a growing number of continuous features, a curse of dimensionality applies, making asymptotically valid inference impossible (Abadie and Imbens, 2006). The alternative of ignoring plausibly relevant features is certainly no better, and the resulting trade-off substantially limits the application of matching methods to wide datasets. Instead, Li and Fu (2017) recast the problem of matching in a metric-learning framework that maps features to a low-dimensional space, facilitating closer matches while still capturing important aspects of unit-level heterogeneity. However, that method lacks key theoretical guarantees and can produce inconsistent estimates in the presence of heterogeneous treatment effects. Motivated by a straightforward extension of existing results in the matching literature, we present alternative techniques that learn latent matching features through either MLPs or Siamese neural networks trained on a carefully selected loss function. We benchmark the resulting methods in simulations as well as against two experimental data sets, including the canonical NSW worker training program data set, and find superior performance of the neural-network-based methods.
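As a stylized illustration of why matching on a learned low-dimensional representation helps (this is not the paper's implementation, and the 1-D "embedding" below simply uses the data-generating index in place of features an MLP or Siamese network would learn), compare nearest-neighbor matching on raw high-dimensional covariates with matching on the low-dimensional representation:

```python
import numpy as np

rng = np.random.default_rng(0)

# Stylized data: 10 covariates, but confounding runs through a 1-D index.
n, p = 1000, 10
X = rng.normal(size=(n, p))
w = np.zeros(p); w[0], w[1] = 1.0, 0.5            # true index weights
index = X @ w
D = (index + rng.normal(size=n) > 0).astype(int)  # confounded treatment
tau = 1.0                                          # true treatment effect
Y = index + tau * D + rng.normal(scale=0.1, size=n)

def att_matching(features, Y, D):
    """ATT via 1-nearest-neighbor matching of treated units to controls."""
    ft, fc = features[D == 1], features[D == 0]
    Yt, Yc = Y[D == 1], Y[D == 0]
    dists = np.linalg.norm(ft[:, None, :] - fc[None, :, :], axis=2)
    match = dists.argmin(axis=1)                   # closest control per treated
    return np.mean(Yt - Yc[match])

# Matching on all 10 raw covariates: matches are poor in high dimensions.
att_raw = att_matching(X, Y, D)
# Matching on the 1-D representation (stand-in for learned latent features).
att_embed = att_matching(index[:, None], Y, D)
print(att_raw, att_embed)
```

The low-dimensional matches sit much closer on the confounding index, so the matching estimator lands near the true effect, which is the intuition behind learning latent matching features.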
We address the problem of maximizing expected utility from terminal wealth. The special feature of this paper is that we consider a financial market in which the price processes of the risky assets can have a default time. Using dynamic programming, we characterize the value function by a backward stochastic differential equation and derive the optimal portfolio policies. We treat separately the cases of exponential, power, and logarithmic utility.
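In generic notation (ours, not necessarily the paper's), the problem is

```latex
V(x) \;=\; \sup_{\pi \in \mathcal{A}} \; \mathbb{E}\big[\, U\!\big(X_T^{x,\pi}\big) \,\big],
```

where $X_T^{x,\pi}$ is terminal wealth from initial capital $x$ under portfolio strategy $\pi$, $\mathcal{A}$ is the set of admissible strategies, and $U$ is one of the three utilities treated: exponential $U(x) = -e^{-\gamma x}$ with $\gamma > 0$, power $U(x) = x^{p}/p$ with $0 < p < 1$, or logarithmic $U(x) = \ln x$.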
Empirical work often uses treatment assigned along geographic boundaries. When the effects of treatment cross over borders, classical difference-in-differences estimation produces biased estimates of the average treatment effect. In this paper, I introduce a potential outcomes framework to model spillover effects and decompose the estimate's bias into two parts: (1) the control group no longer identifies the counterfactual trend because its outcomes are affected by treatment, and (2) changes in treated units' outcomes reflect both the effect of their own treatment status and the effect of the treatment status of nearby units. I propose estimation strategies that remove both sources of bias and semi-parametrically estimate the spillover effects themselves. I extend Callaway and Sant'Anna (2020) to allow for event-study estimates that control for spillovers. To highlight the importance of spillover effects, I revisit analyses of three place-based interventions.
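A stylized two-period simulation (illustrative only, not the paper's estimator) makes bias source (1) concrete: when controls near the border receive a spillover, the naive difference-in-differences contrast is biased, while restricting to spillover-free controls recovers the direct effect:

```python
import numpy as np

rng = np.random.default_rng(1)

# Units on a line; treated if location > 0 (a geographic boundary).
n = 4000
loc = rng.uniform(-1, 1, size=n)
D = (loc > 0).astype(int)
tau, delta = 2.0, 1.0                     # direct effect, spillover effect
spill = ((D == 0) & (loc > -0.5)).astype(int)  # controls within 0.5 of border

alpha = rng.normal(size=n)                # unit fixed effects
y0 = alpha + rng.normal(scale=0.1, size=n)                       # pre period
y1 = alpha + 0.5 + tau * D + delta * spill + rng.normal(scale=0.1, size=n)
dy = y1 - y0                              # within-unit change

# Naive DiD: spillover-affected controls contaminate the counterfactual trend.
naive = dy[D == 1].mean() - dy[D == 0].mean()
# Spillover-robust contrast: only controls far from the border.
clean = dy[D == 1].mean() - dy[(D == 0) & (spill == 0)].mean()
print(naive, clean)
```

Here half the controls absorb a spillover of 1, so the naive estimate is pulled down toward 1.5, while the restricted contrast recovers the true direct effect of 2.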
Given the unconfoundedness assumption, we propose new nonparametric estimators for the reduced-dimensional conditional average treatment effect (CATE) function. In the first stage, the nuisance functions necessary for identifying CATE are estimated by machine learning methods, allowing the number of covariates to be comparable to or larger than the sample size. The second stage consists of a low-dimensional local linear regression, reducing CATE to a function of the covariate(s) of interest. We consider two variants of the estimator, depending on whether the nuisance functions are estimated over the full sample or over a hold-out sample. Building on Belloni et al. (2017) and Chernozhukov et al. (2018), we derive functional limit theory for the estimators and provide an easy-to-implement procedure for uniform inference based on the multiplier bootstrap. The empirical application revisits the effect of maternal smoking on a baby's birth weight as a function of the mother's age.
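A minimal numpy sketch of the two-stage idea (illustrative only; OLS stands in for the high-dimensional machine learning first stage, and the simulated design, bandwidth, and grid are our own choices): estimate the nuisance functions, form doubly robust scores whose conditional mean given the covariate of interest equals the CATE, then run a kernel-weighted local linear regression of the scores on that covariate:

```python
import numpy as np

rng = np.random.default_rng(2)

# Simulated data in which CATE as a function of x1 is tau(x1) = 1 + x1.
n, p = 5000, 5
X = rng.uniform(-1, 1, size=(n, p))
D = rng.binomial(1, 0.5, size=n)
Y = X.sum(axis=1) + D * (1 + X[:, 0]) + rng.normal(scale=0.2, size=n)

Z = np.column_stack([np.ones(n), X])

def ols_predict(Zfit, yfit, Znew):
    beta, *_ = np.linalg.lstsq(Zfit, yfit, rcond=None)
    return Znew @ beta

# Stage 1: nuisance functions (OLS as a stand-in for ML estimators).
mu1 = ols_predict(Z[D == 1], Y[D == 1], Z)      # E[Y | X, D=1]
mu0 = ols_predict(Z[D == 0], Y[D == 0], Z)      # E[Y | X, D=0]
ehat = ols_predict(Z, D.astype(float), Z).clip(0.05, 0.95)  # propensity

# Doubly robust (AIPW) scores: E[psi | x1] equals CATE(x1).
psi = (mu1 - mu0
       + D * (Y - mu1) / ehat
       - (1 - D) * (Y - mu0) / (1 - ehat))

# Stage 2: local linear regression of psi on x1 at a grid of points.
def local_linear(x, y, grid, h=0.2):
    out = []
    for x0 in grid:
        w = np.exp(-0.5 * ((x - x0) / h) ** 2)          # Gaussian kernel
        A = np.column_stack([np.ones_like(x), x - x0]) * np.sqrt(w)[:, None]
        coef, *_ = np.linalg.lstsq(A, y * np.sqrt(w), rcond=None)
        out.append(coef[0])                              # intercept = CATE(x0)
    return np.array(out)

grid = np.linspace(-0.5, 0.5, 5)
cate_hat = local_linear(X[:, 0], psi, grid)
print(np.round(cate_hat, 2))
```

The hold-out-sample variant mentioned in the abstract would simply fit the stage-1 regressions on one subsample and evaluate the scores on the other.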