Timing decisions are common: when to file your taxes, finish a referee report, or complete a task at work. We ask whether time preferences can be inferred when \textsl{only} task completion is observed. To answer this question, we analyze the following model: each period a decision maker faces the choice of whether to complete the task today or to postpone it. Costs and benefits of task completion cannot be directly observed by the analyst, but the analyst knows that net benefits are drawn independently across periods from a time-invariant distribution and that the agent has time-separable utility. Furthermore, we suppose the analyst can observe the agent's exact stopping probability. We establish that for any agent with quasi-hyperbolic $\beta,\delta$-preferences and any given level of partial naivete $\hat{\beta}$, the probability of completing the task conditional on not having done it earlier increases towards the deadline. Conversely, for any given preference parameters $\beta,\delta$ and any (weakly increasing) profile of task-completion probabilities, there exists a stationary payoff distribution that rationalizes the agent's behavior, as long as the agent is either sophisticated or fully naive. An immediate corollary is that, without parametric assumptions, it is impossible to rule out time consistency even when imposing an a priori assumption on the permissible long-run discount factor. We also provide an exact partial identification result when the analyst can, in addition to the stopping probability, observe the agent's continuation value.
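As a minimal numerical sketch of the first result (not the paper's proof), the backward induction below computes the perceived continuation values of a partially naive quasi-hyperbolic agent and the implied conditional completion probabilities; the uniform payoff distribution and the values of beta, delta, beta_hat and the horizon T are illustrative assumptions.
\begin{verbatim}
import numpy as np

beta, delta, beta_hat, T = 0.7, 0.95, 0.9, 10   # illustrative preferences and deadline
vals = np.linspace(0.0, 1.0, 1001)              # support of the stationary net benefits
probs = np.full(vals.size, 1.0 / vals.size)     # uniform payoff distribution (assumption)

V_hat = 0.0      # perceived value of entering the period after the deadline
hazard = []      # completion probability conditional on not having stopped yet
for t in range(T, 0, -1):
    # Actual cutoff of the period-t self: complete now if v >= beta*delta*V_hat.
    cut = -np.inf if t == T else beta * delta * V_hat     # deadline forces completion
    hazard.append(float(probs[vals >= cut].sum()))
    # Perceived behavior of the period-t self, as judged by earlier selves (beta_hat).
    perceived_cut = -np.inf if t == T else beta_hat * delta * V_hat
    stop = vals >= perceived_cut
    V_hat = float((probs * np.where(stop, vals, delta * V_hat)).sum())

print(np.round(hazard[::-1], 3))   # hazard profile for t = 1,...,T: weakly increasing
\end{verbatim}
Under these parameters the printed hazard profile rises monotonically towards one at the deadline, in line with the stated result.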
We propose a model of labor market sector self-selection that combines comparative advantage, as in the Roy model, with preferences over sector composition. Two groups choose between two sectors based on heterogeneous potential incomes and on the group composition of each sector. Potential incomes incorporate group-specific human capital accumulation and wage discrimination. Composition preferences are interpreted as reflecting group-specific amenity preferences as well as homophily and aversion to minority status. We show that occupational segregation is amplified by composition preferences, and we highlight a resulting tension between redistribution and diversity. The model also exhibits tipping from extreme compositions to more balanced ones. Tipping occurs when a small nudge, associated with affirmative action, pushes the system to a very different equilibrium, or when the set of equilibria changes abruptly as a parameter governing the relative importance of pecuniary and composition preferences crosses a threshold.
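The toy best-response iteration below is our own parameterization, not the paper's specification: each worker adds a homophily payoff, lam times the own-group share of a sector, to Roy-style potential income draws, and different starting compositions can settle at different segregated equilibria, illustrating multiplicity and tipping.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
n, lam = 5000, 2.0                        # workers per group; composition weight (assumed)
# Roy-style potential incomes: A has comparative advantage in sector 0, B in sector 1.
w = {"A": rng.normal([1.0, 0.8], 0.5, size=(n, 2)),
     "B": rng.normal([0.8, 1.0], 0.5, size=(n, 2))}

def equilibrium(fA0, fA1, iters=200):
    """Best-response iteration on group A's share of each sector's workforce."""
    for _ in range(iters):
        comp = {"A": lam * np.array([fA0, fA1]),           # homophily payoff for A
                "B": lam * np.array([1 - fA0, 1 - fA1])}   # and for B
        pick = {g: np.argmax(w[g] + comp[g], axis=1) for g in ("A", "B")}
        nA = np.array([(pick["A"] == s).sum() for s in (0, 1)])
        nB = np.array([(pick["B"] == s).sum() for s in (0, 1)])
        fA0, fA1 = nA / np.maximum(nA + nB, 1)             # guard against empty sectors
    return round(float(fA0), 3), round(float(fA1), 3)

print(equilibrium(0.1, 0.9))   # segregation against comparative advantage can persist
print(equilibrium(0.5, 0.5))   # a balanced start tips towards the Roy sorting
\end{verbatim}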
Individuals working towards a goal often exhibit time-inconsistent behavior, making plans and then failing to follow through. One well-known model of such behavioral anomalies is present-bias discounting: individuals over-weight present costs by a bias factor. This model explains many time-inconsistent behaviors, but it makes stark predictions in a number of settings: individuals either follow the most efficient plan for reaching their goal or procrastinate indefinitely. We propose a modification in which the present-bias parameter can vary over time, drawn independently each step from a fixed distribution. Following Kleinberg and Oren (2014), we use a weighted task graph to model task planning, and we measure the cost of procrastination as the relative expected cost of the chosen path versus the optimal path. We use a novel connection to optimal pricing theory to describe the structure of the worst-case task graph for any present-bias distribution. We then leverage this structure to derive conditions on the bias distribution under which the worst-case ratio is exponential (in time) or constant. We also examine conditions on the task graph that lead to improved procrastination ratios: graphs with a uniformly bounded distance to the goal, and graphs in which the distance to the goal monotonically decreases along any path.
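A minimal simulation sketch, assuming a common formulation of the Kleinberg-Oren model in which the current self inflates the immediate edge cost by the realized bias b and evaluates the remainder of the plan at face value; the example task graph and the two-point bias distribution are illustrative choices, not taken from the paper.
\begin{verbatim}
import random
from functools import lru_cache

# Hypothetical task graph: node -> list of (successor, edge cost); "goal" is the target.
graph = {
    "start":  [("work", 2.0), ("delay", 0.1)],
    "delay":  [("work", 2.0), ("delay2", 0.1)],
    "delay2": [("work", 2.0), ("goal", 5.0)],
    "work":   [("goal", 1.0)],
    "goal":   [],
}

@lru_cache(maxsize=None)
def dist(v):
    """Unbiased cost of the cheapest path from v to the goal."""
    return 0.0 if v == "goal" else min(c + dist(u) for u, c in graph[v])

def biased_walk(draw_bias):
    """Cost actually incurred when the bias factor is redrawn at every step."""
    v, total = "start", 0.0
    while v != "goal":
        b = draw_bias()
        # The current self inflates the immediate edge cost by b, plans the rest at face value.
        v, c = min(graph[v], key=lambda edge: b * edge[1] + dist(edge[0]))
        total += c
    return total

random.seed(0)
runs = [biased_walk(lambda: random.choice([1.0, 3.0])) for _ in range(10000)]
print(round(sum(runs) / len(runs) / dist("start"), 3))   # procrastination ratio >= 1
\end{verbatim}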
As endpoints of the hierarchical mass-assembly process, the stellar populations of local early-type galaxies encode the assembly history of galaxies over cosmic time. We use Horizon-AGN, a cosmological hydrodynamical simulation, to study the merger histories of local early-type galaxies and track how the morphological mix of their progenitors evolves over time. We provide a framework for alleviating `progenitor bias' -- the bias that occurs if one uses only early-type galaxies to study the progenitor population. Early-types attain their final morphology at relatively early epochs -- by $z\sim1$, around 60 per cent of today's early-types have had their last significant merger. At all redshifts, the majority of mergers have one late-type progenitor, with late-late mergers dominating at $z>1.5$ and early-early mergers becoming significant only at $z<0.5$. Progenitor bias is severe at all but the lowest redshifts -- e.g. at $z\sim0.6$, less than 50 per cent of the stellar mass in today's early-types is actually in progenitors with early-type morphology, while, at $z\sim2$, studying only early-types misses almost all (80 per cent) of the stellar mass that eventually ends up in local early-type systems. At high redshift, almost all massive late-type galaxies, regardless of their local environment or star-formation rate, are progenitors of local early-type galaxies, as are lower-mass (M$_\star$ $<$ 10$^{10.5}$ M$_\odot$) late-types as long as they reside in high-density environments. In this new era of large observational surveys (e.g. LSST, JWST), this study provides a framework for studying how today's early-type galaxies have been built up over cosmic time.
We develop an analytical framework to study experimental design in two-sided marketplaces. Many of these experiments exhibit interference, where an intervention applied to one market participant influences the behavior of another participant. This interference leads to biased estimates of the treatment effect of the intervention. We develop a stochastic market model and associated mean field limit to capture dynamics in such experiments, and use our model to investigate how the performance of different designs and estimators is affected by marketplace interference effects. Platforms typically use two common experimental designs: demand-side (customer) randomization (CR) and supply-side (listing) randomization (LR), along with their associated estimators. We show that good experimental design depends on market balance: in highly demand-constrained markets, CR is unbiased, while LR is biased; conversely, in highly supply-constrained markets, LR is unbiased, while CR is biased. We also introduce and study a novel experimental design based on two-sided randomization (TSR) where both customers and listings are randomized to treatment and control. We show that appropriate choices of TSR designs can be unbiased in both extremes of market balance, while yielding relatively low bias in intermediate regimes of market balance.
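As a toy illustration of the interference mechanism (our own construction, not the paper's stochastic market or mean field model), the simulation below compares the global treatment effect with a naive customer-randomized (CR) estimate in a heavily supply-constrained market, where treated customers cannibalize bookings from control customers; the function name, parameters, and booking rule are all hypothetical.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)

def simulate(n_cust, n_list, p0=0.1, lift=1.0, design="GTE", reps=2000):
    """Average bookings lift under a design; supply caps total bookings at n_list."""
    est = []
    for _ in range(reps):
        if design == "GTE":     # global effect: everyone treated vs everyone in control
            all_t = min(rng.binomial(n_cust, p0 * (1 + lift)), n_list)
            all_c = min(rng.binomial(n_cust, p0), n_list)
            est.append(all_t - all_c)
        else:                   # "CR": randomize customers, then ration scarce listings
            t = rng.random(n_cust) < 0.5
            wants = rng.random(n_cust) < np.where(t, p0 * (1 + lift), p0)
            booked = np.zeros(n_cust, bool)
            booked[rng.permutation(np.flatnonzero(wants))[:n_list]] = True
            # Naive estimator: scale the treated-vs-control booking gap to the market.
            est.append((booked[t].mean() - booked[~t].mean()) * n_cust)
    return round(float(np.mean(est)), 1)

# Supply-constrained market: the true global lift is ~0, yet naive CR reports a large one.
print(simulate(1000, 50, design="GTE"), simulate(1000, 50, design="CR"))
\end{verbatim}
The mirror-image bias for listing-side randomization in demand-constrained markets, and the two-sided (TSR) design, are not sketched here.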
We present a novel subset scan method to detect whether a probabilistic binary classifier has statistically significant bias -- over- or under-predicting risk -- for some subgroup, and to identify the characteristics of this subgroup. This form of model checking and goodness-of-fit test provides a way to interpretably detect the presence of classifier bias or regions of poor classifier fit. It allows consideration not just of subgroups of a priori interest or of low dimension, but of the space of all possible subgroups of features. To address the difficulty of searching over these exponentially many possible subgroups, we use subset scan and parametric-bootstrap-based methods. Extending the method, we can penalize the complexity of the detected subgroup and also identify subgroups with high classification errors. We demonstrate these methods and find interesting results on the COMPAS crime recidivism and credit delinquency data.
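A simplified sketch of the pipeline: scan candidate subgroups for a gap between predicted risk and observed outcomes, then calibrate the maximum gap with a parametric bootstrap. The z-style score and the single-feature slices below are stand-ins for the paper's likelihood-ratio subset-scan statistic and its multidimensional search, and the helper names and the "group" column are hypothetical.
\begin{verbatim}
import numpy as np
import pandas as pd

def score(y, p):
    """Standardized gap between observed outcomes and predicted risk in a subgroup."""
    return abs(y.sum() - p.sum()) / np.sqrt((p * (1 - p)).sum() + 1e-12)

def scan(df, y, p, features):
    """Best-scoring subgroup among single feature-value slices (a simplification
    of scanning the exponentially many multidimensional subgroups)."""
    return max(((f, v, score(y[df[f] == v], p[df[f] == v]))
                for f in features for v in df[f].unique()), key=lambda t: t[2])

def p_value(df, y, p, features, n_boot=200, seed=0):
    """Parametric bootstrap: regenerate outcomes from the model's own predictions
    and compare the observed maximum score with its null distribution."""
    rng = np.random.default_rng(seed)
    obs = scan(df, y, p, features)[2]
    null = [scan(df, pd.Series(rng.random(len(p)) < p.to_numpy(), index=df.index),
                 p, features)[2] for _ in range(n_boot)]
    return (1 + sum(s >= obs for s in null)) / (n_boot + 1)

# Hypothetical usage on synthetic data with one genuinely miscalibrated slice.
rng = np.random.default_rng(1)
df = pd.DataFrame({"group": rng.choice(["a", "b", "c"], 3000)})
p = pd.Series(np.full(3000, 0.3), index=df.index)         # model's predicted risk
y = pd.Series(rng.random(3000) < np.where(df["group"] == "b", 0.45, 0.3), index=df.index)
print(scan(df, y, p, ["group"]), p_value(df, y, p, ["group"]))
\end{verbatim}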