Heavy-tailed metrics are common and often critical to product evaluation in the online world. While our samples may be large enough for the Central Limit Theorem to kick in, experimentation remains challenging because the confidence intervals of the estimates are wide. We demonstrate the challenge by running A/A simulations with customer spending data from a large-scale e-commerce site, and then explore solutions. On one front we address the heavy tail directly and highlight the often-ignored nuances of winsorization; in particular, the validity of the false positive rate can be at risk. Further inspired by robust statistics, we introduce Huber regression as a better way to measure the treatment effect. On another front we exploit covariates from the pre-experiment period. Although they are independent of assignment and can explain much of the variation in the response, the concern is that models are trained to minimize prediction error rather than parameter bias. We find the framework of orthogonal learning useful: it matches not raw observations but residuals from two predictions, one of the response and the other of the assignment. Robust regression is readily integrated, together with cross-fitting. The final design proves highly effective in driving down variance while controlling bias. It powers our daily practice and can hopefully benefit other applications in the industry.
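As a minimal illustration of the two ingredients above, the sketch below implements winsorization at a data-estimated cap and the residual-on-residual (partialling-out) step of orthogonal learning. Plain least squares stands in for the robust Huber fit and the cross-fitting described in the abstract; the function names and the `upper_pct` parameter are illustrative assumptions, not from the paper.

```python
import numpy as np

def winsorize(x, upper_pct=99.0):
    # Cap extreme values at a percentile of the data. Note the cap is
    # itself estimated from the sample, one source of the false
    # positive rate concerns raised in the abstract.
    cap = np.percentile(x, upper_pct)
    return np.minimum(x, cap)

def orthogonal_effect(y, t, x):
    # Residual-on-residual (partialling-out) treatment effect estimate:
    # regress the covariate x out of both the response y and the
    # assignment t, then regress the y-residuals on the t-residuals.
    # Ordinary least squares is used for illustration; the abstract
    # swaps in a robust (Huber) regression for the final step.
    X = np.column_stack([np.ones_like(x), x])
    ry = y - X @ np.linalg.lstsq(X, y, rcond=None)[0]
    rt = t - X @ np.linalg.lstsq(X, t, rcond=None)[0]
    return float(ry @ rt / (rt @ rt))
```

Because the covariate is partialled out of both sides, the final regression targets the assignment effect directly rather than overall predictive accuracy.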
The autoregressive (AR) model is widely used to understand time series data. Traditionally, the innovation noise of the AR model is assumed Gaussian. However, many time series, for example financial time series, are non-Gaussian, so an AR model with more general heavy-tailed innovations is preferred. Another issue that frequently occurs in time series is missing values, due to recording failures or unexpected data loss. Although there are numerous works on Gaussian AR time series with missing values, to the best of our knowledge none addresses the issue of missing data for the heavy-tailed AR model. In this paper, we consider this issue for the first time and propose an efficient framework for parameter estimation from incomplete heavy-tailed time series, based on stochastic approximation expectation maximization (SAEM) coupled with a Markov chain Monte Carlo (MCMC) procedure. The proposed algorithm is computationally cheap and easy to implement. Its convergence to a stationary point of the observed-data likelihood is rigorously proved. Extensive simulations and analyses of real datasets demonstrate the efficacy of the proposed framework.
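A minimal sketch of the model class in question: an AR(1) with Student-t innovations, simulated through the Gaussian scale-mixture representation that makes the model conditionally Gaussian given latent mixing weights. This is the kind of latent-variable structure that SAEM/MCMC schemes typically exploit; the function and its parameters are illustrative, not the paper's algorithm.

```python
import numpy as np

def simulate_t_ar1(n, phi, nu, sigma, seed=0):
    # AR(1) with Student-t(nu) innovations, written via the Gaussian
    # scale mixture t = Normal / sqrt(chi2_nu / nu): conditional on
    # the mixing weights tau, the series is an ordinary Gaussian AR(1).
    rng = np.random.default_rng(seed)
    tau = rng.gamma(nu / 2.0, 2.0 / nu, size=n)     # latent mixing weights
    eps = rng.normal(scale=sigma, size=n) / np.sqrt(tau)
    y = np.zeros(n)
    for t in range(1, n):
        y[t] = phi * y[t - 1] + eps[t]
    return y
```

For nu large the innovations approach Gaussian; small nu produces the heavy tails motivating the paper.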
It is important to estimate the local average treatment effect (LATE) when compliance with a treatment assignment is incomplete. Previously proposed methods for LATE estimation require all relevant variables to be jointly observed in a single dataset; however, in many real-world problems it is difficult or even impossible to collect such data for technical or privacy reasons. We consider a novel problem setting in which LATE, as a function of covariates, is nonparametrically identified from a combination of separately observed datasets. For estimation, we show that the direct least squares method, originally developed for estimating the average treatment effect under complete compliance, is applicable to our setting. However, model selection and hyperparameter tuning for the direct least squares estimator can be unstable in practice, since it is defined as the solution to a minimax problem. We therefore propose a weighted least squares estimator that enables simpler model selection by avoiding the minimax objective. Unlike the inverse probability weighted (IPW) estimator, the proposed estimator uses the pre-estimated weight directly, without inversion, avoiding the problems caused by inverse weighting. We demonstrate the effectiveness of our method through experiments on synthetic and real-world datasets.
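For background, the textbook Wald (instrumental variables) estimator of the LATE under a binary assignment is sketched below. This is the standard single-dataset baseline the problem setting generalizes, not the proposed weighted least squares estimator.

```python
import numpy as np

def wald_late(y, d, z):
    # Wald / IV estimate of the LATE: the intention-to-treat effect of
    # the assignment z on the outcome y, divided by its effect on the
    # treatment actually received d. Requires y, d, z jointly observed,
    # which is exactly the requirement the abstract relaxes.
    itt = y[z == 1].mean() - y[z == 0].mean()
    first_stage = d[z == 1].mean() - d[z == 0].mean()
    return float(itt / first_stage)
```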
This work revisits the classical doubly robust estimation of the average treatment effect by systematically comparing, in terms of asymptotic efficiency, all possible combinations of the estimated propensity score and outcome regression. To this end, we consider all nine combinations under parametric, nonparametric, and semiparametric structures, respectively. The comparisons provide useful guidance on when and how to exploit the model structures efficiently in practice. Further, we give the corresponding comparisons under model misspecification of either the propensity score or the outcome regression. Three phenomena are observed. First, when all models are correctly specified, any combination achieves the same semiparametric efficiency bound, which coincides with existing results for some combinations. Second, when the propensity score is correctly modeled and estimated but the outcome regression is misspecified, parametrically or semiparametrically, the asymptotic variance is always larger than or equal to the semiparametric efficiency bound. Third, in contrast, when the propensity score is misspecified, parametrically or semiparametrically, while the outcome regression is correctly modeled and estimated, the asymptotic variance is not necessarily larger than the semiparametric efficiency bound; in some cases, the super-efficiency phenomenon occurs. We also conduct a small numerical study.
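The classical augmented IPW (doubly robust) estimator whose nuisance combinations are studied here can be written compactly. The sketch below assumes the propensity score and the two outcome regressions have already been fitted elsewhere, by whichever parametric, nonparametric, or semiparametric method; the function name is illustrative.

```python
import numpy as np

def aipw_ate(y, t, e_hat, m1_hat, m0_hat):
    # Augmented IPW (doubly robust) estimate of the average treatment
    # effect, given fitted propensity scores e_hat and fitted outcome
    # regressions m1_hat (treated) and m0_hat (control). The estimator
    # is consistent if either nuisance is correctly specified, which is
    # the combination structure the comparisons above examine.
    return float(np.mean(
        m1_hat - m0_hat
        + t * (y - m1_hat) / e_hat
        - (1 - t) * (y - m0_hat) / (1 - e_hat)
    ))
```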
The intercity freight trips of heavy trucks are important data for transportation system planning and urban agglomeration management. In recent decades, extracting freight trips from GPS data has gradually become the main alternative to traditional surveys. Identifying the trip ends (origin and destination, OD) is the first task in trip extraction. In previous trip end identification methods, key parameters such as speed and time thresholds have mostly been set on the basis of empirical knowledge, which inevitably lacks universality. Here, we propose a data-driven trip end identification method. First, we define a speed threshold by analyzing the speed distribution of heavy trucks and identify all truck stops from raw GPS data. Second, we define minimum and maximum time thresholds by analyzing the distribution of heavy-truck dwell times at stop locations and classify truck stops into three types based on these thresholds. Third, we use highway network GIS data and freight-related point-of-interest (POI) data to identify valid trip ends among the three types of truck stops. In this step, we detect POI boundaries to determine whether a heavy truck is stopping at a freight-related location. We further analyze the spatiotemporal characteristics of intercity heavy-truck freight trips and discuss their potential applications in practice.
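The dwell-time classification in the second step can be sketched as follows. The thresholds and type names here are illustrative placeholders; in the method described above they are derived from the empirical dwell-time distribution rather than fixed a priori.

```python
def classify_stop(dwell_minutes, t_min=15.0, t_max=360.0):
    # Classify a truck stop by dwell time into three types using a
    # minimum and a maximum threshold. Stops shorter than t_min are
    # treated as transient (e.g. traffic, short rests); stops between
    # the thresholds are candidate trip ends to be checked against
    # POI and highway network data; longer stops are long stays
    # (e.g. overnight parking). Threshold values are placeholders.
    if dwell_minutes < t_min:
        return "transient"
    if dwell_minutes <= t_max:
        return "candidate_trip_end"
    return "long_stay"
```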
The field of precision medicine aims to tailor treatment to patient-specific factors in a reproducible way. To this end, estimating an optimal individualized treatment regime (ITR), which recommends treatment decisions based on patient characteristics so as to maximize the mean of a pre-specified outcome, is of particular interest. Several methods have been proposed for estimating an optimal ITR from clinical trial data in the parallel-group setting, where each subject is randomized to a single intervention. However, little work has been done on estimating the optimal ITR from crossover study designs. Such designs naturally lend themselves to precision medicine because they allow the response to multiple treatments to be observed for each patient. In this paper, we introduce a method for estimating the optimal ITR using data from a 2×2 crossover study with or without carryover effects. The proposed method is similar to policy search methods such as outcome weighted learning; however, we take advantage of the crossover design by using the within-patient difference in responses under the two treatments as the observed reward. We establish Fisher and global consistency, present numerical experiments, and analyze data from a feeding trial to demonstrate the improved performance of the proposed method compared with standard methods for a parallel study design.
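The reward construction can be sketched as below: turning paired crossover responses into the labels and weights consumed by an outcome-weighted-learning-style classifier. The function name and the tie-breaking convention are illustrative assumptions, and carryover adjustment is omitted.

```python
import numpy as np

def crossover_owl_data(y_a, y_b):
    # Turn 2x2 crossover responses into (label, weight) pairs in the
    # style of outcome weighted learning: the label is the better
    # treatment for each patient and the weight is the within-patient
    # difference in responses, which the crossover design observes
    # directly instead of having to estimate across arms.
    diff = y_a - y_b
    labels = np.where(diff >= 0, 1, -1)   # +1: treatment A preferred
    weights = np.abs(diff)
    return labels, weights
```

A weighted classifier fit to these pairs then recommends, for each covariate profile, the treatment with the larger expected response.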