Accurate predictions of customers' future lifetime value (LTV) given their attributes and past purchase behavior enable a more customer-centric marketing strategy. Marketers can segment customers into buckets based on predicted LTV and, in turn, customize marketing messages or advertising copy to better serve customers in different segments. Furthermore, LTV predictions can directly inform marketing budget allocation and improve real-time targeting and bidding on ad impressions. One challenge of LTV modeling is that some customers never come back, and the distribution of LTV can be heavy-tailed. The commonly used mean squared error (MSE) loss does not accommodate the significant fraction of zero-value LTVs from one-time purchasers and can be sensitive to extremely large LTVs from top spenders. In this article, we model the distribution of LTV given associated features as a mixture of a point mass at zero and a lognormal distribution, which we refer to as the zero-inflated lognormal (ZILN) distribution. This modeling approach allows us to capture the churn probability and account for the heavy-tailed nature of LTV at the same time. It also yields straightforward uncertainty quantification of the point prediction. The ZILN loss can be used in both linear models and deep neural networks (DNNs). For model evaluation, we recommend the normalized Gini coefficient to quantify model discrimination and decile charts to assess model calibration. Empirically, we demonstrate the predictive performance of our proposed model on two real-world public datasets.
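As a rough illustration of the ZILN idea above, the following is a minimal numpy sketch of the mixture's negative log-likelihood, with a churn component for zero LTVs and a lognormal density for positive LTVs. The function name and array-based interface are illustrative, not the paper's implementation:

```python
import numpy as np

def ziln_nll(values, p, mu, sigma):
    """Mean negative log-likelihood of a zero-inflated lognormal mixture.

    values: observed LTVs (>= 0); p: predicted probability that the
    customer returns (value > 0); mu, sigma: lognormal parameters.
    All inputs are arrays of equal length.
    """
    values = np.asarray(values, dtype=float)
    positive = values > 0
    nll = np.empty_like(values)
    # Zero LTV (one-time purchaser): contributes -log(1 - p).
    nll[~positive] = -np.log1p(-p[~positive])
    # Positive LTV: -log p minus the lognormal log-density of the value.
    v = values[positive]
    m, s = mu[positive], sigma[positive]
    log_pdf = (-np.log(v) - np.log(s) - 0.5 * np.log(2 * np.pi)
               - 0.5 * ((np.log(v) - m) / s) ** 2)
    nll[positive] = -(np.log(p[positive]) + log_pdf)
    return nll.mean()
```

Minimizing this loss jointly learns the churn probability and the lognormal parameters, which is what makes uncertainty quantification of the point prediction straightforward.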
We propose an alternative to current mainstream preprocessing methods: value selection (VS). Unlike existing methods such as feature selection, which removes features, and instance selection, which eliminates instances, value selection eliminates individual values (with respect to each feature) in the dataset with two goals: reducing the model size and preserving its accuracy. Two probabilistic methods based on information-theoretic metrics are proposed: PVS and P + VS. Extensive experiments on benchmark datasets of various sizes are reported, and the results are compared with existing preprocessing methods such as feature selection, feature transformation, and instance selection. The results show that value selection can strike a balance between accuracy and model size reduction.
Since the first two people in a county north of Seattle tested positive for COVID-19 (coronavirus disease 2019), total cases in the United States (U.S.) have grown to over 12 million. Predicting the pandemic trend under relevant variables is crucial to help find a way to control the epidemic. Based on the available literature, we propose a validated Vector Autoregression (VAR) time series model to predict positive COVID-19 cases. A real-data prediction for the U.S. is provided based on U.S. coronavirus data. The key message from our study is that the pandemic will get worse if there is no effective control.
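The kind of VAR forecasting described above can be sketched in a few lines of numpy. This is an illustrative least-squares fit of a first-order VAR on jointly observed series, not the authors' exact specification (lag order, validation, and covariates are omitted):

```python
import numpy as np

def fit_var1(y):
    """Least-squares fit of a VAR(1): y_t = c + A y_{t-1} + e_t.

    y: (T, k) array of k jointly observed series (e.g. case counts
    for several regions). Returns intercept c (k,) and matrix A (k, k).
    """
    Y = y[1:]                                           # responses y_1..y_{T-1}
    X = np.hstack([np.ones((len(y) - 1, 1)), y[:-1]])   # regressors [1, y_{t-1}]
    beta, *_ = np.linalg.lstsq(X, Y, rcond=None)
    return beta[0], beta[1:].T

def forecast_var1(y_last, c, A, steps):
    """Iterate the fitted VAR(1) forward for multi-step point forecasts."""
    out, cur = [], y_last
    for _ in range(steps):
        cur = c + A @ cur
        out.append(cur)
    return np.array(out)
```

In practice one would select the lag order (e.g. by AIC) and validate the forecasts out of sample, as the abstract's "validated" qualifier implies.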
Stroke is a major cause of mortality and long-term disability in the world. Predictive outcome models in stroke are valuable for personalized treatment, rehabilitation planning, and controlled clinical trials. In this paper we design a new model to predict outcome in the short term, the putative therapeutic window for several treatments. Our regression-based model has a parametric form designed to address challenges common in medical datasets, such as highly correlated variables and class imbalance. Empirically, our model outperforms the best known previous models in predicting short-term outcomes and in inferring the most effective treatments for improving outcome.
Background: Predicted probabilities from a risk prediction model are inevitably uncertain. This uncertainty has mostly been studied from a statistical perspective. We apply Value of Information methodology to evaluate the decision-theoretic implications of prediction uncertainty. Methods: Adopting a Bayesian perspective, we extend the definition of the Expected Value of Perfect Information (EVPI) from decision analysis to net benefit calculations in risk prediction. EVPI is the expected gain in net benefit by using the correct predictions as opposed to predictions from a proposed model. We suggest bootstrap methods for sampling from the posterior distribution of predictions for EVPI calculation using Monte Carlo simulations. In a case study, we used subsets of data of various sizes from a clinical trial for predicting mortality after myocardial infarction to show how EVPI can be interpreted and how it changes with sample size. Results: With a sample size of 1,000, EVPI was 0 at threshold values larger than 0.6, indicating there is no point in procuring more development data for such thresholds. At thresholds of 0.4-0.6, the proposed model was not net beneficial, but EVPI was positive, indicating that obtaining more development data might be justified. Across all thresholds, the gain in net benefit by using the correct model was 24% higher than the gain by using the proposed model. EVPI declined with larger samples and was generally low with sample sizes of 4,000 or greater. We summarize an algorithm for incorporating EVPI calculations into the commonly used bootstrap method for optimism correction. Conclusion: Value of Information methods can be applied to explore decision-theoretic consequences of uncertainty in risk prediction, and can complement inferential methods when developing or validating risk prediction models.
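The EVPI calculation described above can be sketched roughly as follows, assuming posterior draws of the "correct" risks are already in hand (e.g. from the bootstrap refits the abstract suggests). The decomposition into model, treat-all, and treat-none strategies follows the standard net-benefit framework; all names and the simplified setup are illustrative:

```python
import numpy as np

def evpi(p_model, p_draws, z):
    """Monte Carlo EVPI at risk threshold z (simplified sketch).

    p_model: (n,) predicted risks from the proposed model.
    p_draws: (B, n) posterior draws of the 'correct' risks.
    """
    def nb(decide, truth):
        # Expected net benefit of treating patients with decide > z,
        # when `truth` gives the correct event probabilities.
        treat = decide > z
        return (np.mean(truth * treat)
                - z / (1 - z) * np.mean((1 - truth) * treat))

    # Net benefit with perfect information: decide using the correct risks.
    nb_perfect = np.mean([nb(d, d) for d in p_draws])
    # Best achievable now: proposed model vs. treat-all vs. treat-none (NB = 0).
    nb_model = np.mean([nb(p_model, d) for d in p_draws])
    nb_all = np.mean([nb(np.ones_like(p_model), d) for d in p_draws])
    return nb_perfect - max(nb_model, nb_all, 0.0)
```

Because deciding with the correct risks maximizes net benefit in every draw, this quantity is non-negative, and it shrinks toward zero as the posterior draws concentrate, matching the abstract's finding that EVPI declines with larger development samples.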
Disease surveillance is essential not only for the early detection of outbreaks but also for monitoring disease trends in the long run. In this paper, we aim to build a tactical model for the surveillance of dengue in particular. Most existing models for dengue prediction exploit known relationships between climate and socio-demographic factors and incidence counts; however, they are not flexible enough to capture steep and sudden rises and falls in the incidence counts. This motivates the methodology used in our paper. We build a non-parametric, flexible Gaussian Process (GP) regression model that relies on past dengue incidence counts and climate covariates, and show that the GP model performs accurately in comparison with existing methodologies, thus proving to be a good tactical and robust model for health authorities to plan their course of action.
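A minimal sketch of the kind of GP regression described above, with a squared-exponential kernel on 1-D inputs. The real model uses past incidence counts and climate covariates as inputs and would tune the kernel hyperparameters; everything here (function names, fixed lengthscale, noise level) is illustrative:

```python
import numpy as np

def rbf(a, b, length=1.0, var=1.0):
    """Squared-exponential (RBF) kernel matrix between 1-D input arrays."""
    d2 = (a[:, None] - b[None, :]) ** 2
    return var * np.exp(-0.5 * d2 / length ** 2)

def gp_predict(x_train, y_train, x_test, noise=1e-2):
    """Posterior mean and pointwise variance of GP regression.

    Standard closed-form posterior: mean = K_s K^{-1} y,
    cov = K_ss - K_s K^{-1} K_s^T, with observation noise on the diagonal.
    """
    K = rbf(x_train, x_train) + noise * np.eye(len(x_train))
    K_s = rbf(x_test, x_train)
    K_ss = rbf(x_test, x_test)
    mean = K_s @ np.linalg.solve(K, y_train)
    cov = K_ss - K_s @ np.linalg.solve(K, K_s.T)
    return mean, np.diag(cov)
```

The non-parametric posterior adapts locally to the data, which is what lets a GP track the steep rises and falls that rigid parametric incidence models miss.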