
Uncertainty and Value of Perfect Information in Risk Prediction Modeling

Published by: Mohsen Sadatsafavi
Publication date: 2021
Research field: Mathematical Statistics
Paper language: English





Background: Predicted probabilities from a risk prediction model are inevitably uncertain. This uncertainty has mostly been studied from a statistical perspective. We apply Value of Information methodology to evaluate the decision-theoretic implications of prediction uncertainty.

Methods: Adopting a Bayesian perspective, we extend the definition of the Expected Value of Perfect Information (EVPI) from decision analysis to net benefit calculations in risk prediction. EVPI is the expected gain in net benefit from using the correct predictions as opposed to predictions from a proposed model. We suggest bootstrap methods for sampling from the posterior distribution of predictions, with EVPI computed by Monte Carlo simulation. In a case study, we used subsets of data of various sizes from a clinical trial on predicting mortality after myocardial infarction to show how EVPI can be interpreted and how it changes with sample size.

Results: With a sample size of 1,000, EVPI was 0 at threshold values larger than 0.6, indicating there is no point in procuring more development data for such thresholds. At thresholds of 0.4-0.6, the proposed model was not net beneficial, but EVPI was positive, indicating that obtaining more development data might be justified. Across all thresholds, the gain in net benefit from using the correct model was 24% higher than the gain from using the proposed model. EVPI declined with larger samples and was generally low with sample sizes of 4,000 or greater. We summarize an algorithm for incorporating EVPI calculations into the commonly used bootstrap method for optimism correction.

Conclusion: Value of Information methods can be applied to explore the decision-theoretic consequences of uncertainty in risk prediction and can complement inferential methods when developing or validating risk prediction models.
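The sketch below illustrates the general bootstrap-based EVPI recipe the abstract describes; it is not the authors' exact algorithm. Each bootstrap refit is treated as a draw from the posterior distribution of the "correct" risks, and EVPI at a threshold is the mean net benefit achievable with the correct risks minus the best achievable net benefit among the proposed model, treat-all, and treat-none strategies. The synthetic data, the logistic model, and all helper names are illustrative assumptions.

```python
# Hedged sketch: EVPI for a risk prediction model via bootstrap Monte Carlo.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Illustrative development data (assumption).
n, p = 1000, 4
X = rng.normal(size=(n, p))
true_lp = X @ np.array([0.8, -0.5, 0.3, 0.0]) - 1.0
y = rng.binomial(1, 1 / (1 + np.exp(-true_lp)))

# Proposed model: fit once on the full development sample.
model = LogisticRegression().fit(X, y)
p_hat = model.predict_proba(X)[:, 1]

def net_benefit(treat, risk, z):
    """Net benefit at threshold z when `risk` holds the (assumed) correct
    event probabilities and `treat` flags who receives the intervention."""
    return np.mean(treat * (risk - (z / (1 - z)) * (1 - risk)))

def evpi(z, n_boot=500):
    nb_model, nb_correct, nb_all = [], [], []
    for _ in range(n_boot):
        idx = rng.integers(0, n, n)                     # bootstrap resample
        m_b = LogisticRegression().fit(X[idx], y[idx])  # refit = posterior draw
        p_b = m_b.predict_proba(X)[:, 1]                # draw of "correct" risks
        nb_model.append(net_benefit(p_hat >= z, p_b, z))   # use proposed model
        nb_correct.append(net_benefit(p_b >= z, p_b, z))   # use correct risks
        nb_all.append(net_benefit(np.ones(n), p_b, z))     # treat everyone
    best_current = max(np.mean(nb_model), np.mean(nb_all), 0.0)  # 0 = treat none
    return np.mean(nb_correct) - best_current

for z in (0.1, 0.2, 0.4, 0.6):
    print(f"threshold {z:.1f}: EVPI ~ {evpi(z):.4f}")
```

In this framing, a strictly positive EVPI at a threshold suggests that collecting more development data could still improve decisions at that threshold, mirroring the interpretation given in the Results.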


Read also

Several well-established benchmark predictors exist for Value-at-Risk (VaR), a major instrument for financial risk management. Hybrid methods combining AR-GARCH filtering with skewed-$t$ residuals and the extreme value theory-based approach are particularly recommended. This study introduces yet another VaR predictor, G-VaR, which follows a novel methodology. Inspired by the recent mathematical theory of sublinear expectation, G-VaR is built upon the concept of model uncertainty, which in the present case signifies that the inherent volatility of financial returns cannot be characterized by a single distribution but rather by infinitely many statistical distributions. By considering the worst scenario among these potential distributions, the G-VaR predictor is precisely identified. Extensive experiments on both the NASDAQ Composite Index and S&P500 Index demonstrate the excellent performance of the G-VaR predictor, which is superior to most existing benchmark VaR predictors.
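The following is a minimal sketch of the "worst case over a family of distributions" idea behind a robust VaR of this kind: VaR is taken as the maximum over candidate return distributions whose volatility is only known to lie in an interval. The normal family, the volatility band, and the estimation from a toy return series are illustrative assumptions, not the paper's exact G-VaR construction.

```python
# Hedged sketch: robust (worst-case) VaR over a band of candidate volatilities.
import numpy as np
from scipy.stats import norm

def robust_var(returns, alpha=0.01, vol_band=0.25, n_grid=50):
    """Largest VaR over normal models whose sigma lies within +/- vol_band
    of the sample estimate (the 'worst scenario' among the candidates)."""
    mu, sigma = returns.mean(), returns.std(ddof=1)
    sigmas = np.linspace(sigma * (1 - vol_band), sigma * (1 + vol_band), n_grid)
    z_alpha = norm.ppf(alpha)                    # negative for small alpha
    candidate_vars = -(mu + sigmas * z_alpha)    # VaR of N(mu, s) returns
    return candidate_vars.max()

rng = np.random.default_rng(1)
daily_returns = rng.standard_t(df=5, size=500) * 0.01   # fat-tailed toy returns
print(f"99% robust VaR ~ {robust_var(daily_returns):.4f}")
```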
This study presents a new risk-averse multi-stage stochastic epidemics-ventilator-logistics compartmental model to address the resource allocation challenges of mitigating COVID-19. This epidemiological logistics model involves the uncertainty of untested asymptomatic infections and incorporates short-term human migration. Disease transmission is also forecasted through a new formulation of transmission rates that evolve over space and time with respect to various non-pharmaceutical interventions, such as wearing masks, social distancing, and lockdown. The proposed multi-stage stochastic model overviews different scenarios on the number of asymptomatic individuals while optimizing the distribution of resources, such as ventilators, to minimize the total expected number of newly infected and deceased people. The Conditional Value at Risk (CVaR) is also incorporated into the multi-stage mean-risk model to allow for a trade-off between the weighted expected loss due to the outbreak and the expected risks associated with experiencing disastrous pandemic scenarios. We apply our multi-stage mean-risk epidemics-ventilator-logistics model to the case of controlling the COVID-19 in highly-impacted counties of New York and New Jersey. We calibrate, validate, and test our model using actual infection, population, and migration data. The results indicate that short-term migration influences the transmission of the disease significantly. The optimal number of ventilators allocated to each region depends on various factors, including the number of initial infections, disease transmission rates, initial ICU capacity, the population of a geographical location, and the availability of ventilator supply. Our data-driven modeling framework can be adapted to study the disease transmission dynamics and logistics of other similar epidemics and pandemics.
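The sketch below shows the mean-CVaR trade-off used in mean-risk stochastic programs of this kind: the objective blends the expected loss over scenarios with the Conditional Value-at-Risk of the worst scenarios. The scenario losses, the confidence level alpha, and the risk weight lambda are illustrative assumptions, not the paper's epidemic-logistics model.

```python
# Hedged sketch: scenario-based CVaR and a mean-risk objective.
import numpy as np

def cvar(losses, probs, alpha=0.95):
    """CVaR_alpha: expected loss in the worst (1 - alpha) tail of scenarios."""
    order = np.argsort(losses)
    losses, probs = losses[order], probs[order]
    cum = np.cumsum(probs)
    var = losses[np.searchsorted(cum, alpha)]      # scenario VaR at level alpha
    tail = np.maximum(losses - var, 0.0)
    return var + (probs @ tail) / (1.0 - alpha)

def mean_risk_objective(losses, probs, lam=0.5, alpha=0.95):
    """Weighted combination of expected loss and CVaR, as in mean-risk models."""
    return (probs @ losses) + lam * cvar(losses, probs, alpha)

rng = np.random.default_rng(2)
scenario_losses = rng.gamma(shape=2.0, scale=100.0, size=1000)  # toy outbreak costs
scenario_probs = np.full(1000, 1.0 / 1000)
print(f"expected loss: {scenario_probs @ scenario_losses:.1f}")
print(f"CVaR_0.95    : {cvar(scenario_losses, scenario_probs):.1f}")
print(f"mean-risk    : {mean_risk_objective(scenario_losses, scenario_probs):.1f}")
```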
Suppose we have a Bayesian model which combines evidence from several different sources. We want to know which model parameters most affect the estimate or decision from the model, or which of the parameter uncertainties drive the decision uncertainty. Furthermore, we want to prioritise what further data should be collected. These questions can be addressed by Value of Information (VoI) analysis, in which we estimate expected reductions in loss from learning specific parameters or collecting data of a given design. We describe the theory and practice of VoI for Bayesian evidence synthesis, using and extending ideas from health economics, computer modelling and Bayesian design. The methods are general to a range of decision problems including point estimation and choices between discrete actions. We apply them to a model for estimating prevalence of HIV infection, combining indirect information from several surveys, registers and expert beliefs. This analysis shows which parameters contribute most of the uncertainty about each prevalence estimate, and provides the expected improvements in precision from collecting specific amounts of additional data.
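As a hedged illustration of the core VoI quantities this abstract refers to, the sketch below estimates EVPI and the expected value of partial perfect information (EVPPI) for one parameter of a toy two-action decision problem via nested Monte Carlo. The loss function, the priors, and the sample sizes are illustrative assumptions, not the paper's HIV prevalence model.

```python
# Hedged sketch: EVPI and per-parameter EVPPI for a two-action decision.
import numpy as np

rng = np.random.default_rng(3)
N_OUTER, N_INNER = 2000, 2000

def loss(action, theta1, theta2):
    """Toy loss: action 0 is safe; action 1 pays off when theta1 + theta2 is large."""
    return np.where(action == 1, 10.0 - 8.0 * (theta1 + theta2), 2.0)

def draw(n):
    return rng.normal(0.6, 0.3, n), rng.normal(0.5, 0.2, n)   # assumed priors

# Baseline: pick the action with the smallest expected loss under current uncertainty.
t1, t2 = draw(N_OUTER)
baseline = min(loss(a, t1, t2).mean() for a in (0, 1))

# EVPI: expected gain if every parameter could be learned before acting.
evpi = baseline - np.minimum(loss(0, t1, t2), loss(1, t1, t2)).mean()

# EVPPI for theta1: learn theta1 only, keep theta2 uncertain (nested Monte Carlo).
inner_losses = []
for t1_known in t1:
    _, t2_inner = draw(N_INNER)
    inner_losses.append(min(loss(a, t1_known, t2_inner).mean() for a in (0, 1)))
evppi_theta1 = baseline - np.mean(inner_losses)

print(f"EVPI          ~ {evpi:.3f}")
print(f"EVPPI(theta1) ~ {evppi_theta1:.3f}")
```

Comparing EVPPI across parameters is what indicates which uncertainty drives the decision and which additional data collection should be prioritised.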
In order to maintain consistent quality of service, computer network engineers face the task of monitoring the traffic fluctuations on the individual links making up the network. However, due to resource constraints and limited access, it is not possible to directly measure all the links. Starting with a physically interpretable probabilistic model of network-wide traffic, we demonstrate how an expensively obtained set of measurements may be used to develop a network-specific model of the traffic across the network. This model may then be used in conjunction with easily obtainable measurements to provide more accurate prediction than is possible with only the inexpensive measurements. We show that the model, once learned, may be used for the same network for many different periods of traffic. Finally, we show an application of the prediction technique to create relevant control charts for detection and isolation of shifts in network traffic.
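A minimal sketch of the general "learn a joint model from expensive measurements, then condition on cheap measurements" pattern is shown below, here using a multivariate Gaussian over link loads fit during a fully instrumented period and later conditioned on the few links that remain easy to measure. The Gaussian model and the toy data are illustrative assumptions, not the paper's traffic model.

```python
# Hedged sketch: predict unmeasured link loads by Gaussian conditioning.
import numpy as np

rng = np.random.default_rng(4)

# Training period: all 6 links measured (expensive), 500 toy time samples.
latent = rng.normal(size=(500, 2))
mixing = rng.normal(size=(2, 6))
train = latent @ mixing + 0.1 * rng.normal(size=(500, 6))

mu = train.mean(axis=0)
cov = np.cov(train, rowvar=False)

def predict_unobserved(x_obs, obs_idx, mu, cov):
    """Conditional mean of the unmeasured links given the measured ones."""
    hid_idx = np.setdiff1d(np.arange(len(mu)), obs_idx)
    gain = cov[np.ix_(hid_idx, obs_idx)] @ np.linalg.inv(cov[np.ix_(obs_idx, obs_idx)])
    return hid_idx, mu[hid_idx] + gain @ (x_obs - mu[obs_idx])

# Operation period: only links 0 and 3 are cheap to measure.
obs_idx = np.array([0, 3])
x_full = rng.normal(size=(1, 2)) @ mixing + 0.1 * rng.normal(size=(1, 6))
hid_idx, x_pred = predict_unobserved(x_full[0, obs_idx], obs_idx, mu, cov)
print("predicted links", hid_idx, ":", np.round(x_pred, 2))
print("actual links   ", hid_idx, ":", np.round(x_full[0, hid_idx], 2))
```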
Accurate predictions of customers' future lifetime value (LTV) given their attributes and past purchase behavior enables a more customer-centric marketing strategy. Marketers can segment customers into various buckets based on the predicted LTV and, in turn, customize marketing messages or advertising copies to serve customers in different segments better. Furthermore, LTV predictions can directly inform marketing budget allocations and improve real-time targeting and bidding of ad impressions. One challenge of LTV modeling is that some customers never come back, and the distribution of LTV can be heavy-tailed. The commonly used mean squared error (MSE) loss does not accommodate the significant fraction of zero-value LTV from one-time purchasers and can be sensitive to extremely large LTVs from top spenders. In this article, we model the distribution of LTV given associated features as a mixture of zero point mass and lognormal distribution, which we refer to as the zero-inflated lognormal (ZILN) distribution. This modeling approach allows us to capture the churn probability and account for the heavy-tailed nature of LTV at the same time. It also yields straightforward uncertainty quantification of the point prediction. The ZILN loss can be used in both linear models and deep neural networks (DNN). For model evaluation, we recommend the normalized Gini coefficient to quantify model discrimination and decile charts to assess model calibration. Empirically, we demonstrate the predictive performance of our proposed model on two real-world public datasets.
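Below is a hedged sketch of a zero-inflated lognormal negative log-likelihood of the kind described above: the model emits three outputs per customer (a logit for the probability of returning, plus the mu and log-sigma of a lognormal for positive spend). The NumPy formulation and the toy batch are illustrative assumptions; the paper's implementation targets linear models and DNNs.

```python
# Hedged sketch: zero-inflated lognormal (ZILN) loss for LTV modeling.
import numpy as np

def ziln_loss(y, logits, mu, log_sigma):
    """Mean negative log-likelihood of observed LTVs under a ZILN model."""
    p = 1.0 / (1.0 + np.exp(-logits))          # probability of returning (y > 0)
    sigma = np.exp(log_sigma)
    positive = y > 0
    safe_y = np.where(positive, y, 1.0)        # avoid log(0); masked out below
    lognormal_logpdf = (
        -np.log(safe_y) - np.log(sigma) - 0.5 * np.log(2 * np.pi)
        - (np.log(safe_y) - mu) ** 2 / (2 * sigma ** 2)
    )
    ll = np.where(positive, np.log(p) + lognormal_logpdf, np.log1p(-p))
    return -ll.mean()

# Toy batch: two one-time purchasers (LTV = 0) and two returning customers.
y = np.array([0.0, 0.0, 35.0, 1200.0])
logits = np.array([-1.0, -0.5, 1.2, 2.0])      # assumed model outputs
mu = np.array([3.0, 3.0, 3.5, 6.8])
log_sigma = np.array([0.0, 0.0, 0.1, 0.4])
print(f"ZILN loss ~ {ziln_loss(y, logits, mu, log_sigma):.3f}")
```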