
Evaluating probabilistic forecasts with scoringRules

Posted by: Sebastian Lerch
Publication date: 2017
Research field: Mathematical statistics
Paper language: English





Probabilistic forecasts in the form of probability distributions over future events have become popular in several fields including meteorology, hydrology, economics, and demography. In typical applications, many alternative statistical models and data sources can be used to produce probabilistic forecasts. Hence, evaluating and selecting among competing methods is an important task. The scoringRules package for R provides functionality for comparative evaluation of probabilistic models based on proper scoring rules, covering a wide range of situations in applied work. This paper discusses implementation and usage details, presents case studies from meteorology and economics, and points to the relevant background literature.
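To make the workflow concrete, here is a minimal R sketch (not taken from the paper; the observations and forecast distributions below are invented for illustration) that compares competing forecasts using the package's crps_norm() and crps_sample() functions:

## Minimal sketch: comparing probabilistic forecasts with proper scoring rules.
## The data and forecast parameters are invented for illustration.
library(scoringRules)

set.seed(1)
y <- rnorm(100, mean = 1, sd = 2)            # hypothetical realized observations

## Analytical CRPS for two competing normal predictive distributions
crps_good <- crps_norm(y, mean = 1, sd = 2)  # well-specified forecast
crps_bad  <- crps_norm(y, mean = 0, sd = 1)  # misspecified forecast
mean(crps_good); mean(crps_bad)              # lower mean score is better

## Sample-based CRPS, e.g. for an ensemble or MCMC predictive sample;
## rows of 'dat' correspond to the entries of 'y'
dat <- matrix(rnorm(100 * 500, mean = 1, sd = 2), nrow = 100)
mean(crps_sample(y, dat = dat))

The logarithmic score can be computed analogously via logs_norm() and logs_sample().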




Read also

Time series forecasting is an active research topic in academia as well as industry. Although machine learning methods are increasingly adopted to solve some of these forecasting challenges, statistical methods remain powerful when dealing with low-granularity data. This paper introduces a refined Bayesian exponential smoothing model with the help of probabilistic programming languages including Stan. Our model refinements include an additional global trend, a transformation for multiplicative form, the noise distribution, and the choice of priors. A benchmark study is conducted on a rich set of time series data sets, comparing our models with other well-known time series models.
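As a rough illustration of the general idea of augmenting exponential smoothing with a global trend term, here is a toy deterministic R sketch; it is not the paper's Bayesian Stan model, and the function name and parameter values are invented:

## Toy sketch: simple exponential smoothing of the level plus a fixed
## additive global trend in the one-step-ahead forecast. Illustration only.
ses_global_trend <- function(y, alpha = 0.3, gtrend = 0.5) {
  n     <- length(y)
  level <- numeric(n)
  fc    <- rep(NA_real_, n)                         # one-step-ahead forecasts
  level[1] <- y[1]
  for (t in 2:n) {
    fc[t]    <- level[t - 1] + gtrend               # forecast = level + global trend
    level[t] <- alpha * y[t] + (1 - alpha) * fc[t]  # smoothed level update
  }
  list(level = level, fitted = fc, forecast_next = level[n] + gtrend)
}

set.seed(3)
y <- cumsum(rnorm(60, mean = 0.5))                  # synthetic drifting series
ses_global_trend(y)$forecast_next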
This paper is due to appear as a chapter of the forthcoming Handbook of Approximate Bayesian Computation (ABC) edited by S. Sisson, Y. Fan, and M. Beaumont. We describe the challenge of calibrating climate simulators and discuss the differences in emphasis in climate science compared to many of the more traditional ABC application areas. The primary difficulty is how to do inference with a computationally expensive simulator that we can only afford to run a small number of times, and we describe how Gaussian process emulators are used as surrogate models in this case. We introduce the idea of history matching, a non-probabilistic calibration method that divides the parameter space into not-implausible and implausible regions. History matching can be shown to be a special case of ABC, but with a greater emphasis on defining realistic simulator discrepancy bounds and using these to define tolerances and metrics. We describe a design approach for choosing parameter values at which to run the simulator, and illustrate the approach on a toy climate model, showing that with careful design we can find the plausible region with a very small number of model evaluations. Finally, we describe how calibrated GENIE-1 (an Earth system model of intermediate complexity) predictions have been used, and why it is important to accurately characterise parametric uncertainty.
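A toy R sketch of one wave of history matching may help fix ideas. The implausibility cutoff of 3 is conventional; the stand-in emulator, observation, and variance terms below are invented (in practice the emulator mean and variance would come from a Gaussian process fitted to a small number of simulator runs):

## Toy sketch of one history-matching wave (illustration only).
set.seed(2)

z        <- 1.2     # hypothetical observation of the real-world process
var_obs  <- 0.05^2  # observation error variance (assumed)
var_disc <- 0.10^2  # simulator discrepancy variance (assumed)

## Stand-in emulator mean and variance over a one-dimensional input
em_mean <- function(x) sin(3 * x) + x
em_var  <- function(x) rep(0.02^2, length(x))

## Implausibility over a dense grid of candidate inputs
x    <- seq(0, 2, length.out = 1000)
impl <- abs(z - em_mean(x)) / sqrt(em_var(x) + var_disc + var_obs)

## Retain the not-implausible region (conventional cutoff of 3)
range(x[impl < 3])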
The scientific understanding of real-world processes has dramatically improved over the years through computer simulations. Such simulators represent complex mathematical models that are implemented as computer codes which are often expensive. The validity of using a particular simulator to draw accurate conclusions relies on the assumption that the computer code is correctly calibrated. This calibration procedure is often pursued under extensive experimentation and comparison with data from a real-world process. The problem is that the data collection may be so expensive that only a handful of experiments are feasible. History matching is a calibration technique that, given a simulator, iteratively discards regions of the input space using an implausibility measure. When the simulator is computationally expensive, an emulator is used to explore the input space. In this paper, a Gaussian process provides a complete probabilistic output that is incorporated into the implausibility measure. The identification of regions of interest is accomplished with recently developed annealing sampling techniques. Active learning functions are incorporated into the history matching procedure to refocus on the input space and improve the emulator. The efficiency of the proposed framework is tested on well-known examples from the history matching literature, as well as on a proposed testbed of functions of higher dimensions.
Han Fu (2021)
Health economic evaluations often require predictions of survival rates beyond the follow-up period. Parametric survival models can be more convenient for economic modelling than the Cox model. The generalized gamma (GG) and generalized F (GF) distributions are extensive families that contain almost all commonly used distributions with various hazard shapes and arbitrary complexity. In this study, we present a new SAS macro for implementing a wide variety of flexible parametric models, including the GG and GF distributions and their special cases, as well as the Gompertz distribution. Proper custom distributions are also supported. Unlike existing SAS procedures, this macro supports regression not only on the location parameter but also on ancillary parameters, which greatly increases model flexibility. In addition, the SAS macro supports weighted regression, stratified regression, and robust inference. This study demonstrates with several examples how the SAS macro can be used for flexible survival modelling and extrapolation.
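The macro itself is SAS code that is not reproduced here. As a rough R analogue (a deliberately swapped-in tool, not the macro described above), the flexsurv package fits the generalized gamma family and likewise allows covariates on ancillary parameters:

## Rough R analogue using flexsurv (not the SAS macro described above).
library(flexsurv)
library(survival)

## 'bc' is a breast-cancer survival data set shipped with flexsurv
fit <- flexsurvreg(Surv(recyrs, censrec) ~ group,
                   anc  = list(sigma = ~ group),  # regression on an ancillary parameter
                   data = bc, dist = "gengamma")
fit

## Extrapolated survival beyond follow-up, as used in economic modelling
summary(fit, t = seq(0, 15, by = 1), type = "survival")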
The leverage effect, the correlation between an asset's return and its volatility, has played a key role in forecasting and understanding volatility and risk. While it is a long-standing consensus that leverage effects exist and improve forecasts, empirical evidence paradoxically does not show that most individual stocks exhibit this phenomenon, mischaracterizing risk and therefore leading to poor predictive performance. We examine this paradox, with the goal of improving density forecasts, by relaxing the assumption of linearity in the leverage effect. Nonlinear generalizations of the leverage effect are proposed within the Bayesian stochastic volatility framework in order to capture flexible leverage structures, where small fluctuations in prices have a different effect from large shocks. Efficient Bayesian sequential computation is developed and implemented to estimate this effect in a practical, online manner. Examining 615 stocks that comprise the S&P 500 and Nikkei 225, we find that relaxing the linear assumption to our proposed nonlinear leverage effect function improves predictive performance for 89% of all stocks compared to the conventional model assumption.
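For context, a short R simulation of the conventional (linear) leverage effect in a stochastic volatility model is sketched below; it is not the paper's nonlinear specification or its Bayesian sequential estimation, and all parameter values are invented:

## Illustrative simulation of an SV model with linear leverage: rho is the
## correlation between today's return shock and tomorrow's log-volatility
## innovation. Illustration only.
simulate_sv_leverage <- function(n = 1000, mu = -9, phi = 0.97,
                                 sigma_eta = 0.15, rho = -0.5) {
  h <- numeric(n); y <- numeric(n)
  h[1] <- mu
  for (t in 1:(n - 1)) {
    eps <- rnorm(1)                                          # return shock
    eta <- sigma_eta * (rho * eps + sqrt(1 - rho^2) * rnorm(1))
    y[t]     <- exp(h[t] / 2) * eps                          # observed return
    h[t + 1] <- mu + phi * (h[t] - mu) + eta                 # log-volatility
  }
  y[n] <- exp(h[n] / 2) * rnorm(1)
  data.frame(return = y, log_vol = h)
}

set.seed(4)
sim <- simulate_sv_leverage()
## With negative rho, a negative return tends to precede higher volatility
cor(sim$return[-nrow(sim)], diff(sim$log_vol))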