
Proper local scoring rules

 Added by Matthew Parry
 Publication date 2011
 Language: English





We investigate proper scoring rules for continuous distributions on the real line. It is known that the log score is the only such rule that depends on the quoted density only through its value at the outcome that materializes. Here we allow further dependence on a finite number $m$ of derivatives of the density at the outcome, and describe a large class of such $m$-local proper scoring rules: these exist for all even $m$ but no odd $m$. We further show that for $m \geq 2$ all such $m$-local rules can be computed without knowledge of the normalizing constant of the distribution.
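For $m = 2$, the best-known example of such a rule is the Hyvärinen score, $S(x, Q) = 2 (\log q)''(x) + ((\log q)'(x))^2$, which depends on the quoted density only through the first two derivatives of $\log q$ at $x$, so any normalizing constant cancels. A minimal sketch (the Gaussian quote and its parameter values are illustrative, not from the paper):

```python
import numpy as np

def hyvarinen_score(x, dlogq, d2logq):
    """Hyvarinen score S(x, Q) = 2 (log q)''(x) + ((log q)'(x))^2.
    A 2-local proper scoring rule: it uses only the first two
    derivatives of log q at x, so the normalizing constant cancels."""
    return 2.0 * d2logq(x) + dlogq(x) ** 2

# Illustrative quote: an unnormalized Gaussian q(x) proportional to
# exp(-(x - mu)^2 / (2 s^2)); its log-derivatives need no constant.
mu, s = 0.0, 1.5
dlogq = lambda x: -(x - mu) / s**2
d2logq = lambda x: -1.0 / s**2

score = hyvarinen_score(1.0, dlogq, d2logq)
```

The same call pattern works for any density known only up to proportionality, which is the computational advantage the abstract refers to.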



Related research

A scoring rule is a loss function measuring the quality of a quoted probability distribution $Q$ for a random variable $X$, in the light of the realized outcome $x$ of $X$; it is proper if the expected score, under any distribution $P$ for $X$, is minimized by quoting $Q=P$. Using the fact that any differentiable proper scoring rule on a finite sample space $\mathcal{X}$ is the gradient of a concave homogeneous function, we consider when such a rule can be local in the sense of depending only on the probabilities quoted for points in a nominated neighborhood of $x$. Under mild conditions, we characterize such a proper local scoring rule in terms of a collection of homogeneous functions on the cliques of an undirected graph on the space $\mathcal{X}$. A useful property of such rules is that the quoted distribution $Q$ need only be known up to a scale factor. Examples of the use of such scoring rules include Besag's pseudo-likelihood and Hyvärinen's method of ratio matching.
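Propriety is easy to check numerically on a finite sample space for the log score, the simplest local rule: the expected score under $P$ is the cross-entropy, minimized at $Q = P$. A small sketch (distribution values are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)

def log_score(q, x):
    """Log score S(x, Q) = -log q(x): local, since it depends on Q
    only through the probability quoted at the realized outcome x."""
    return -np.log(q[x])

def expected_score(p, q):
    """Expected score of quote q when outcomes are drawn from p."""
    return sum(p[x] * log_score(q, x) for x in range(len(p)))

p = np.array([0.2, 0.5, 0.3])
honest = expected_score(p, p)  # score for the honest quote Q = P

# Propriety: no dishonest quote does better in expectation.
for _ in range(100):
    q = rng.dirichlet(np.ones(3))
    assert expected_score(p, q) >= honest - 1e-12
```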
Proper scoring rules are commonly applied to quantify the accuracy of distribution forecasts. Given an observation they assign a scalar score to each distribution forecast, with the lowest expected score attributed to the true distribution. The energy and variogram scores are two rules that have recently gained some popularity in multivariate settings because their computation does not require a forecast to have a parametric density function, and so they are broadly applicable. Here we conduct a simulation study to compare the discrimination ability of the energy score and three variogram scores. Compared with other studies, our simulation design is more realistic because it is supported by a historical data set containing commodity prices, currencies and interest rates, and our data generating processes include a diverse selection of models with different marginal distributions, dependence structures, and calibration windows. This facilitates a comprehensive comparison of the performance of proper scoring rules in different settings. To compare the scores we use three metrics: the mean relative score, error rate and a generalised discrimination heuristic. Overall, we find that the variogram score with parameter p=0.5 outperforms the energy score and the other two variogram scores.
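Both scores admit straightforward Monte Carlo estimators from an ensemble of forecast draws, which is what makes them applicable without a parametric density. A sketch (the ensemble size and unit variogram weights are illustrative choices, not the study's configuration):

```python
import numpy as np

def energy_score(samples, y):
    """Monte Carlo energy score: E||X - y|| - 0.5 E||X - X'||,
    estimated from an (m, d) array of forecast draws."""
    term1 = np.mean(np.linalg.norm(samples - y, axis=1))
    diffs = samples[:, None, :] - samples[None, :, :]
    term2 = np.mean(np.linalg.norm(diffs, axis=2))
    return term1 - 0.5 * term2

def variogram_score(samples, y, p=0.5):
    """Variogram score of order p, here with unit weights w_ij = 1:
    sum over pairs (i, j) of (|y_i - y_j|^p - E|X_i - X_j|^p)^2."""
    d = samples.shape[1]
    total = 0.0
    for i in range(d):
        for j in range(d):
            ex = np.mean(np.abs(samples[:, i] - samples[:, j]) ** p)
            total += (np.abs(y[i] - y[j]) ** p - ex) ** 2
    return total

# Illustrative use: 200 ensemble members for a 3-dimensional quantity.
rng = np.random.default_rng(1)
fcst = rng.normal(size=(200, 3))
obs = np.zeros(3)
es = energy_score(fcst, obs)
vs = variogram_score(fcst, obs, p=0.5)
```

A degenerate ensemble concentrated exactly on the observation attains a score of zero under both rules, the best possible value.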
This paper forges a strong connection between two seemingly unrelated forecasting problems: incentive-compatible forecast elicitation and forecast aggregation. Proper scoring rules are the well-known solution to the former problem. To each such rule s we associate a corresponding method of aggregation, mapping expert forecasts and expert weights to a consensus forecast, which we call *quasi-arithmetic (QA) pooling* with respect to s. We justify this correspondence in several ways:
- QA pooling with respect to the two most well-studied scoring rules (quadratic and logarithmic) corresponds to the two most well-studied forecast aggregation methods (linear and logarithmic).
- Given a scoring rule s used for payment, a forecaster agent who sub-contracts several experts, paying them in proportion to their weights, is best off aggregating the experts' reports using QA pooling with respect to s, meaning this strategy maximizes its worst-case profit (over the possible outcomes).
- The score of an aggregator who uses QA pooling is concave in the experts' weights. As a consequence, online gradient descent can be used to learn appropriate expert weights from repeated experiments with low regret.
- The class of all QA pooling methods is characterized by a natural set of axioms (generalizing classical work by Kolmogorov on quasi-arithmetic means).
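For the two classical rules, the stated correspondence recovers familiar pooling formulas: QA pooling under the quadratic score is linear pooling, and under the log score it is logarithmic (weighted geometric) pooling. A sketch of the two pools (the example forecasts and weights are illustrative):

```python
import numpy as np

def linear_pool(forecasts, weights):
    """QA pooling w.r.t. the quadratic score: a weighted arithmetic
    mean of the experts' probability vectors."""
    return np.average(forecasts, axis=0, weights=weights)

def logarithmic_pool(forecasts, weights):
    """QA pooling w.r.t. the log score: the consensus is proportional
    to the weighted geometric mean prod_i q_i^{w_i}, renormalized."""
    logp = np.average(np.log(forecasts), axis=0, weights=weights)
    p = np.exp(logp)
    return p / p.sum()

# Two experts quoting distributions over three outcomes.
forecasts = np.array([[0.1, 0.6, 0.3],
                      [0.3, 0.3, 0.4]])
weights = [0.7, 0.3]
lin_consensus = linear_pool(forecasts, weights)
geo_consensus = logarithmic_pool(forecasts, weights)
```

Both pools reduce to the common forecast when the experts agree, as any reasonable aggregation method should.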
In this paper, we deal with the problem of calibrating thresholding rules in the setting of Poisson intensity estimation. By using sharp concentration inequalities, oracle inequalities are derived and we establish the optimality of our estimate up to a logarithmic term. This result is proved under mild assumptions and we do not impose any condition on the support of the signal to be estimated. Our procedure is based on data-driven thresholds. As usual, they depend on a threshold parameter $\gamma$ whose optimal value is hard to estimate from the data. Our main concern is to provide some theoretical and numerical results to handle this issue. In particular, we establish the existence of a minimal threshold parameter from the theoretical point of view: taking $\gamma<1$ deteriorates oracle performances of our procedure. In the same spirit, we establish the existence of a maximal threshold parameter and our theoretical results point out the optimal range $\gamma\in[1,12]$. Then, we conduct a numerical study that shows that choosing $\gamma$ larger than 1 but close to 1 is a fairly good choice. Finally, we compare our procedure with classical ones, revealing the harmful role of the support of functions when estimated by classical procedures.
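The role of $\gamma$ can be illustrated with a generic hard-thresholding rule. The form below is a standard universal-threshold sketch, not the paper's data-driven Poisson thresholds; it serves only to show how $\gamma$ scales the cutoff between coefficients that are kept and those set to zero:

```python
import numpy as np

def hard_threshold(coeffs, gamma, sigma2):
    """Generic hard thresholding: keep a coefficient only if it
    exceeds a universal-style threshold scaled by gamma. This
    functional form is a textbook illustration, NOT the paper's
    data-driven thresholds for the Poisson setting."""
    thresh = np.sqrt(2.0 * gamma * sigma2 * np.log(len(coeffs)))
    return np.where(np.abs(coeffs) > thresh, coeffs, 0.0)

# Small coefficients are zeroed; large ones survive.
coeffs = np.array([0.01, 5.0, -4.0, 0.02])
est = hard_threshold(coeffs, gamma=1.0, sigma2=1.0)
```

Raising $\gamma$ raises the cutoff, killing more coefficients; the paper's point is that $\gamma < 1$ is provably too lenient while values just above 1 work well in practice.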
This paper introduces an objective for optimizing proper scoring rules. The objective is to maximize the increase in payoff of a forecaster who exerts a binary level of effort to refine a posterior belief from a prior belief. In this framework we characterize optimal scoring rules in simple settings, give efficient algorithms for computing optimal scoring rules in complex settings, and identify simple scoring rules that are approximately optimal. In comparison, standard scoring rules in theory and practice -- for example the quadratic rule, scoring rules for the expectation, and scoring rules for multiple tasks that are averages of single-task scoring rules -- can be very far from optimal.