
Prediction error quantification through probabilistic scaling -- EXTENDED VERSION

Published by: Dr. Martina Mammarella
Publication date: 2021
Research field: Information Engineering
Language: English





In this paper, we address the probabilistic error quantification of a general class of prediction methods. We consider a given prediction model and show how to obtain, through a sample-based approach, a probabilistic upper bound on the absolute value of the prediction error. The proposed scheme is based on a probabilistic scaling methodology in which the number of required randomized samples is independent of the complexity of the prediction model. The methodology is extended to address the case in which the probabilistic uncertainty quantification is required to be valid for every member of a finite family of predictors. We illustrate the results of the paper by means of a numerical example.
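The abstract does not spell out the scaling rule, but a minimal Python sketch of the standard order-statistics mechanism behind probabilistic scaling might look as follows; the function names, the (eps, delta) parameters, and the synthetic errors are our own illustrative choices, not the paper's exact construction.

```python
import math
import numpy as np

def binomial_tail(r, N, eps):
    """B(r; N, eps) = sum_{i=0}^{r} C(N, i) eps^i (1 - eps)^(N - i)."""
    return sum(math.comb(N, i) * eps**i * (1.0 - eps)**(N - i) for i in range(r + 1))

def probabilistic_error_bound(abs_errors, eps=0.05, delta=1e-6):
    """Return gamma such that P(|new error| <= gamma) >= 1 - eps holds with
    confidence at least 1 - delta, using only sorted calibration errors.

    Classic order-statistics result: if gamma is the (r+1)-th largest of N
    i.i.d. samples, then Pr{violation probability of gamma > eps} <= B(r; N, eps).
    """
    e = np.sort(np.asarray(abs_errors))[::-1]          # descending
    N = len(e)
    # pick the largest discard count r still meeting the confidence level
    r = -1
    while r + 1 < N and binomial_tail(r + 1, N, eps) <= delta:
        r += 1
    if r < 0:  # even r = 0 fails: (1 - eps)^N > delta, so N is too small
        raise ValueError("not enough calibration samples for (eps, delta)")
    return e[r]                                        # (r+1)-th largest error

# Demo with synthetic errors; any black-box predictor could produce them.
rng = np.random.default_rng(0)
errors = np.abs(rng.standard_normal(2000))             # stand-in for |y - y_hat|
print(probabilistic_error_bound(errors, eps=0.05, delta=1e-6))
```

Note that the admissible (N, r) pairs follow from (eps, delta) alone, never from the predictor that produced the errors, which is the model-independence property the abstract emphasizes.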


Read also

In this paper, a sample-based procedure for obtaining simple and computable approximations of chance-constrained sets is proposed. The procedure makes it possible to control the complexity of the approximating set by defining families of simple approximating sets of given complexity. A probabilistic scaling procedure then rescales these sets to obtain the desired probabilistic guarantees. The proposed approach is shown to be applicable to several problems in systems and control, such as the design of Stochastic Model Predictive Control schemes or the solution of probabilistic set membership estimation problems.
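As an illustration of the scaling step for one of the simplest candidate families (a ball under sampled linear constraints; this specific setup is our assumption, not necessarily the paper's), a sketch might be:

```python
import numpy as np

def scale_ball(center, A, b, r=0):
    """Probabilistically scale a ball around `center` to approximate the
    chance-constrained set {x : a_w^T x <= b_w for most realizations w}.

    A[i], b[i] encode the i-th sampled constraint a_i^T x <= b_i. A ball of
    radius rho centred at c satisfies sample i iff
        a_i^T c + rho * ||a_i|| <= b_i,  i.e.  rho <= (b_i - a_i^T c) / ||a_i||.
    Taking the (r+1)-th smallest admissible radius discards the r tightest
    samples, with the same binomial-tail guarantee as in the sketch above.
    """
    slack = b - A @ center                       # b_i - a_i^T c for each sample
    radii = slack / np.linalg.norm(A, axis=1)    # max radius allowed per sample
    if np.any(radii <= 0):
        raise ValueError("center infeasible for some sampled constraint")
    return np.sort(radii)[r]                     # (r+1)-th smallest radius

# Demo: random halfspaces around the origin (synthetic stand-in data).
rng = np.random.default_rng(1)
A = rng.standard_normal((500, 2))
b = np.abs(rng.standard_normal(500)) + 0.5
print(scale_ball(np.zeros(2), A, b, r=5))
```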
Estimation of the prediction error of a linear estimation rule is difficult if the data analyst also uses the data to select a set of variables and construct the estimation rule using only the selected variables. In this work, we propose an asymptotically unbiased estimator for the prediction error after model search. Under some additional mild assumptions, we show that our estimator converges to the true prediction error in $L^2$ at the rate of $O(n^{-1/2})$, with $n$ being the number of data points. Our estimator applies to general selection procedures and does not require an analytical form for the selection. The number of variables to select from can grow exponentially in $n$, allowing applications to high-dimensional data. The method also allows model misspecification and does not require a linear underlying model. One application of our method is that it provides an estimator of the degrees of freedom for many discontinuous estimation rules such as best subset selection or the relaxed Lasso. The connection to Stein's Unbiased Risk Estimator is discussed. We consider in-sample prediction errors in this work, with some extension to out-of-sample errors in low-dimensional linear models. Examples such as best subset selection and the relaxed Lasso are considered in simulations, where our estimator outperforms both $C_p$ and cross-validation in various settings.
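The authors' estimator itself is not described in the abstract, so the sketch below only illustrates the underlying problem: a naive Mallows-type error estimate $\hat{\mathrm{err}} = \mathrm{RSS}/n + 2\,\mathrm{df}\,\sigma^2/n$ with df set to the number of selected variables ignores the search over subsets and is therefore downward-biased. All names and data here are illustrative.

```python
from itertools import combinations
import numpy as np

def best_subset(X, y, k):
    """Exhaustive best subset selection of size k (smallest RSS)."""
    best = None
    for S in combinations(range(X.shape[1]), k):
        Xs = X[:, S]
        beta, *_ = np.linalg.lstsq(Xs, y, rcond=None)
        rss = np.sum((y - Xs @ beta) ** 2)
        if best is None or rss < best[0]:
            best = (rss, S)
    return best

n, p, k, sigma = 100, 10, 3, 1.0
rng = np.random.default_rng(2)
X = rng.standard_normal((n, p))
y = rng.standard_normal(n) * sigma            # pure noise: no true signal

rss, S = best_subset(X, y, k)
cp_naive = rss / n + 2 * k * sigma**2 / n     # df = k ignores the search
print(f"naive Cp estimate of prediction error: {cp_naive:.3f}")
# Because the search over all C(10, 3) subsets is ignored, df = k understates
# the effective degrees of freedom, so the estimate is downward-biased.
```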
We consider multi-agent systems where agents' actions and beliefs are determined aleatorically, that is, by the throw of dice. Such a system consists of possible worlds that assign distributions to independent random variables, and agents who assign probabilities to these possible worlds. We present a novel syntax and semantics for such systems and show that they generalise modal logic. We also give a sound and complete calculus for reasoning in the base semantics, and a sound calculus for the full modal semantics that we conjecture to be complete. Finally, we discuss some applications to reasoning about game-playing agents.
Thomas Ehrhard, 2020
In probabilistic coherence spaces, a denotational model of probabilistic functional languages, morphisms are analytic and therefore smooth. We explore two related applications of the corresponding derivatives. First, we show how derivatives allow one to compute the expectation of execution time in the weak head reduction of probabilistic PCF (pPCF). Next, we apply a general notion of local differential of morphisms to the proof of a Lipschitz property of these morphisms, allowing in turn to relate the observational distance on pPCF terms to a distance with which the model is naturally equipped. This suggests that extending probabilistic programming languages with derivatives, in the spirit of the differential lambda-calculus, could be quite meaningful.
Predicting a new response from a covariate is a challenging task in regression, and the era of high-dimensional data raises new questions. In this paper, we are interested in the inverse regression method from a theoretical viewpoint. Theoretical results have already been derived for the well-known linear model, but recently the curse of dimensionality has increased the interest of practitioners and theoreticians in generalizing those results to various estimators calibrated for the high-dimensional context. To deal with high-dimensional data, inverse regression is used in this paper. It is known to be a reliable and efficient approach when the number of features exceeds the number of observations. Indeed, under some conditions, dealing with the inverse regression problem associated with a forward regression problem drastically reduces the number of parameters to estimate and makes the problem tractable. We study estimators constructed by inverse regression when both the responses and the covariates are multivariate, the main result being explicit asymptotic prediction regions for the response. The performance of the proposed estimators and prediction regions is also analyzed through a simulation study and compared with usual estimators.
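A minimal numpy sketch of the inverse-regression idea, under our own simplifying assumptions (noise with diagonal covariance, point prediction only rather than the paper's prediction regions; all names and data are illustrative): each of the p covariates is regressed on the low-dimensional response, which keeps every fit well-posed even when p > n, and a new response is recovered by weighted least squares inversion.

```python
import numpy as np

def fit_inverse_regression(X, Y):
    """Fit X ~ B Y + mu + noise coordinate-wise (each regression is only
    q-dimensional, so p >> n is not a problem), with diagonal noise variance."""
    n, q = Y.shape
    Yc = np.hstack([Y, np.ones((n, 1))])             # add intercept column
    coef, *_ = np.linalg.lstsq(Yc, X, rcond=None)    # (q+1) x p coefficients
    B, mu = coef[:q].T, coef[q]                      # B: p x q, mu: p
    var = np.mean((X - Yc @ coef) ** 2, axis=0)      # per-coordinate noise var
    return B, mu, var

def predict_response(x_new, B, mu, var):
    """Invert the fitted model by weighted least squares:
    y_hat = argmin_y || (x_new - mu - B y) / sqrt(var) ||^2."""
    W = B / var[:, None]                             # Sigma^{-1} B (diagonal Sigma)
    return np.linalg.solve(B.T @ W, W.T @ (x_new - mu))

# Demo: p = 200 covariates, n = 50 observations, q = 2 responses.
rng = np.random.default_rng(3)
n, p, q = 50, 200, 2
Y = rng.standard_normal((n, q))
B_true = rng.standard_normal((p, q))
X = Y @ B_true.T + 0.1 * rng.standard_normal((n, p))

B, mu, var = fit_inverse_regression(X, Y)
y_new = np.array([0.5, -1.0])
x_new = B_true @ y_new + 0.1 * rng.standard_normal(p)
print(predict_response(x_new, B, mu, var))           # should be close to y_new
```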