
Adaptive Ensemble Learning of Spatiotemporal Processes with Calibrated Predictive Uncertainty: A Bayesian Nonparametric Approach

Published by: Jeremiah Zhe Liu
Publication date: 2019
Research field: Mathematical Statistics
Paper language: English





Ensemble learning is a mainstay in modern data science practice. Conventional ensemble algorithms assign to base models a set of deterministic, constant model weights that (1) do not fully account for individual models' varying accuracy across data subgroups, nor (2) provide uncertainty estimates for the ensemble prediction. These shortcomings can yield predictions that are precise but biased, which can negatively impact the performance of the algorithm in real-world applications. In this work, we present an adaptive, probabilistic approach to ensemble learning using a transformed Gaussian process as a prior for the ensemble weights. Given input features, our method optimally combines base models based on their predictive accuracy in the feature space, and provides interpretable estimates of the uncertainty associated with both model selection, as reflected by the ensemble weights, and the overall ensemble predictions. Furthermore, to ensure that this quantification of the model uncertainty is accurate, we propose additional machinery to non-parametrically model the ensemble's predictive cumulative distribution function (CDF) so that it is consistent with the empirical distribution of the data. We apply the proposed method to data simulated from a nonlinear regression model, and use it to generate a spatial prediction model and associated prediction uncertainties for fine particle levels in eastern Massachusetts, USA.
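The following is a minimal, illustrative sketch (not the authors' implementation) of the central idea in the abstract: ensemble weights that vary over the feature space via a Gaussian process passed through a softmax transform. The kernel hyperparameters and the base models below are hypothetical.

```python
import numpy as np

def rbf_kernel(X1, X2, lengthscale=1.0, variance=1.0):
    """Squared-exponential kernel matrix between two sets of 1-D inputs."""
    d2 = (X1[:, None] - X2[None, :]) ** 2
    return variance * np.exp(-0.5 * d2 / lengthscale ** 2)

def sample_adaptive_weights(X, n_models, rng, jitter=1e-6):
    """Draw one GP function per base model and softmax-transform the draws so
    the weights are positive and sum to one at every input location."""
    K = rbf_kernel(X, X) + jitter * np.eye(len(X))
    L = np.linalg.cholesky(K)
    latent = L @ rng.standard_normal((len(X), n_models))   # GP draws, one column per model
    latent -= latent.max(axis=1, keepdims=True)            # numerical stability
    w = np.exp(latent)
    return w / w.sum(axis=1, keepdims=True)                # input-dependent simplex weights

rng = np.random.default_rng(0)
X = np.linspace(0.0, 1.0, 50)
# Hypothetical base-model predictions at the same inputs (e.g. two fitted regressors).
base_preds = np.column_stack([np.sin(2 * np.pi * X), 0.5 * X])
weights = sample_adaptive_weights(X, n_models=2, rng=rng)
ensemble_pred = (weights * base_preds).sum(axis=1)         # pointwise weighted combination
```

Because the weights are a function of the input rather than constants, each base model dominates only in the regions of the feature space where it is accurate, and the spread of posterior weight samples provides a model-selection uncertainty estimate.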




Read also

Yunbo Ouyang, Feng Liang (2017)
A nonparametric Bayes approach is proposed for the problem of estimating a sparse sequence based on Gaussian random variables. We adopt the popular two-group prior with one component being a point mass at zero, and the other component being a mixture of Gaussian distributions. Although the Gaussian family has been shown to be suboptimal for this problem, we find that Gaussian mixtures, with a proper choice of the means and mixing weights, have the desired asymptotic behavior, e.g., the corresponding posterior concentrates on balls with the desired minimax rate. To achieve computational efficiency, we propose to obtain the posterior distribution using a deterministic variational algorithm. Empirical studies on several benchmark data sets demonstrate the superior performance of the proposed algorithm compared to other alternatives.
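A hedged illustration (not the paper's variational algorithm) of the two-group prior described above: a point mass at zero mixed with a Gaussian mixture slab. All hyperparameter values below are made up for the example.

```python
import numpy as np
from scipy.stats import norm

def posterior_mean_two_group(y, w=0.1, pi=(0.5, 0.5), mu=(0.0, 0.0), tau=(1.0, 3.0)):
    """Posterior mean of theta_i for y_i ~ N(theta_i, 1) under the prior
    theta_i ~ (1 - w) * delta_0 + w * sum_k pi_k * N(mu_k, tau_k^2)."""
    pi, mu, tau = map(np.asarray, (pi, mu, tau))
    # Marginal likelihood of y under each slab component is N(mu_k, 1 + tau_k^2).
    slab_dens = pi * norm.pdf(y[:, None], loc=mu, scale=np.sqrt(1.0 + tau ** 2))
    spike_dens = (1.0 - w) * norm.pdf(y, loc=0.0, scale=1.0)
    incl_prob = w * slab_dens.sum(axis=1) / (w * slab_dens.sum(axis=1) + spike_dens)
    # Posterior mean within slab component k shrinks y toward mu_k.
    comp_mean = (tau ** 2 * y[:, None] + mu) / (1.0 + tau ** 2)
    comp_resp = slab_dens / slab_dens.sum(axis=1, keepdims=True)
    return incl_prob * (comp_resp * comp_mean).sum(axis=1)

y = np.array([0.1, 0.3, 4.0, -6.0])    # mostly-null observations plus two large signals
print(posterior_mean_two_group(y))     # small values shrink toward 0, large ones survive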
Ensemble learning is a mainstay in modern data science practice. Conventional ensemble algorithms assign to base models a set of deterministic, constant model weights that (1) do not fully account for variations in base model accuracy across subgroups, nor (2) provide uncertainty estimates for the ensemble prediction, which can result in mis-calibrated (i.e. precise but biased) predictions that in turn negatively impact the algorithm's performance in real-world applications. In this work, we present an adaptive, probabilistic approach to ensemble learning using a dependent tail-free process as the ensemble weight prior. Given an input feature $\mathbf{x} \in \mathcal{X}$, our method optimally combines base models based on their predictive accuracy in the feature space, and provides interpretable uncertainty estimates both in model selection and in ensemble prediction. To encourage scalable and calibrated inference, we derive a structured variational inference algorithm that jointly minimizes the KL objective and the model's calibration score (i.e. the Continuous Ranked Probability Score, CRPS). We illustrate the utility of our method on both a synthetic nonlinear function regression task and on the real-world application of spatio-temporal integration of particle pollution prediction models in New England.
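A small sketch of the calibration score named in this abstract: the closed-form CRPS of a Gaussian predictive distribution (Gneiting and Raftery). This is the generic scoring rule, not the paper's structured variational objective.

```python
import numpy as np
from scipy.stats import norm

def crps_gaussian(y, mu, sigma):
    """CRPS of a N(mu, sigma^2) predictive distribution at observation y.
    Lower is better; the score rewards predictions that are both sharp and calibrated."""
    z = (y - mu) / sigma
    return sigma * (z * (2.0 * norm.cdf(z) - 1.0) + 2.0 * norm.pdf(z) - 1.0 / np.sqrt(np.pi))

# An over-confident predictive distribution (too-small sigma) scores worse than a
# well-calibrated one for the same observation.
print(crps_gaussian(1.0, mu=0.0, sigma=1.0))   # ~0.60
print(crps_gaussian(1.0, mu=0.0, sigma=0.1))   # ~0.94, over-confident
```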
While there is an increasing amount of literature about Bayesian time series analysis, only a few Bayesian nonparametric approaches to multivariate time series exist. Most methods rely on Whittle's likelihood, involving the second-order structure of a stationary time series by means of its spectral density matrix. This is often modeled in terms of the Cholesky decomposition to ensure positive definiteness. However, asymptotic properties such as posterior consistency or posterior contraction rates are not known. A different idea is to model the spectral density matrix by means of random measures. This is in line with existing approaches for the univariate case, where the normalized spectral density is modeled similarly to a probability density, e.g. with a Dirichlet process mixture of Beta densities. In this work, we present a related approach for multivariate time series, with matrix-valued mixture weights induced by a Hermitian positive definite Gamma process. The proposed procedure is shown to perform well for both simulated and real data. Posterior consistency and contraction rates are also established.
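A hedged sketch of the Whittle likelihood this abstract refers to, for a multivariate series X (n x p) and a candidate spectral density matrix supplied at the Fourier frequencies. Constants and conventions vary across references; this follows one common parameterization and is not the paper's method.

```python
import numpy as np

def whittle_loglik(X, spectral_density):
    """Approximate log-likelihood sum_k -[ log det f(l_k) + tr(f(l_k)^{-1} I(l_k)) ],
    where I(l_k) is the periodogram matrix at Fourier frequency l_k."""
    n, p = X.shape
    d = np.fft.fft(X, axis=0)                  # DFT of each component series
    ll = 0.0
    for k in range(1, n // 2):                 # positive Fourier frequencies
        I_k = np.outer(d[k], d[k].conj()) / (2.0 * np.pi * n)        # periodogram matrix
        f_k = np.asarray(spectral_density(2.0 * np.pi * k / n), dtype=complex)
        ll -= np.log(np.linalg.det(f_k)).real
        ll -= np.trace(np.linalg.solve(f_k, I_k)).real
    return ll

# Example: bivariate white noise, whose spectral density is constant, f(l) = I_p / (2*pi).
rng = np.random.default_rng(1)
X = rng.standard_normal((256, 2))
print(whittle_loglik(X, lambda lam: np.eye(2) / (2.0 * np.pi)))
```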
Wei Jin, Yang Ni, Leah H. Rubin (2020)
Although combination antiretroviral therapy (ART) is highly effective in suppressing viral load for people with HIV (PWH), many ART agents may exacerbate central nervous system (CNS)-related adverse effects including depression. Therefore, understanding the effects of ART drugs on CNS function, especially mental health, can help clinicians personalize medicine with fewer adverse effects for PWH and prevent them from discontinuing their ART to avoid undesirable health outcomes and increased likelihood of HIV transmission. The emergence of electronic health records offers researchers unprecedented access to HIV data including individuals' mental health records, drug prescriptions, and clinical information over time. However, modeling such data is very challenging due to the high dimensionality of the drug combination space, individual heterogeneity, and sparseness of the observed drug combinations. We develop a Bayesian nonparametric approach to learn drug combination effects on mental health in PWH, adjusting for socio-demographic, behavioral, and clinical factors. The proposed method is built upon the subset-tree kernel method, which represents drug combinations in a way that synthesizes known regimen structure into a single mathematical representation. It also utilizes a distance-dependent Chinese restaurant process to cluster the heterogeneous population while taking into account individuals' treatment histories. We evaluate the proposed approach through simulation studies, and apply the method to a dataset from the Women's Interagency HIV Study, yielding interpretable and promising results. Our method has clinical utility in guiding clinicians to prescribe more informed and effective personalized treatment based on individuals' treatment histories and clinical characteristics.
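An illustrative sketch of the distance-dependent Chinese restaurant process prior mentioned above (Blei and Frazier's ddCRP): each subject links to another with probability decaying in their distance, and clusters are the connected components of the resulting link graph. The decay function and alpha below are made-up example values, not the paper's settings.

```python
import numpy as np

def sample_ddcrp_partition(D, alpha=1.0, decay=lambda d: np.exp(-d), rng=None):
    """Sample a partition of n items from a ddCRP with pairwise distance matrix D."""
    rng = rng or np.random.default_rng()
    n = D.shape[0]
    links = np.empty(n, dtype=int)
    for i in range(n):
        probs = decay(D[i]).astype(float)
        probs[i] = alpha                        # self-link starts a new cluster
        probs /= probs.sum()
        links[i] = rng.choice(n, p=probs)
    # Clusters are connected components of the undirected link graph (union-find).
    labels = np.arange(n)
    def find(x):
        while labels[x] != x:
            labels[x] = labels[labels[x]]
            x = labels[x]
        return x
    for i, j in enumerate(links):
        labels[find(i)] = find(j)
    return np.array([find(i) for i in range(n)])

D = np.abs(np.subtract.outer(np.arange(6.0), np.arange(6.0)))   # toy 1-D distances
print(sample_ddcrp_partition(D, alpha=0.5, rng=np.random.default_rng(0)))
```

In the treatment-history setting described above, the distances would encode similarity between individuals' histories, so subjects with similar regimens tend to be clustered together a priori.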
Classification is the task of assigning a new instance to one of a set of predefined categories based on the attributes of the instance. A classification tree is one of the most commonly used techniques in the area of classification. In this paper, we introduce a novel classification tree algorithm which we call the Direct Nonparametric Predictive Inference (D-NPI) classification algorithm. The D-NPI algorithm is based entirely on the Nonparametric Predictive Inference (NPI) approach, and it does not use any other assumption or information. NPI is a statistical methodology which learns from data in the absence of prior knowledge and uses only a few modelling assumptions, enabled by the use of lower and upper probabilities to quantify uncertainty. Due to the predictive nature of NPI, it is well suited for classification, as the nature of classification is explicitly predictive as well. The D-NPI algorithm uses a new split criterion called Correct Indication (CI). The CI reports the strength of the evidence, based on the data, that the attribute variables will indicate the correct class: a highly informative attribute yields high lower and upper probabilities for CI. The CI is based entirely on NPI, and it does not use any additional concepts such as entropy. The performance of the D-NPI classification algorithm is tested against several classification algorithms using classification accuracy, in-sample accuracy and tree size on different datasets from the UCI machine learning repository. The experimental results indicate that the D-NPI classification algorithm performs well in terms of classification accuracy and in-sample accuracy.