
Distributional Representation of Longitudinal Data: Visualization, Regression and Prediction

Published by Álvaro Eduardo Gajardo Cataldo
Publication date: 2021
Research field: Mathematical Statistics
Paper language: English





We develop a representation of Gaussian distributed sparsely sampled longitudinal data whereby the data for each subject are mapped to a multivariate Gaussian distribution; this map is entirely data-driven. The proposed method utilizes functional principal component analysis and is nonparametric, assuming no prior knowledge of the covariance or mean structure of the longitudinal data. This approach naturally connects with a deeper investigation of the behavior of the functional principal component (FPC) scores obtained for longitudinal data as the number of observations per subject increases from sparse to dense. We show how this is reflected in the shrinkage of the distribution of the conditional scores given noisy longitudinal observations towards a point mass located at the true but unobservable FPC scores. Mapping each subject's sparse observations to the corresponding conditional score distribution leads to useful visualizations and representations of sparse longitudinal data. Asymptotic rates of convergence as sample size increases are obtained for the 2-Wasserstein metric between the true and estimated conditional score distributions, both for a $K$-truncated functional principal component representation and for the case where $K = K(n)$ diverges with sample size $n \to \infty$. We apply these ideas to construct predictive distributions aimed at predicting outcomes given sparse longitudinal data.
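A minimal numpy sketch of the core computation: under the Gaussian model, the conditional distribution of a subject's first $K$ FPC scores given its sparse, noisy observations is again Gaussian, obtained by standard multivariate-normal conditioning. The inputs (mean function, eigenfunctions, eigenvalues, noise variance) are assumed to come from a separate FPCA fit such as PACE; the names below are hypothetical and this is not the authors' implementation.

import numpy as np

def conditional_score_distribution(y, t, mu_hat, phi_hat, lam, sigma2):
    # y, t    : (m,) noisy observations and their time points for one subject
    # mu_hat  : callable, estimated mean function
    # phi_hat : callable, returns the (m, K) matrix of estimated eigenfunctions at t
    # lam     : (K,) estimated eigenvalues; sigma2: estimated noise variance
    Phi = phi_hat(t)                                       # (m, K)
    Lam = np.diag(lam)                                     # (K, K)
    Sigma = Phi @ Lam @ Phi.T + sigma2 * np.eye(len(t))    # covariance of the observations
    resid = y - mu_hat(t)
    mean = Lam @ Phi.T @ np.linalg.solve(Sigma, resid)             # E[scores | data]
    cov = Lam - Lam @ Phi.T @ np.linalg.solve(Sigma, Phi @ Lam)    # Cov[scores | data]
    return mean, cov

As the number of observations per subject grows, the returned covariance shrinks towards zero, which is exactly the point-mass limit described in the abstract.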




Read also

In ordinary quantile regression, quantiles of different order are estimated one at a time. An alternative approach, referred to as quantile regression coefficients modeling (QRCM), is to model quantile regression coefficients as parametric functions of the order of the quantile. In this paper, we describe how the QRCM paradigm can be applied to longitudinal data. We introduce a two-level quantile function, in which two different quantile regression models are used to describe the (conditional) distribution of the within-subject response and that of the individual effects. We propose a novel type of penalized fixed-effects estimator, and discuss its advantages over standard methods based on $\ell_1$ and $\ell_2$ penalization. We provide model identifiability conditions, derive asymptotic properties, describe goodness-of-fit measures and model selection criteria, present simulation results, and discuss an application. The proposed method has been implemented in the R package qrcm.
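As a schematic of the QRCM idea (not the paper's exact two-level specification), the coefficients of a linear quantile regression are modeled as parametric functions of the quantile order $p$ through a user-chosen basis $b(p)$:
$$ Q_{Y}(p \mid x) = x^\top \beta(p), \qquad \beta_j(p \mid \theta_j) = \theta_j^\top b(p), \qquad \text{e.g. } b(p) = \big(1,\; p,\; \log p,\; -\log(1-p)\big)^\top, $$
where the basis shown is only an illustration. In the two-level version described above, a second quantile regression of the same form describes the distribution of the individual effects.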
Hao Ran, Yang Bai (2021)
In many longitudinal studies, the covariate and response are often intermittently observed at irregular, mismatched and subject-specific times. How to deal with such data when covariate and response are observed asynchronously is a frequently raised problem. Bayesian Additive Regression Trees (BART) is a Bayesian nonparametric approach that has been shown to be competitive with the best modern predictive methods such as random forests and boosted decision trees. The sum-of-trees structure combined with a Bayesian inferential framework provides an accurate and robust statistical method. The BART variant soft Bayesian Additive Regression Trees (SBART), constructed using randomized decision trees, was developed and shown to offer substantial theoretical and practical benefits. In this paper, we propose a weighted SBART model for asynchronous longitudinal data. In comparison to other methods, the proposed method is valid under weak assumptions on the covariate process. Extensive simulation studies provide numerical support for this solution, and data from an HIV study are used to illustrate our methodology.
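The weighted SBART model itself is not reproduced here, but the kernel weighting that asynchronous-data methods rest on is easy to sketch: each response time borrows covariate information from nearby covariate observation times, down-weighted by the time gap. A hypothetical numpy illustration (the Gaussian kernel and bandwidth h are assumptions):

import numpy as np

def kernel_weights(t_y, t_x, h):
    # Gaussian kernel in the gap between each response time (rows)
    # and each covariate time (columns); rows normalised to sum to one.
    d = (t_y[:, None] - t_x[None, :]) / h
    w = np.exp(-0.5 * d ** 2)
    return w / w.sum(axis=1, keepdims=True)

# Align a covariate measured at t_x with responses observed at t_y
# before handing the aligned pairs to the regression learner.
t_y = np.array([0.2, 0.5, 0.8])
t_x = np.array([0.1, 0.3, 0.6, 0.9])
x_obs = np.array([1.0, 1.4, 0.7, 0.2])
x_at_y = kernel_weights(t_y, t_x, h=0.2) @ x_obs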
We propose a novel spike and slab prior specification with scaled beta prime marginals for the importance parameters of regression coefficients to allow for general effect selection within the class of structured additive distributional regression. This enables us to model effects on all distributional parameters for arbitrary parametric distributions, and to consider various effect types such as non-linear or spatial effects as well as hierarchical regression structures. Our spike and slab prior relies on a parameter expansion that separates blocks of regression coefficients into overall scalar importance parameters and vectors of standardised coefficients. Hence, we can work with a scalar quantity for effect selection instead of a possibly high-dimensional effect vector, which yields improved shrinkage and sampling performance compared to the classical normal-inverse-gamma prior. We investigate the propriety of the posterior, show that the prior yields desirable shrinkage properties, propose a way of eliciting prior parameters and provide efficient Markov chain Monte Carlo sampling. Using both simulated data and three large-scale data sets, we show that our approach is applicable to data with a potentially large number of covariates, multilevel predictors accounting for hierarchically nested data, and non-standard response distributions such as bivariate normal or zero-inflated Poisson.
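Schematically, the parameter expansion described above replaces each coefficient block by a scalar importance parameter times a standardised vector (the notation is illustrative; the paper's exact hyperpriors may differ):
$$ \beta_j = \tau_j \tilde{\beta}_j, \qquad \tilde{\beta}_j \sim \mathcal{N}(0, I), \qquad \tau_j^2 \mid \delta_j \sim \delta_j\, p_{\mathrm{slab}}(\tau_j^2) + (1-\delta_j)\, p_{\mathrm{spike}}(\tau_j^2), $$
so that effect selection acts on the scalar $\tau_j$ rather than on the whole vector $\beta_j$, with the marginal of the importance parameter chosen to be scaled beta prime.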
S. Dias, P. Brito, P. Amaral (2020)
We address classification of distributional data, where units are described by histogram or interval-valued variables. The proposed approach uses a linear discriminant function in which distributions or intervals are represented by quantile functions, under specific assumptions. This discriminant function allows defining a score for each unit, in the form of a quantile function, which is used to classify the units into two a priori groups using the Mallows distance. There is a diversity of application areas for the proposed linear discriminant method. In this work we classify the airline companies operating in NY airports based on air time and arrival/departure delays, using a full year of flights.
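A small numpy sketch of the classification step: each unit's score is a quantile function, and the unit is assigned to the a priori group whose representative quantile function is closer in Mallows distance. The group representatives and the quantile grid are assumptions, not the paper's estimator.

import numpy as np

def mallows_distance(q1, q2, p=np.linspace(0.01, 0.99, 99)):
    # Mallows (2-Wasserstein) distance, computed as the L2 distance
    # between the two quantile functions on a grid of orders p.
    return np.sqrt(np.trapz((q1(p) - q2(p)) ** 2, p))

def classify(score_q, group1_q, group2_q):
    # Assign the unit to the group with the closer representative.
    d1 = mallows_distance(score_q, group1_q)
    d2 = mallows_distance(score_q, group2_q)
    return 1 if d1 <= d2 else 2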
Missing data are a common problem for both the construction and implementation of a prediction algorithm. Pattern mixture kernel submodels (PMKS) - a series of submodels, one for every missing-data pattern, each fit using only data from that pattern - are a computationally efficient remedy for both stages. Here we show that PMKS yield the most predictive algorithm among all standard missing-data strategies. Specifically, we show that the expected loss of a forecasting algorithm is minimized when each pattern-specific loss is minimized. Simulations and a re-analysis of the SUPPORT study confirm that PMKS generally outperform zero-imputation, mean-imputation, complete-case analysis, complete-case submodels, and even multiple imputation (MI). The degree of improvement is highly dependent on the missingness mechanism and the effect size of missing predictors. When the data are Missing at Random (MAR), MI can yield comparable forecasting performance but generally requires a larger computational cost. We see that predictions from the PMKS are equivalent to the limiting predictions of an MI procedure that uses a mean model dependent on missingness indicators (the MIMI model). Consequently, the MIMI model can be used to assess the MAR assumption in practice. The focus of this paper is on out-of-sample prediction behavior; implications for model inference are only briefly explored.
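A toy sketch of the PMKS idea, with linear submodels standing in for whatever learner is used in practice: fit one submodel per missingness pattern, using only that pattern's rows and observed columns, then route each new case to the submodel of its own pattern (names hypothetical; a new case must exhibit a pattern seen in training):

import numpy as np
from sklearn.linear_model import LinearRegression

def fit_pmks(X, y):
    # One submodel per missing-data pattern, fit only on that pattern's
    # rows and restricted to its observed columns.
    masks = np.isnan(X)
    models = {}
    for key in {tuple(m) for m in masks}:
        rows = (masks == np.array(key)).all(axis=1)
        cols = ~np.array(key)
        models[key] = LinearRegression().fit(X[np.ix_(rows, cols)], y[rows])
    return models

def predict_pmks(models, x_new):
    # Route the new case to the submodel for its own missingness pattern.
    key = tuple(np.isnan(x_new))
    cols = ~np.array(key)
    return models[key].predict(x_new[cols].reshape(1, -1))[0]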