
Model Selection for Mixture Models - Perspectives and Strategies

Submitted by Christian P. Robert
Publication date: 2018
Research field: Mathematical statistics
Paper language: English
Authors: Gilles Celeux -





Determining the number G of components in a finite mixture distribution is an important and difficult inference issue. This is a most important question, because statistical inference about the resulting model is highly sensitive to the value of G. Selecting an erroneous value of G may produce a poor density estimate. This is also a most difficult question from a theoretical perspective as it relates to unidentifiability issues of the mixture model. This is further a most relevant question from a practical viewpoint since the meaning of the number of components G is strongly related to the modelling purpose of a mixture distribution. We distinguish in this chapter between selecting G as a density estimation problem in Section 2 and selecting G in a model-based clustering framework in Section 3. Both sections discuss frequentist as well as Bayesian approaches. We present here some of the Bayesian solutions to the different interpretations of picking the right number of components in a mixture, before concluding on the ill-posed nature of the question.
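As a purely illustrative sketch of the density-estimation view of this question (and not the chapter's own procedure), one can fit Gaussian mixtures for a range of candidate values of G and compare them with an information criterion such as BIC; the toy data and the scikit-learn calls below are assumptions of this example, not part of the paper.

import numpy as np
from sklearn.mixture import GaussianMixture

# Toy data: two well-separated univariate Gaussian components.
rng = np.random.default_rng(0)
x = np.concatenate([rng.normal(-2.0, 1.0, 300),
                    rng.normal(3.0, 0.5, 200)]).reshape(-1, 1)

# Fit a mixture for each candidate number of components G and record BIC.
candidates = range(1, 7)
bics = []
for g in candidates:
    gm = GaussianMixture(n_components=g, n_init=5, random_state=0).fit(x)
    bics.append(gm.bic(x))

# The smallest BIC suggests a value of G for density-estimation purposes.
g_hat = list(candidates)[int(np.argmin(bics))]
print("Selected G:", g_hat)

The same loop could record a clustering-oriented criterion such as ICL instead of BIC; the two often disagree, which is one concrete way the density-estimation and clustering viewpoints of Sections 2 and 3 diverge.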




Read also

In Chib (1995), a method for approximating marginal densities in a Bayesian setting is proposed, with one prominent application being the estimation of the number of components in a normal mixture. As pointed out in Neal (1999) and Fruhwirth-Schnatter (2004), the approximation often falls short of providing a proper approximation to the true marginal densities because of the well-known label switching problem (Celeux et al., 2000). While there exist other alternatives for deriving approximate marginal densities, we reconsider the original proposal here and show, as in Berkhof et al. (2003) and Lee et al. (2008), that it truly approximates the marginal densities once the label switching issue has been solved.
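As a minimal, hypothetical illustration of why label switching matters (and not the relabeling scheme of Celeux et al. or the corrected Chib estimator itself), the sketch below permutes component labels across simulated MCMC draws and then reorders each draw by its component means before averaging; all variable names and the toy setup are assumptions of the example.

import numpy as np

# Hypothetical MCMC output for a 3-component mixture:
# draws of component means and weights, shape (n_draws, G).
rng = np.random.default_rng(1)
n_draws, G = 1000, 3
mu = rng.normal(loc=[0.0, 2.0, 5.0], scale=0.1, size=(n_draws, G))
w = rng.dirichlet([10.0, 10.0, 10.0], size=n_draws)

# Simulate label switching: randomly permute the labels in each draw.
for i in range(n_draws):
    perm = rng.permutation(G)
    mu[i], w[i] = mu[i, perm], w[i, perm]

# Naive averaging is meaningless once labels have switched ...
print("naive posterior means:", mu.mean(axis=0))

# ... whereas relabeling each draw (here: sorting components by mean)
# restores interpretable component-specific summaries.
order = np.argsort(mu, axis=1)
mu_relab = np.take_along_axis(mu, order, axis=1)
w_relab = np.take_along_axis(w, order, axis=1)
print("relabeled posterior means:", mu_relab.mean(axis=0))
print("relabeled posterior weights:", w_relab.mean(axis=0))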
We develop a Bayesian variable selection method, called SVEN, based on a hierarchical Gaussian linear model with priors placed on the regression coefficients as well as on the model space. Sparsity is achieved by using degenerate spike priors on inactive variables, whereas Gaussian slab priors are placed on the coefficients of the important predictors, making the posterior probability of a model available in explicit form (up to a normalizing constant). Strong model selection consistency is shown to be attained when the number of predictors grows nearly exponentially with the sample size, and even when the norm of the mean effects due solely to the unimportant variables diverges, which is a novel attractive feature. An appealing byproduct of SVEN is the construction of novel model-weight-adjusted prediction intervals. Embedding a unique model-based screening and using fast Cholesky updates, SVEN produces a highly scalable computational framework to explore gigantic model spaces, rapidly identify the regions of high posterior probability, and make fast inference and predictions. A temperature schedule guided by our model selection consistency derivations is used to further mitigate multimodal posterior distributions. The performance of SVEN is demonstrated through a number of simulation experiments and a real data example from a genome-wide association study with over half a million markers.
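The sketch below is not SVEN and uses neither spike-and-slab priors nor Cholesky updates; it is only a minimal stand-in for the underlying idea of scoring a (tiny, fully enumerable) model space and converting scores into approximate posterior model probabilities, here via the standard BIC approximation under a uniform prior over models. The data and helper names are invented for the example.

import itertools
import numpy as np

# Toy data: 4 candidate predictors, only x0 and x2 truly matter.
rng = np.random.default_rng(3)
n, p = 100, 4
X = rng.normal(size=(n, p))
y = 1.5 * X[:, 0] - 2.0 * X[:, 2] + rng.normal(scale=1.0, size=n)

def bic_gaussian(y, Xm):
    """BIC of an ordinary least squares fit (intercept included)."""
    n = len(y)
    Z = np.column_stack([np.ones(n), Xm]) if Xm.shape[1] else np.ones((n, 1))
    beta, *_ = np.linalg.lstsq(Z, y, rcond=None)
    rss = np.sum((y - Z @ beta) ** 2)
    k = Z.shape[1] + 1                     # coefficients + error variance
    return n * np.log(rss / n) + k * np.log(n)

# Enumerate all 2^p models and turn BICs into approximate posterior probabilities.
models = list(itertools.chain.from_iterable(
    itertools.combinations(range(p), r) for r in range(p + 1)))
bics = np.array([bic_gaussian(y, X[:, list(m)]) for m in models])
weights = np.exp(-0.5 * (bics - bics.min()))
probs = weights / weights.sum()
best = models[int(np.argmax(probs))]
print("top model:", best, "approx. posterior prob:", round(float(probs.max()), 3))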
Edouard Ollier (2021)
Nonlinear mixed effects models are hidden variable models that are widely used in many fields such as pharmacometrics. In such models, the distribution characteristics of the hidden variables can be specified by including several parameters, such as covariates or correlations, which must be selected. Recent developments in pharmacogenomics have brought average- to high-dimensional problems to the field of nonlinear mixed effects modelling, for which standard covariate selection techniques like stepwise methods are not well suited. This work proposes to select covariates and correlation parameters using a penalized likelihood approach. The penalized likelihood problem is solved using a stochastic proximal gradient algorithm to avoid inner-outer iterations. The speed of convergence of the proximal gradient algorithm is improved by the use of component-wise adaptive gradient step sizes. The practical implementation and tuning of the proximal gradient algorithm are explored using simulations. Calibration of the regularization parameters is performed by minimizing the Bayesian Information Criterion using particle swarm optimization, a zero-order optimization procedure. The use of warm restarts and parallelization significantly reduces computing time. The performance of the proposed method compared to the traditional grid search strategy is explored using simulated data. Finally, an application to real data from two pharmacokinetics studies is provided, one studying an antifibrinolytic and the other an antibiotic.
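The following toy sketch shows only the generic proximal-gradient idea with a lasso-type penalty (a gradient step on the smooth part followed by soft-thresholding) on a plain linear model; it is an assumption-laden stand-in, not the stochastic, component-wise adaptive algorithm of the paper, and all names below are illustrative.

import numpy as np

def soft_threshold(z, t):
    """Proximal operator of t * ||.||_1 (elementwise soft-thresholding)."""
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def proximal_gradient_lasso(X, y, lam, step, n_iter=500):
    """Minimize 0.5 * ||y - X beta||^2 / n + lam * ||beta||_1."""
    n, p = X.shape
    beta = np.zeros(p)
    for _ in range(n_iter):
        grad = X.T @ (X @ beta - y) / n                         # gradient of the smooth part
        beta = soft_threshold(beta - step * grad, step * lam)   # proximal step
    return beta

# Toy example: only the first two coefficients are truly nonzero.
rng = np.random.default_rng(2)
X = rng.normal(size=(200, 10))
beta_true = np.array([2.0, -1.5] + [0.0] * 8)
y = X @ beta_true + rng.normal(scale=0.5, size=200)

print(proximal_gradient_lasso(X, y, lam=0.1, step=0.05))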
The aim of this paper is to present a mixture composite regression model for claim severity modelling. Claim severity modelling poses several challenges, such as multimodality, heavy-tailedness, and systematic effects in the data. We tackle this modelling problem by studying a mixture composite regression model for the simultaneous modelling of attritional and large claims, and for considering systematic effects in both the mixture components and the mixing probabilities. For model fitting, we present a group-fused regularization approach that allows us to select the explanatory variables that significantly impact the mixing probabilities and the different mixture components, respectively. We develop an asymptotic theory for this regularized estimation approach, and fitting is performed using a novel Generalized Expectation-Maximization algorithm. We exemplify our approach on a real motor insurance data set.
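This is not the paper's mixture composite regression model and omits covariates and the group-fused penalty entirely; it is only a bare-bones EM for a two-component mixture fitted on the log scale, to illustrate the E- and M-steps that a Generalized EM algorithm builds on. The simulated "claim severities" and starting values are assumptions of the example.

import numpy as np

rng = np.random.default_rng(4)
# Toy claim severities: a bulk component and a heavier, larger component.
x = np.concatenate([rng.lognormal(1.0, 0.3, 800), rng.lognormal(3.0, 0.5, 200)])
z = np.log(x)  # work on the log scale, where each component is Gaussian

# EM for a two-component univariate Gaussian mixture.
w, mu, sigma = np.array([0.5, 0.5]), np.array([0.0, 2.0]), np.array([1.0, 1.0])
for _ in range(200):
    # E-step: posterior component probabilities for every observation.
    dens = np.exp(-0.5 * ((z[:, None] - mu) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))
    resp = w * dens
    resp /= resp.sum(axis=1, keepdims=True)
    # M-step: weighted updates of the mixing weights, means, and variances.
    nk = resp.sum(axis=0)
    w = nk / len(z)
    mu = (resp * z[:, None]).sum(axis=0) / nk
    sigma = np.sqrt((resp * (z[:, None] - mu) ** 2).sum(axis=0) / nk)

print("weights:", w.round(3), "means:", mu.round(3), "sds:", sigma.round(3))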
Handling missing values plays an important role in the analysis of survival data, especially data marked by a cure fraction. In this paper, we discuss the properties and implementation of stochastic approximations to the expectation-maximization (EM) algorithm, namely the stochastic EM (SEM) algorithm, to obtain maximum likelihood (ML) type estimates in situations where missing data arise naturally due to right censoring and a proportion of individuals are immune to the event of interest. A flexible family of three-parameter exponentiated-Weibull (EW) distributions is assumed to characterize the lifetimes of the non-immune individuals, as it accommodates both monotone (increasing and decreasing) and non-monotone (unimodal and bathtub) hazard functions. To evaluate the performance of the SEM algorithm, an extensive simulation study is carried out under various parameter settings. Using a likelihood ratio test, we also carry out model discrimination within the EW family of distributions. Furthermore, we study the robustness of the SEM algorithm with respect to outliers and algorithm starting values. A few scenarios where the SEM algorithm outperforms the well-studied EM algorithm are also examined in the given context. For further demonstration, real survival data on cutaneous melanoma are analyzed using the proposed cure rate model with an EW lifetime distribution and the proposed estimation technique. Through these data, we illustrate the applicability of the likelihood ratio test for rejecting several well-known lifetime distributions that are nested within the wider class of EW distributions.
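For reference, one standard parameterization of the three-parameter exponentiated-Weibull family mentioned above is sketched below; the symbols are illustrative and not necessarily the paper's notation.

F(t;\alpha,k,\sigma) = \left[1 - \exp\!\left\{-(t/\sigma)^{k}\right\}\right]^{\alpha}, \qquad t > 0,\quad \alpha, k, \sigma > 0.

Setting \alpha = 1 recovers the ordinary Weibull distribution, and the interplay between the two shape parameters \alpha and k is what allows both monotone (increasing, decreasing) and non-monotone (unimodal, bathtub) hazard shapes.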