
Flexible Bayesian Modeling of Counts: Constructing Penalized Complexity Priors

Posted by Mahsa Nadifar
Publication date: 2021
Research field: Mathematical Statistics
Paper language: English
Author: Mahsa Nadifar





Many data, particularly in medicine and disease mapping, are counts. The under- or overdispersion frequently present in count data undermines the performance of the classical Poisson model. To account for this problem, in this paper we introduce a new Bayesian structured additive regression model, called the gamma count model, with enough flexibility to model dispersion. Setting suitable prior distributions on the model parameters is a crucial issue in Bayesian statistics, as these priors characterize our uncertainty about the parameters. Relying on a recently proposed class of penalized complexity priors, motivated by a general set of construction principles, we derive the prior structure. The model can be formulated as a latent Gaussian model, and consequently we can carry out fast computation using the integrated nested Laplace approximation method. We investigate the proposed methodology in a simulation study. Different appropriate prior distributions are examined to provide a reasonable sensitivity analysis. To illustrate the applicability of the proposed model, we analyze two real-world data sets, related to larynx cancer mortality in Germany and to the handball Champions League.
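As a rough sketch of the general penalized complexity (PC) construction the abstract relies on (the generic principle of Simpson et al., not the paper's specific derivation for the gamma-count dispersion parameter), a flexibility parameter $\xi$ with base model at $\xi = 0$ is penalized through its Kullback-Leibler distance to that base model:

$$ d(\xi) = \sqrt{2\,\mathrm{KLD}\big(f(\cdot \mid \xi)\,\|\,f(\cdot \mid \xi = 0)\big)}, \qquad \pi(\xi) = \lambda\, e^{-\lambda\, d(\xi)} \left| \frac{\partial d(\xi)}{\partial \xi} \right|, $$

where the rate $\lambda$ is calibrated through a user-specified tail statement of the form $P\big(Q(\xi) > U\big) = \alpha$ for some interpretable transformation $Q$.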


Read also

This paper develops Bayesian sample size formulae for experiments comparing two groups. We assume the experimental data will be analysed in the Bayesian framework, where pre-experimental information from multiple sources can be represented by robust priors. In particular, such robust priors account for preliminary beliefs about the pairwise commensurability between parameters that underpin the historical and new experiments, to permit flexible borrowing of information. Averaged over the probability space of the new experimental data, appropriate sample sizes are found according to criteria that control certain aspects of the posterior distribution, such as the coverage probability or the length of a defined density region. Our Bayesian methodology can be applied to circumstances where the common variance in the new experiment is known or unknown. Exact solutions are available for most of the criteria considered for Bayesian sample size determination, while a search procedure is described for cases with no closed-form expressions. We illustrate the application of our Bayesian sample size formulae in the setting of designing a clinical trial. Hypothetical data examples, motivated by a rare-disease trial with elicitation of expert prior opinion, and a comprehensive performance evaluation of the proposed methodology are presented.
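As a hedged illustration of the type of criterion this abstract refers to (the paper's exact formulae may differ), an average coverage criterion selects the smallest sample size $n$ for which the expected posterior coverage of a credible region $C_l(y)$ of fixed length $l$ reaches a target level $1-\alpha$, the expectation being taken over the prior predictive distribution $m_n(y)$ of the new data:

$$ n^{*} = \min \Big\{ n : \mathbb{E}_{\,y \sim m_n} \big[ \Pr\big(\theta \in C_l(y) \mid y\big) \big] \ge 1 - \alpha \Big\}. $$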
Modeling correlation (and covariance) matrices can be challenging due to the positive-definiteness constraint and potential high dimensionality. Our approach is to decompose the covariance matrix into the correlation and variance matrices and to propose a novel Bayesian framework based on modeling the correlations as products of unit vectors. By specifying a wide range of distributions on a sphere (e.g. the squared-Dirichlet distribution), the proposed approach induces flexible prior distributions for covariance matrices (that go beyond the commonly used inverse-Wishart prior). For modeling real-life spatio-temporal processes with complex dependence structures, we extend our method to dynamic cases and introduce unit-vector Gaussian process priors in order to capture the evolution of correlation among components of a multivariate time series. To handle the intractability of the resulting posterior, we introduce the adaptive $\Delta$-Spherical Hamiltonian Monte Carlo. We demonstrate the validity and flexibility of our proposed framework in a simulation study of periodic processes and an analysis of rats' local field potential activity in a complex sequence memory task.
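A minimal sketch of the decomposition this abstract describes, with notation assumed here rather than taken from the paper: the covariance is split into scales and correlations, and each correlation entry is written as an inner product of unit vectors so that positive semi-definiteness and a unit diagonal hold by construction:

$$ \Sigma = \operatorname{diag}(\sigma)\, R \,\operatorname{diag}(\sigma), \qquad R_{ij} = u_i^{\top} u_j, \quad u_i \in \mathbb{S}^{p-1}, $$

so any distribution placed on the sphere for the $u_i$ induces a prior on the correlation matrix $R$.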
In employing spatial regression models for counts, two issues usually arise. First, ignoring the inherent collinearity between covariates and the spatial effect can lead to misleading causal inferences. Second, real count data usually exhibit over- or under-dispersion, where the classical Poisson model is not appropriate. We propose a flexible Bayesian hierarchical modeling approach that joins non-confounding spatial methodology with a newly reconsidered dispersed count model from renewal theory to address both issues. Specifically, we extend the methodology for analyzing spatial count data based on the gamma distribution assumption for waiting times. The model can be formulated as a latent Gaussian model, and consequently fast computation can be carried out using the integrated nested Laplace approximation method. We also examine different popular approaches for handling spatial confounding and compare their performance in the presence of dispersion. We use the proposed methodology to analyze a clinical dataset related to stomach cancer incidence in Slovenia and perform a simulation study to better understand the merits of the proposed approach.
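One popular device for the spatial-confounding issue mentioned above is restricted (orthogonalized) spatial regression; the following is a generic sketch, not necessarily the exact variant compared in the paper. The spatial effect is projected onto the orthogonal complement of the covariate space so that it cannot compete with the fixed effects:

$$ \eta = X\beta + P^{\perp}\phi, \qquad P^{\perp} = I - X\big(X^{\top} X\big)^{-1} X^{\top}, $$

where $\phi$ is the latent spatial field and the linear predictor $\eta$ enters the dispersed count likelihood through the latent Gaussian model.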
Ding Xiang, Galin L. Jones (2017)
We consider penalized regression models under a unified framework where the particular method is determined by the form of the penalty term. We propose a fully Bayesian approach that incorporates both sparse and dense settings and show how to use a type of model averaging approach to eliminate the nuisance penalty parameters and perform inference through the marginal posterior distribution of the regression coefficients. We establish tail robustness of the resulting estimator as well as conditional and marginal posterior consistency. We develop an efficient component-wise Markov chain Monte Carlo algorithm for sampling. Numerical results show that the method tends to select the optimal penalty, performs well in both variable selection and prediction, and is comparable to, and often better than, alternative methods. Both simulated and real data examples are provided.
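The unified penalized-regression view in this abstract rests on the standard correspondence between a penalty and a prior; written generically (the symbols here are illustrative, not the authors' notation, and the penalty parameter absorbs a scaling by the error variance):

$$ \hat{\beta} = \arg\min_{\beta} \Big\{ \|y - X\beta\|_2^2 + \lambda\, p(\beta) \Big\} \quad \Longleftrightarrow \quad \pi(\beta \mid \lambda) \propto \exp\{-\lambda\, p(\beta)\}, $$

so that, for instance, the lasso penalty $p(\beta) = \|\beta\|_1$ corresponds to independent Laplace priors and the ridge penalty $p(\beta) = \|\beta\|_2^2$ to Gaussian priors, with the penalized estimate arising as the posterior mode.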
Libo Sun, Chihoon Lee, et al. (2013)
We consider the problem of estimating parameters of stochastic differential equations (SDEs) with discrete-time observations that are either completely or partially observed. The transition density between two observations is generally unknown. We propose an importance sampling approach with an auxiliary parameter when the transition density is unknown. We embed the auxiliary importance sampler in a penalized maximum likelihood framework, which produces more accurate and computationally efficient parameter estimates. Simulation studies in three different models illustrate promising improvements of the new penalized simulated maximum likelihood method. The new procedure is designed for the challenging case when some state variables are unobserved and, moreover, observed states are sparse over time, which commonly arises in ecological studies. We apply this new approach to two epidemics of chronic wasting disease in mule deer.
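The unknown transition density in this setting is typically replaced by a Monte Carlo estimate; the following is a generic importance-sampling sketch under assumed notation, not the authors' specific auxiliary-parameter construction:

$$ \widehat{p}\big(y_{t+1} \mid y_t; \theta\big) = \frac{1}{N} \sum_{i=1}^{N} \frac{p\big(y_{t+1}, x^{(i)} \mid y_t; \theta\big)}{q\big(x^{(i)} \mid y_t, y_{t+1}; \theta\big)}, \qquad x^{(i)} \sim q\big(\cdot \mid y_t, y_{t+1}; \theta\big), $$

where the $x^{(i)}$ are imputed latent paths between observations and $q$ is the importance (proposal) density; the resulting simulated log-likelihood is then maximized with a penalty term added.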