The power prior is a popular tool for constructing informative prior distributions based on historical data. The method consists of raising the historical likelihood to a discounting factor in order to control the amount of information borrowed from the historical data. It is customary to perform a sensitivity analysis, reporting results for a range of values of the discounting factor. However, one often wishes to assign it a prior distribution and estimate it jointly with the model parameters, which in turn necessitates the computation of a normalising constant. In this paper we are concerned with how to recycle computations from a sensitivity analysis in order to approximately sample from the joint posterior of the parameters and the discounting factor. We first establish a few important properties of the normalising constant and then use these results to motivate a bisection-type algorithm for computing it on a fixed budget of evaluations. We give a large array of illustrations and discuss cases where the normalising constant is known in closed form and where it is not. We show that the proposed method produces approximate posteriors that are very close to the exact distributions when those are available, and posteriors that cover the data-generating parameters with higher probability in the intractable case. Our results show that proper inclusion of the normalising constant is crucial to the correct quantification of uncertainty, and that the proposed method is an accurate and easy-to-implement technique for including this normalisation, applicable to a large class of models. Key-words: Doubly-intractable; elicitation; historical data; normalisation; power prior; sensitivity analysis.
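For reference, the construction described above is the power prior: with historical data $D_0$, initial prior $\pi_0(\theta)$ and discounting factor $a_0 \in [0,1]$, the normalised power prior and the joint posterior it induces take the standard form below (notation from the general power-prior literature, which may differ from the paper's):
\[
\pi(\theta \mid D_0, a_0) = \frac{L(\theta \mid D_0)^{a_0}\,\pi_0(\theta)}{c(a_0)},
\qquad
c(a_0) = \int L(\theta \mid D_0)^{a_0}\,\pi_0(\theta)\,\mathrm{d}\theta,
\]
\[
\pi(\theta, a_0 \mid D, D_0) \propto L(\theta \mid D)\,\frac{L(\theta \mid D_0)^{a_0}\,\pi_0(\theta)}{c(a_0)}\,\pi(a_0),
\]
so estimating $a_0$ requires $c(a_0)$ at every value the sampler visits. A minimal sketch of the recycling idea, in a Beta-Bernoulli setting where $c(a_0)$ is available in closed form: evaluate $\log c(a_0)$ on a fixed budget of points and interpolate. The uniform grid, the Beta(1, 1) prior and the data values below are illustrative assumptions, not taken from the paper, and a plain grid stands in for the paper's bisection-type rule.

```python
import numpy as np
from scipy.special import betaln
from scipy.interpolate import PchipInterpolator

# Illustrative historical Bernoulli data and a Beta(alpha, beta) initial
# prior; these values are hypothetical, not taken from the paper.
y0, n0 = 30, 50          # historical successes and trials
alpha, beta = 1.0, 1.0   # Beta(1, 1) initial prior

def log_c(a0):
    """Exact log normalising constant in the Beta-Bernoulli case:
    c(a0) = B(a0*y0 + alpha, a0*(n0 - y0) + beta) / B(alpha, beta)."""
    return betaln(a0 * y0 + alpha, a0 * (n0 - y0) + beta) - betaln(alpha, beta)

# "Sensitivity analysis" budget: evaluate log c(a0) at a fixed set of
# discounting factors (a uniform grid here; an adaptive bisection-type
# rule would place points where the curve bends most).
grid = np.linspace(0.0, 1.0, 21)
approx_log_c = PchipInterpolator(grid, log_c(grid))

# The interpolant can now replace log c(a0) inside an MCMC sampler that
# targets the joint posterior of (theta, a0). Here we only check its
# accuracy against the closed form.
rng = np.random.default_rng(1)
a_test = rng.uniform(0.0, 1.0, 1000)
print("max |error| in log c(a0):",
      np.max(np.abs(approx_log_c(a_test) - log_c(a_test))))
```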
Background and objective: The stepped wedge cluster randomized trial is a study design increasingly used for public health intervention evaluations. Most previous literature focuses on power calculations for this particular type of cluster randomized trial…
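For context, most stepped wedge power calculations build on the linear mixed model of Hussey and Hughes (2007); a sketch in our notation, which this paper does not necessarily use:
\[
Y_{ijk} = \mu + \alpha_i + \beta_j + X_{ij}\theta + e_{ijk},
\qquad
\alpha_i \sim \mathcal{N}(0, \tau^2),
\quad
e_{ijk} \sim \mathcal{N}(0, \sigma^2),
\]
with $\alpha_i$ a random cluster effect, $\beta_j$ a fixed period effect, $X_{ij}$ the treatment indicator for cluster $i$ in period $j$, and $\theta$ the intervention effect; power for a two-sided Wald test is then approximately $\Phi\big(|\theta|/\sqrt{\operatorname{Var}(\hat{\theta})} - z_{1-\alpha/2}\big)$.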
The relationship between short-term exposure to air pollution and mortality or morbidity has been the subject of much recent research, in which the standard method of analysis uses Poisson linear or additive models. In this paper we use a Bayesian dynamic…
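The standard analysis referred to above is typically a Poisson log-linear (or additive) time-series regression; a generic sketch, in notation of our choosing:
\[
Y_t \sim \mathrm{Poisson}(\mu_t),
\qquad
\log \mu_t = \beta_0 + \beta_1 x_t + s(t) + \gamma^{\top} z_t,
\]
where $Y_t$ is the daily count of deaths or admissions, $x_t$ the pollutant concentration, $s(t)$ a smooth function of time absorbing trend and seasonality, and $z_t$ further confounders such as temperature.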
When considering a genetic disease with variable age at onset (e.g. diabetes, familial amyloid neuropathy, cancers), computing the individual risk of the disease based on family history (FH) is of critical interest both for clinicians and patients…
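As a hedged illustration of the quantity involved (our notation; the paper's model may differ): with carrier genotype $g$, age-at-onset penetrance $F_g$, and a posterior over genotypes given the family history, the residual risk for an individual unaffected at age $t$ is
\[
P(\text{onset by } t+s \mid \text{unaffected at } t, \mathrm{FH})
= \sum_{g} P(g \mid \mathrm{FH}, \text{unaffected at } t)\,
\frac{F_g(t+s) - F_g(t)}{1 - F_g(t)}.
\]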
ROC analyses are considered under a variety of assumptions concerning the distributions of a measurement $X$ in two populations. These include the binormal model, as well as nonparametric models where little is assumed about the form of the distributions.
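For reference, under the binormal model one takes $X \sim N(\mu_0, \sigma_0^2)$ in the healthy population and $X \sim N(\mu_1, \sigma_1^2)$ in the diseased population, which yields the classical form
\[
\mathrm{ROC}(t) = \Phi\big(a + b\,\Phi^{-1}(t)\big),
\qquad
a = \frac{\mu_1 - \mu_0}{\sigma_1},
\quad
b = \frac{\sigma_0}{\sigma_1},
\qquad
\mathrm{AUC} = \Phi\!\Big(\frac{a}{\sqrt{1 + b^2}}\Big).
\]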
The ability to identify time periods when individuals are most susceptible to exposures, as well as the biological mechanisms through which these exposures act, is of great public health interest. Growing evidence supports an association between prenatal…
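Identifying such windows of susceptibility is commonly cast as a distributed lag model over, say, weekly exposures; a generic sketch in our notation, not necessarily the paper's:
\[
g\big(E[Y_i]\big) = \alpha + \sum_{t=1}^{T} \beta_t\, x_{i,t} + \gamma^{\top} z_i,
\]
where $x_{i,t}$ is the exposure in week $t$ of pregnancy, the lag curve $\beta_t$ is smoothed across $t$, and weeks where $\beta_t$ departs from zero indicate sensitive windows.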