
Sets of Priors Reflecting Prior-Data Conflict and Agreement

Added by Gero Walter
Publication date: 2016
Language: English





In Bayesian statistics, the choice of prior distribution is often debatable, especially if prior knowledge is limited or data are scarce. In imprecise probability, sets of priors are used to accurately model and reflect prior knowledge. This has the advantage that prior-data conflict sensitivity can be modelled: ranges of posterior inferences should be larger when prior and data are in conflict. We propose a new method for generating prior sets which, in addition to prior-data conflict sensitivity, allows strong prior-data agreement to be reflected by decreased posterior imprecision.
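
To make the idea concrete, here is a minimal sketch of a set of conjugate Beta priors for a Bernoulli parameter in the common (n0, y0) parameterisation (prior strength and prior mean). The parameter ranges and data below are illustrative assumptions, not values from the paper; the point is only that the range of posterior means stays narrow when data agree with the prior set and widens under prior-data conflict.

```python
import numpy as np

def posterior_mean_range(y0_range, n0_range, s, n):
    """Range of posterior expectations E[theta | data] over a set of
    conjugate Beta priors in the (n0, y0) parameterisation:
    prior mean y0, prior strength n0, i.e. Beta(n0*y0, n0*(1 - y0)).
    s = number of successes, n = number of trials."""
    means = []
    for y0 in np.linspace(*y0_range, 50):
        for n0 in np.linspace(*n0_range, 50):
            means.append((n0 * y0 + s) / (n0 + n))  # conjugate update
    return min(means), max(means)

# Illustrative prior set: prior mean between 0.7 and 0.8, strength 4..8.
y0_range, n0_range = (0.7, 0.8), (4, 8)

# Data in agreement with the prior (75% successes) vs. in conflict (10%).
agree = posterior_mean_range(y0_range, n0_range, s=12, n=16)
conflict = posterior_mean_range(y0_range, n0_range, s=2, n=20)

print("agreement:", agree)     # narrow interval of posterior means
print("conflict: ", conflict)  # noticeably wider interval
```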



Related research

Any Bayesian analysis involves combining information represented through different model components, and when different sources of information are in conflict it is important to detect this. Here we consider checking for prior-data conflict in Bayesian models by expanding the prior used for the analysis into a larger family of priors, and considering a marginal likelihood score statistic for the expansion parameter. Consideration of different expansions can be informative about the nature of any conflict, and extensions to hierarchically specified priors and connections with other approaches to prior-data conflict checking are discussed. Implementation in complex situations is illustrated with two applications. The first concerns testing for the appropriateness of a LASSO penalty in shrinkage estimation of coefficients in linear regression. Our method is compared with a recent suggestion in the literature designed to be powerful against alternatives in the exponential power family, and we use this family as the prior expansion for constructing our check. A second application concerns a problem in quantum state estimation, where a multinomial model is considered with physical constraints on the model parameters. In this example, the usefulness of different prior expansions is demonstrated for obtaining checks which are sensitive to different aspects of the prior.
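
As a toy illustration of a score-based check (not the expansions used in that paper), the sketch below expands a Gaussian prior for a normal mean into the location family N(mu0 + lambda, tau^2) and evaluates the standardised score of the marginal likelihood at lambda = 0, which in this conjugate case reduces to (ybar - mu0) / sqrt(tau^2 + sigma^2/n). All numbers are illustrative assumptions.

```python
import numpy as np
from scipy.stats import norm

def prior_data_conflict_pvalue(y, mu0, tau, sigma):
    """Toy prior-data-conflict check: expand the prior N(mu0, tau^2) for
    the mean of N(theta, sigma^2) data to N(mu0 + lam, tau^2) and use the
    score of the marginal likelihood at lam = 0.  Standardised, this is
    (ybar - mu0) / sqrt(tau^2 + sigma^2 / n)."""
    n, ybar = len(y), np.mean(y)
    score = (ybar - mu0) / np.sqrt(tau**2 + sigma**2 / n)
    # Two-sided tail probability: small values flag prior-data conflict.
    return 2 * norm.sf(abs(score))

rng = np.random.default_rng(0)
y_ok = rng.normal(0.2, 1.0, size=30)    # roughly compatible with the prior
y_bad = rng.normal(3.0, 1.0, size=30)   # far from the prior mean
print(prior_data_conflict_pvalue(y_ok, mu0=0.0, tau=0.5, sigma=1.0))
print(prior_data_conflict_pvalue(y_bad, mu0=0.0, tau=0.5, sigma=1.0))
```
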
In reliability engineering, data about failure events are often scarce. To arrive at meaningful estimates for the reliability of a system, it is therefore often necessary to also include expert information in the analysis, which is straightforward in the Bayesian approach by using an informative prior distribution. A problem called prior-data conflict can then arise: observed data seem very surprising from the viewpoint of the prior, i.e., the information from the data is in conflict with prior assumptions. Models based on conjugate priors can be insensitive to prior-data conflict, in the sense that the spread of the posterior distribution does not increase in case of such a conflict, thus conveying a false sense of certainty. An approach to mitigate this issue is presented by considering sets of prior distributions to model limited knowledge on Weibull-distributed component lifetimes, treating systems with arbitrary layout using the survival signature. This approach can be seen as a robust Bayesian procedure or imprecise probability method that reflects surprisingly early or late component failures by wider system reliability bounds.
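
A minimal sketch of the survival-signature computation for a single component type is given below, with purely illustrative bounds on the component survival probability standing in for bounds that would come from a set of Weibull priors. For a coherent system the signature expression is monotone in the component survival probability, so such bounds translate directly into system reliability bounds.

```python
from math import comb

def system_reliability(phi, p):
    """Survival-signature reliability for a system with one component type:
    phi[l] = P(system works | exactly l of m components work),
    p = probability that a single component survives the mission time."""
    m = len(phi) - 1
    return sum(phi[l] * comb(m, l) * p**l * (1 - p)**(m - l)
               for l in range(m + 1))

# Illustrative 2-out-of-3 system: works iff at least 2 of 3 components work.
phi = [0.0, 0.0, 1.0, 1.0]

# Suppose a set of Weibull priors yields bounds [p_lo, p_hi] on the
# component survival probability at the mission time (numbers illustrative).
p_lo, p_hi = 0.55, 0.80

# Monotonicity in p maps these bounds to system reliability bounds.
print(system_reliability(phi, p_lo), system_reliability(phi, p_hi))
```
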
This paper proposes a new methodology for performing Bayesian inference in imaging inverse problems where the prior knowledge is available in the form of training data. Following the manifold hypothesis and adopting a generative modelling approach, we construct a data-driven prior that is supported on a sub-manifold of the ambient space, which we can learn from the training data by using a variational autoencoder or a generative adversarial network. We establish the existence and well-posedness of the associated posterior distribution and posterior moments under easily verifiable conditions, providing a rigorous underpinning for Bayesian estimators and uncertainty quantification analyses. Bayesian computation is performed by using a parallel tempered version of the preconditioned Crank-Nicolson algorithm on the manifold, which is shown to be ergodic and robust to the non-convex nature of these data-driven models. In addition to point estimators and uncertainty quantification analyses, we derive a model misspecification test to automatically detect situations where the data-driven prior is unreliable, and explain how to identify the dimension of the latent space directly from the training data. The proposed approach is illustrated with a range of experiments with the MNIST dataset, where it outperforms alternative image reconstruction approaches from the state of the art. A model accuracy analysis suggests that the Bayesian probabilities reported by the data-driven models are also remarkably accurate under a frequentist definition of probability.
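
The sketch below shows only the core preconditioned Crank-Nicolson (pCN) step on the latent space, using a toy linear-plus-tanh "decoder" as a stand-in for a trained VAE or GAN decoder and omitting the parallel tempering used in the paper; dimensions and noise levels are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy stand-in for a trained decoder: maps a low-dimensional latent z to
# image space.  In the paper this would be a learned generative model.
d_latent, d_image = 4, 32
W = rng.normal(size=(d_image, d_latent))
decoder = lambda z: np.tanh(W @ z)

# Observation model: noisy measurement of the decoded image.
sigma = 0.1
x_true = decoder(rng.normal(size=d_latent))
y = x_true + sigma * rng.normal(size=d_image)

def log_likelihood(z):
    r = y - decoder(z)
    return -0.5 * np.dot(r, r) / sigma**2

def pcn(n_iter=5000, beta=0.2):
    """pCN sampler targeting the posterior over z under a standard
    Gaussian latent prior: only the likelihood enters the accept ratio."""
    z = rng.normal(size=d_latent)
    ll = log_likelihood(z)
    samples = []
    for _ in range(n_iter):
        z_prop = np.sqrt(1 - beta**2) * z + beta * rng.normal(size=d_latent)
        ll_prop = log_likelihood(z_prop)
        if np.log(rng.uniform()) < ll_prop - ll:
            z, ll = z_prop, ll_prop
        samples.append(z)
    return np.array(samples)

samples = pcn()
# Decode the mean latent (after burn-in) as a crude point estimate.
point_estimate = decoder(samples[1000:].mean(axis=0))
```
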
This paper develops Bayesian sample size formulae for experiments comparing two groups. We assume the experimental data will be analysed in the Bayesian framework, where pre-experimental information from multiple sources can be incorporated into robust priors. In particular, such robust priors account for preliminary belief about the pairwise commensurability between parameters that underpin the historical and new experiments, to permit flexible borrowing of information. Averaged over the probability space of the new experimental data, appropriate sample sizes are found according to criteria that control certain aspects of the posterior distribution, such as the coverage probability or length of a defined density region. Our Bayesian methodology can be applied to circumstances where the common variance in the new experiment is known or unknown. Exact solutions are available for most of the criteria considered for Bayesian sample size determination, while a search procedure is described in cases for which there are no closed-form expressions. We illustrate the application of our Bayesian sample size formulae in the setting of designing a clinical trial. Hypothetical data examples, motivated by a rare-disease trial with elicitation of expert prior opinion, and a comprehensive performance evaluation of the proposed methodology are presented.
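
As a stripped-down illustration of a length-based criterion in the known-variance case (ignoring the robust priors and commensurability parameters of the paper), the sketch below searches for the smallest per-group sample size whose posterior credible interval for the treatment difference meets a target length; under a normal prior and known variance, this length does not depend on the observed data, so an exact search is possible.

```python
from math import sqrt
from scipy.stats import norm

def min_sample_size(tau, sigma, target_length, level=0.95, n_max=10_000):
    """Smallest per-group sample size n such that the posterior credible
    interval for the treatment difference has length <= target_length.
    Assumes known common variance sigma^2, equal group sizes, and a normal
    prior with standard deviation tau on the difference, so the posterior
    variance is (1/tau**2 + n/(2*sigma**2))**-1."""
    z = norm.ppf(0.5 + level / 2)
    for n in range(1, n_max + 1):
        post_sd = sqrt(1.0 / (1.0 / tau**2 + n / (2.0 * sigma**2)))
        if 2 * z * post_sd <= target_length:
            return n
    return None

# Illustrative numbers: vague prior (tau = 10), unit variance, and a
# requirement that the 95% interval be no longer than 0.5.
print(min_sample_size(tau=10.0, sigma=1.0, target_length=0.5))
```
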
Liyun Jiang, Lei Nie, Ying Yuan (2020)
Use of historical data and real-world evidence holds great potential to improve the efficiency of clinical trials. One major challenge is how to effectively borrow information from historical data while maintaining a reasonable type I error. We propose the elastic prior approach to address this challenge and achieve dynamic information borrowing. Unlike existing approaches, this method proactively controls the behavior of dynamic information borrowing and type I errors by incorporating the well-known concept of a clinically meaningful difference through an elastic function, defined as a monotonic function of a congruence measure between historical data and trial data. The elastic function is constructed to satisfy a set of information-borrowing constraints prespecified by researchers or regulatory agencies, such that the prior will borrow information when historical and trial data are congruent, but refrain from information borrowing when historical and trial data are incongruent. In doing so, the elastic prior improves power and reduces the risk of data dredging and bias. The elastic prior is information-borrowing consistent, i.e., it asymptotically controls type I and II errors at the nominal values when historical data and trial data are not congruent, a unique characteristic of the elastic prior approach. Our simulation study evaluating the finite-sample characteristics confirms that, compared to existing methods, the elastic prior has better type I error control and yields competitive or higher power.
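
A toy sketch of the idea follows, with an arbitrary logistic elastic function rather than one calibrated to prespecified borrowing constraints: a congruence measure between historical and current data is mapped to a borrowing weight that discounts the historical effective sample size in a conjugate normal update. All functional forms and constants are illustrative assumptions.

```python
import numpy as np

def elastic_weight(xbar_hist, xbar_new, se_diff, a=1.0, b=5.0):
    """Toy elastic function: a monotone map from a congruence measure
    (here the squared standardised difference between historical and
    current means) to a borrowing weight in (0, 1).  The logistic form
    and constants a, b are illustrative, not those of the paper."""
    congruence = ((xbar_hist - xbar_new) / se_diff) ** 2
    return 1.0 / (1.0 + np.exp(a * (congruence - b)))

def elastic_posterior(xbar_hist, n_hist, xbar_new, n_new, sigma=1.0):
    """Normal conjugate update in which the historical effective sample
    size is discounted by the elastic weight."""
    se_diff = sigma * np.sqrt(1.0 / n_hist + 1.0 / n_new)
    w = elastic_weight(xbar_hist, xbar_new, se_diff)
    n_eff = w * n_hist  # discounted borrowing of historical information
    post_mean = (n_eff * xbar_hist + n_new * xbar_new) / (n_eff + n_new)
    post_var = sigma**2 / (n_eff + n_new)
    return post_mean, post_var, w

# Congruent historical data: nearly full borrowing.
print(elastic_posterior(xbar_hist=0.52, n_hist=200, xbar_new=0.50, n_new=100))
# Incongruent historical data: borrowing weight drops towards zero.
print(elastic_posterior(xbar_hist=1.50, n_hist=200, xbar_new=0.50, n_new=100))
```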