
Bayesian sample size determination using commensurate priors to leverage pre-experimental data

Posted by Haiyan Zheng
Publication date: 2020
Research field: Mathematical Statistics
Language: English

This paper develops Bayesian sample size formulae for experiments comparing two groups. We assume the experimental data will be analysed in the Bayesian framework, where pre-experimental information from multiple sources can be represented by robust priors. In particular, such robust priors account for preliminary beliefs about the pairwise commensurability between the parameters that underpin the historical and new experiments, permitting flexible borrowing of information. Averaging over the probability space of the new experimental data, appropriate sample sizes are found according to criteria that control certain aspects of the posterior distribution, such as the coverage probability or the length of a defined density region. Our Bayesian methodology applies whether the common variance in the new experiment is known or unknown. Exact solutions are available under most of the criteria considered for Bayesian sample size determination, and a search procedure is described for the cases with no closed-form expression. We illustrate the application of our Bayesian sample size formulae to the design of a clinical trial. Hypothetical data examples, motivated by a rare-disease trial with elicited expert prior opinion, and a comprehensive performance evaluation of the proposed methodology are presented.
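For intuition, here is a minimal sketch of the known-variance case under a length-based criterion. It assumes normal outcomes, a single historical estimate, and a plain normal commensurate prior (the paper's robust priors are mixtures over commensurability, which this sketch omits); the function name and all parameter values are illustrative, not the paper's implementation. With conjugate structure the posterior variance of the treatment effect is closed form, so the interval length is deterministic and the smallest adequate sample size can be found by direct search.

```python
import numpy as np
from scipy.stats import norm

def sample_size_alc(sigma2, s2_h, tau, target_len, alpha=0.05, n_max=10_000):
    """Smallest per-arm n whose (1 - alpha) credible interval for the
    treatment effect delta = mu1 - mu2 is no wider than target_len.

    Illustrative commensurate prior: delta | delta_h ~ N(delta_h, 1/tau),
    with the historical effect delta_h ~ N(m_h, s2_h), so marginally
    delta ~ N(m_h, s2_h + 1/tau).  With known outcome variance sigma2,
    the posterior variance of delta is closed form, hence no averaging
    over future data is needed for this particular criterion.
    """
    z = norm.ppf(1 - alpha / 2)
    prior_var = s2_h + 1.0 / tau                  # marginal prior variance of delta
    for n in range(2, n_max + 1):
        post_var = 1.0 / (1.0 / prior_var + n / (2.0 * sigma2))
        if 2.0 * z * np.sqrt(post_var) <= target_len:
            return n
    raise ValueError("no n <= n_max meets the target length")

# Strong commensurability (large tau) borrows more and needs fewer subjects.
print(sample_size_alc(sigma2=4.0, s2_h=0.1, tau=10.0, target_len=0.8))
print(sample_size_alc(sigma2=4.0, s2_h=0.1, tau=0.1, target_len=0.8))
```

The comparison at the end illustrates the borrowing mechanism: as the commensurability precision tau shrinks, the prior contributes less and the required sample size grows toward the no-borrowing answer.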




Read also

The development of a new diagnostic test ideally follows a sequence of stages which, amongst other aims, evaluate technical performance. These include an analytical validity study, a diagnostic accuracy study and an interventional clinical utility study. Current approaches to the design and analysis of the diagnostic accuracy study can suffer from prohibitively large sample sizes and interval estimates with undesirable properties. In this paper, we propose a novel Bayesian approach which takes advantage of information available from the analytical validity stage. We utilise assurance to calculate the required sample size based on the target width of a posterior probability interval, and can choose to use or disregard the data from the analytical validity study when subsequently inferring measures of test accuracy. Sensitivity analyses are performed to assess the robustness of the proposed sample size to the choice of prior, and prior-data conflict is evaluated by comparing the data to the prior predictive distributions. We illustrate the proposed approach using a motivating real-life application involving a diagnostic test for ventilator-associated pneumonia. Finally, we compare the properties of the proposed approach against commonly used alternatives. The results show that, by making better use of existing data from earlier studies, the assurance-based approach can not only reduce the required sample size compared to the alternatives, but also produce more reliable sample sizes for diagnostic accuracy studies.
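A hedged sketch of that assurance calculation for one accuracy measure (sensitivity), assuming conjugate Beta priors: a design prior generates plausible new-study data, while a separate analysis prior, which may or may not fold in the analytical-validity data, drives the posterior interval whose width the criterion controls. All names and prior values here are illustrative assumptions, not the paper's code.

```python
import numpy as np
from scipy.stats import beta

rng = np.random.default_rng(1)

def assurance(n, a_design, b_design, a_analysis, b_analysis,
              target_width, n_sims=5_000):
    """Probability, under the Beta(a_design, b_design) design prior, that
    a study of n diseased subjects yields a 95% posterior interval for
    sensitivity no wider than target_width.  Beta(a_analysis, b_analysis)
    is the analysis prior, which can carry the analytical-validity data
    forward or discard it."""
    theta = rng.beta(a_design, b_design, n_sims)   # plausible true sensitivities
    x = rng.binomial(n, theta)                     # simulated true positives
    lo = beta.ppf(0.025, a_analysis + x, b_analysis + n - x)
    hi = beta.ppf(0.975, a_analysis + x, b_analysis + n - x)
    return np.mean(hi - lo <= target_width)

# Smallest n with 80% assurance of an interval narrower than 0.2;
# an informative analysis prior (borrowed data) passes at a smaller n.
for n in range(20, 500, 10):
    if assurance(n, 20, 5, 20, 5, target_width=0.2) >= 0.8:
        print(n)
        break
```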
One of the main goals of sequential, multiple assignment, randomized trials (SMARTs) is to find the most efficacious of the design-embedded dynamic treatment regimes (DTRs). The analysis method known as multiple comparisons with the best (MCB) allows comparison between DTRs and identification of a set of optimal regimes in the frequentist setting for continuous outcomes, thereby directly addressing the main goal of a SMART. In this paper, we develop a Bayesian generalization of MCB for SMARTs with binary outcomes. Furthermore, we show how to choose the sample size so that the inferior embedded DTRs are screened out with a specified power. We compare log-odds between different DTRs using their exact distribution, without relying on asymptotic normality in either the analysis or the power calculation. We conduct extensive simulation studies under two SMART designs and illustrate our method's application to the Adaptive Treatment for Alcohol and Cocaine Dependence (ENGAGE) trial.
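As a rough illustration of the MCB idea only (not the paper's exact-distribution machinery), the sketch below draws independent Beta posteriors for each embedded DTR's response probability, deliberately ignoring the inverse-probability weighting a real SMART analysis requires, and keeps a DTR in the "best" set when zero lies inside the upper credible bound for its log-odds edge over the best rival. The data values are invented for the example.

```python
import numpy as np

rng = np.random.default_rng(2)

def mcb_best_set(successes, trials, level=0.95, n_draws=20_000):
    """Bayesian multiple comparisons with the best on the log-odds scale.
    Independent Beta(1, 1) posteriors per DTR are a simplifying assumption;
    a real SMART analysis must weight for the sequential randomisation.
    DTR j stays in the best set when its posterior edge over the best
    rival, eta_j - max_{k != j} eta_k, has an upper credible bound >= 0."""
    successes = np.asarray(successes)
    trials = np.asarray(trials)
    p = rng.beta(1 + successes, 1 + trials - successes,
                 size=(n_draws, len(trials)))
    eta = np.log(p / (1.0 - p))                    # posterior log-odds draws
    keep = []
    for j in range(len(trials)):
        rival = np.delete(eta, j, axis=1).max(axis=1)
        if np.quantile(eta[:, j] - rival, level) >= 0:
            keep.append(j)                         # cannot rule j out as best
    return keep

# DTRs 1 and 3 have the higher response rates; 0 and 2 should screen out.
print(mcb_best_set(successes=[30, 42, 28, 45], trials=[60, 60, 60, 60]))
```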
Manufacturers are required to demonstrate that products meet reliability targets. A typical way to achieve this is with reliability demonstration tests (RDTs), in which a number of products are put on test and the test is passed if a target reliability is achieved. There are various methods for determining the sample size for RDTs, typically based on the power of a hypothesis test following the RDT or on risk criteria. Bayesian risk criteria approaches can conflate the choice of sample size with the analysis to be undertaken once the test has been conducted, and rely on the specification of somewhat artificial acceptable and rejectable reliability levels. In this paper we offer an alternative approach to sample size determination based on the idea of assurance. This approach chooses the sample size to provide a certain probability that the RDT will result in a successful outcome. It separates the design and analysis of the RDT, allowing different priors for each. We develop the assurance approach for sample size calculations in RDTs for binomial and Weibull likelihoods and propose appropriate prior distributions for the design and analysis of the test. In each case, we illustrate the approach with an example based on real data.
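A minimal sketch for the binomial case, with an illustrative pass rule of my own choosing: the test passes when the analysis-prior posterior puts high probability on the failure rate lying below a target, and assurance is the design-prior probability of that event. The separation of the two priors, as the abstract describes, is the key design choice; everything else here is an assumption.

```python
import numpy as np
from scipy.stats import beta

rng = np.random.default_rng(3)

def rdt_assurance(n, a_design, b_design, a_analysis, b_analysis,
                  p_target=0.1, post_prob=0.9, n_sims=20_000):
    """Design-prior probability that an RDT of n units is passed, where
    passing means the analysis posterior assigns at least post_prob to
    the failure probability lying below p_target.  The Beta design and
    analysis priors are deliberately separate and need not agree."""
    p = rng.beta(a_design, b_design, n_sims)       # plausible true failure rates
    x = rng.binomial(n, p)                         # failures seen on test
    post = beta.cdf(p_target, a_analysis + x, b_analysis + n - x)
    return np.mean(post >= post_prob)

# An optimistic design prior (mean failure rate 0.05) against a vague
# analysis prior: assurance rises with n until the design prior's own
# doubt (its mass above p_target) caps it.
for n in (10, 20, 40, 80, 160):
    print(n, rdt_assurance(n, a_design=1, b_design=19,
                           a_analysis=1, b_analysis=1))
```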
We propose BaySize, a sample size calculator for phase I clinical trials using Bayesian models. BaySize applies the concept of effect size in dose finding, assuming the MTD is defined based on an equivalence interval. Leveraging a decision framework that involves composite hypotheses, BaySize utilizes two prior distributions, the fitting prior (for model fitting) and the sampling prior (for data generation), to conduct sample size calculation at a desired statistical power. Look-up tables are generated to facilitate practical applications. To our knowledge, BaySize is the first sample size tool that can be applied to a broad range of phase I trial designs.
Mahsa Nadifar (2021)
Many data, particularly in medicine and disease mapping, are counts. The under- or overdispersion often present in count data undermines the performance of the classical Poisson model. To account for this problem, in this paper we introduce a new Bayesian structured additive regression model, called the gamma count model, which is flexible enough to model dispersion. Setting suitable prior distributions on the model parameters is a central issue in Bayesian statistics, as the priors characterise our uncertainty about the parameters. Relying on a recently proposed class of penalised complexity priors, motivated by a general set of construction principles, we derive the prior structure. The model can be formulated as a latent Gaussian model, so fast computation can be carried out using the integrated nested Laplace approximation method. We investigate the proposed methodology in a simulation study, where different appropriate prior distributions are examined to provide a reasonable sensitivity analysis. To demonstrate the applicability of the proposed model, we analyse two real-world data sets related to larynx cancer mortality in Germany and the Handball Champions League.
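For concreteness, a small sketch of the gamma-count probability mass function itself, the likelihood this abstract builds on, written from its renewal-process definition. The parameterisation follows the common Winkelmann form and is an assumption about, not a copy of, the paper's notation.

```python
import numpy as np
from scipy.special import gammainc          # regularised lower incomplete gamma
from scipy.stats import poisson

def gamma_count_pmf(y, mu, alpha):
    """P(Y = y) for a gamma-count variable: the number of renewal events
    in unit time when waiting times are Gamma(shape alpha, rate alpha*mu),
    so P(Y >= y) = G(alpha*y, alpha*mu).  alpha = 1 recovers Poisson(mu);
    alpha > 1 gives under- and alpha < 1 overdispersion (mu is the
    Poisson-limit mean, not the exact mean when alpha != 1)."""
    y = np.atleast_1d(y).astype(float)
    upper = gammainc(alpha * (y + 1), alpha * mu)
    lower = np.where(y == 0, 1.0,                  # G(0, .) = 1 by convention
                     gammainc(alpha * np.maximum(y, 1.0), alpha * mu))
    return lower - upper

# Sanity check: alpha = 1 must match the Poisson pmf exactly.
ys = np.arange(12)
print(np.allclose(gamma_count_pmf(ys, 3.0, 1.0), poisson.pmf(ys, 3.0)))
# Smaller alpha spreads the same mu over a wider range of counts.
print(gamma_count_pmf(ys, 3.0, 0.5).round(3))
```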