
Assurance for sample size determination in reliability demonstration testing

Added by Dr Kevin Wilson
Publication date: 2019
Language: English





Manufacturers are required to demonstrate that their products meet reliability targets. A typical way to achieve this is with reliability demonstration tests (RDTs), in which a number of products are put on test and the test is passed if a target reliability is achieved. There are various methods for determining the sample size for RDTs, typically based on the power of a hypothesis test following the RDT or on risk criteria. Bayesian risk criteria approaches can conflate the choice of sample size with the analysis to be undertaken once the test has been conducted, and rely on the specification of somewhat artificial acceptable and rejectable reliability levels. In this paper we offer an alternative approach to sample size determination based on the idea of assurance. This approach chooses the sample size to provide a certain probability that the RDT will result in a successful outcome. It separates the design and analysis of the RDT, allowing different priors for each. We develop the assurance approach for sample size calculations in RDTs for binomial and Weibull likelihoods and propose appropriate prior distributions for the design and analysis of the test. In each case, we illustrate the approach with an example based on real data.
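The binomial case lends itself to a short Monte Carlo illustration. The sketch below is not the authors' implementation: the pass/fail rule, the Beta design and analysis priors, and all numerical settings are invented for illustration. It draws "true" failure probabilities from a design prior, simulates the RDT, and declares the test passed when the analysis-prior posterior puts enough mass below the target failure probability; the assurance is the fraction of simulated tests that pass.

import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

def assurance(n, n_sims=20_000,
              design_prior=(2, 18),   # Beta prior encoding design-stage beliefs about p
              analysis_prior=(1, 1),  # vague Beta prior for the post-test analysis
              p_target=0.10,          # maximum acceptable failure probability
              threshold=0.90):        # posterior certainty required to pass the RDT
    a_d, b_d = design_prior
    a_a, b_a = analysis_prior
    p = rng.beta(a_d, b_d, size=n_sims)   # plausible true failure probabilities
    x = rng.binomial(n, p)                # failures among the n units on test
    # Pass if the posterior probability that p < p_target exceeds the threshold
    post = stats.beta.cdf(p_target, a_a + x, b_a + n - x)
    return np.mean(post >= threshold)

# Scan candidate sample sizes and report the assurance of a successful test
for n in (20, 50, 100, 200):
    print(n, round(assurance(n), 3))

Because the design prior and the analysis prior enter at different points, the two can be specified independently, which is the separation of design and analysis that the abstract describes.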



Related research

The development of a new diagnostic test ideally follows a sequence of stages which, amongst other aims, evaluate technical performance. This includes an analytical validity study, a diagnostic accuracy study and an interventional clinical utility study. Current approaches to the design and analysis of the diagnostic accuracy study can suffer from prohibitively large sample sizes and interval estimates with undesirable properties. In this paper, we propose a novel Bayesian approach which takes advantage of information available from the analytical validity stage. We utilise assurance to calculate the required sample size based on the target width of a posterior probability interval and can choose to use or disregard the data from the analytical validity study when subsequently inferring measures of test accuracy. Sensitivity analyses are performed to assess the robustness of the proposed sample size to the choice of prior, and prior-data conflict is evaluated by comparing the data to the prior predictive distributions. We illustrate the proposed approach using a motivating real-life application involving a diagnostic test for ventilator associated pneumonia. Finally, we compare the properties of the proposed approach against commonly used alternatives. The results show that by making better use of existing data from earlier studies, the assurance-based approach can not only reduce the required sample size when compared to alternatives, but can also produce more reliable sample sizes for diagnostic accuracy studies.
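A hedged sketch of the interval-width criterion described above, under strong simplifying assumptions: a single accuracy measure (sensitivity), a Beta design prior standing in for the analytical-validity information, and a separate Beta analysis prior. All numbers are illustrative and not taken from the paper.

import numpy as np
from scipy import stats

rng = np.random.default_rng(2)

def width_assurance(n_diseased, target_width=0.10, n_sims=10_000,
                    design_prior=(40, 10),   # stand-in for analytical validity data
                    analysis_prior=(1, 1)):  # vague prior for the accuracy analysis
    a_d, b_d = design_prior
    a_a, b_a = analysis_prior
    sens = rng.beta(a_d, b_d, size=n_sims)   # plausible true sensitivities
    x = rng.binomial(n_diseased, sens)       # simulated true positives
    lo = stats.beta.ppf(0.025, a_a + x, b_a + n_diseased - x)
    hi = stats.beta.ppf(0.975, a_a + x, b_a + n_diseased - x)
    # Assurance: probability the 95% posterior interval is narrow enough
    return np.mean(hi - lo <= target_width)

for n in (100, 200, 400):
    print(n, round(width_assurance(n), 3))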
This paper develops Bayesian sample size formulae for experiments comparing two groups. We assume the experimental data will be analysed in the Bayesian framework, where pre-experimental information from multiple sources can be incorporated into robust priors. In particular, such robust priors account for preliminary beliefs about the pairwise commensurability between parameters that underpin the historical and new experiments, permitting flexible borrowing of information. Averaged over the probability space of the new experimental data, appropriate sample sizes are found according to criteria that control certain aspects of the posterior distribution, such as the coverage probability or the length of a defined density region. Our Bayesian methodology can be applied whether the common variance in the new experiment is known or unknown. Exact solutions are available for most of the criteria considered, while a search procedure is described for cases with no closed-form expression. We illustrate the application of our Bayesian sample size formulae in the setting of designing a clinical trial. Hypothetical data examples, motivated by a rare-disease trial with elicitation of expert prior opinion, and a comprehensive performance evaluation of the proposed methodology are presented.
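For intuition, the simplest special case of a length criterion has a closed form. With a vague prior, known common variance sigma^2 and n subjects per arm, the posterior for the mean difference is normal with variance 2*sigma^2/n, so a 95% interval of length at most l requires n >= 8*(z_{0.975}*sigma/l)^2. The fragment below implements only this textbook case; it does not reproduce the paper's robust-prior or unknown-variance criteria.

import math
from scipy import stats

def n_per_arm(sigma, max_length, level=0.95):
    # Smallest n per arm so the posterior interval for the mean
    # difference (vague prior, known variance) has length <= max_length
    z = stats.norm.ppf(0.5 + level / 2)
    return math.ceil(8 * (z * sigma / max_length) ** 2)

print(n_per_arm(sigma=1.0, max_length=0.5))  # 123 per arm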
One of the main goals of sequential, multiple assignment, randomized trials (SMARTs) is to find the most efficacious of the dynamic treatment regimes embedded in the design. The analysis method known as multiple comparisons with the best (MCB) allows comparison between dynamic treatment regimes and identification of a set of optimal regimes in the frequentist setting for continuous outcomes, thereby directly addressing the main goal of a SMART. In this paper, we develop a Bayesian generalization of MCB for SMARTs with binary outcomes. Furthermore, we show how to choose the sample size so that inferior embedded DTRs are screened out with a specified power. We compare log-odds between different DTRs using their exact distribution, without relying on asymptotic normality in either the analysis or the power calculation. We conduct extensive simulation studies under two SMART designs and illustrate our method's application to the Adaptive Treatment for Alcohol and Cocaine Dependence (ENGAGE) trial.
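To make the MCB idea concrete, here is a minimal posterior-sampling sketch for four embedded DTRs with binary outcomes. The counts, the Beta(1,1) priors, and the log-odds margin are all invented; the paper's exact-distribution treatment and SMART-specific estimation are not reproduced here.

import numpy as np

rng = np.random.default_rng(4)

successes = np.array([30, 38, 25, 41])   # illustrative successes per embedded DTR
trials = np.array([60, 60, 60, 60])      # illustrative patients per embedded DTR

# Posterior draws of each DTR's success probability under Beta(1,1) priors
draws = rng.beta(1 + successes[:, None],
                 1 + (trials - successes)[:, None],
                 size=(4, 20_000))
log_odds = np.log(draws / (1 - draws))

# MCB-style set: probability each DTR's log-odds is within a margin of the best
margin = 0.2
best = log_odds.max(axis=0)
print(((best - log_odds) <= margin).mean(axis=1))

DTRs whose probability of being within the margin of the best falls below a chosen cutoff would be screened out; the sample size is then tuned so that truly inferior DTRs are screened out with the desired power.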
We propose BaySize, a sample size calculator for phase I clinical trials using Bayesian models. BaySize applies the concept of effect size to dose finding, assuming the MTD is defined via an equivalence interval. Leveraging a decision framework that involves composite hypotheses, BaySize uses two prior distributions, the fitting prior (for model fitting) and the sampling prior (for data generation), to calculate the sample size that achieves a desired statistical power. Look-up tables are generated to facilitate practical applications. To our knowledge, BaySize is the first sample size tool that can be applied to a broad range of phase I trial designs.
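The fitting-prior/sampling-prior split can be shown in a deliberately stripped-down form: a single candidate dose, a Beta sampling prior generating true toxicity rates, and a Beta fitting prior used in the analysis. None of this is BaySize's actual model, decision framework, or look-up tables; it only illustrates the two-prior pattern with invented numbers.

import numpy as np
from scipy import stats

rng = np.random.default_rng(5)

def power_at_dose(n, n_sims=10_000,
                  sampling_prior=(6, 14),   # generates true toxicity rates
                  fitting_prior=(1, 1),     # used to analyse the trial data
                  interval=(0.25, 0.35)):   # equivalence interval around the target
    a_s, b_s = sampling_prior
    a_f, b_f = fitting_prior
    p = rng.beta(a_s, b_s, size=n_sims)   # true toxicity rate at the candidate dose
    x = rng.binomial(n, p)                # observed toxicities among n patients
    lo, hi = interval
    mass = (stats.beta.cdf(hi, a_f + x, b_f + n - x)
            - stats.beta.cdf(lo, a_f + x, b_f + n - x))
    in_interval = (p > lo) & (p < hi)     # doses that truly are the MTD
    # Power: probability of declaring the dose the MTD when it truly is
    return np.mean((mass > 0.5)[in_interval])

for n in (15, 30, 60):
    print(n, round(power_at_dose(n), 3))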
In many health domains, such as substance use, outcomes are often counts with an excessive number of zeros (EZ): count data exhibiting zero counts at a rate significantly higher than expected under a standard count distribution (e.g., Poisson). However, an important gap exists in sample size estimation methodology for planning sequential multiple assignment randomized trials (SMARTs) that compare dynamic treatment regimens (DTRs) using longitudinal count data. DTRs, also known as treatment algorithms or adaptive interventions, mimic the individualized and evolving nature of patient care through decision rules guiding the type, timing, modality of delivery, and dosage of treatments to address the unique and changing needs of individuals. To close this gap, we develop a Monte Carlo-based approach to sample size estimation. A SMART for engaging alcohol- and cocaine-dependent patients in treatment is used as motivation.
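In the same Monte Carlo spirit, the fragment below simulates zero-inflated Poisson outcomes for two regimens and estimates power for a crude end-of-study comparison. The zero-inflation rates, Poisson means, and the rank-based test are placeholders; they do not reproduce the paper's longitudinal DTR comparisons.

import numpy as np
from scipy import stats

rng = np.random.default_rng(6)

def zip_sample(n, p_zero, lam):
    # Zero-inflated Poisson: structural zero with prob p_zero, else Poisson(lam)
    structural = rng.random(n) < p_zero
    return np.where(structural, 0, rng.poisson(lam, size=n))

def power(n_per_arm, n_sims=2_000, alpha=0.05):
    hits = 0
    for _ in range(n_sims):
        y0 = zip_sample(n_per_arm, p_zero=0.4, lam=2.0)  # regimen A (illustrative)
        y1 = zip_sample(n_per_arm, p_zero=0.4, lam=2.8)  # regimen B (illustrative)
        _, p = stats.mannwhitneyu(y0, y1, alternative="two-sided")
        hits += p < alpha
    return hits / n_sims

for n in (50, 100, 150):
    print(n, power(n))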