
An Aggregation Scheme for Increased Power in Primary Outcome Analysis

Published by: Timothy Lycurgus
Publication date: 2021
Research field: Mathematical Statistics
Paper language: English





A novel aggregation scheme increases power in randomized controlled trials and quasi-experiments when the intervention possesses a robust and well-articulated theory of change. Longitudinal data from intervention studies often include multiple observations on individuals, some of which may be more likely to manifest a treatment effect than others. An intervention's theory of change provides guidance as to which of those observations are best situated to exhibit that treatment effect. Our power-maximizing weighting for repeated-measurements with delayed-effects scheme, PWRD aggregation, converts the theory of change into a test statistic with improved Pitman efficiency, delivering tests with greater statistical power. We illustrate this method on an IES-funded cluster randomized trial testing the efficacy of a reading intervention designed to assist early elementary students at risk of falling behind their peers. The salient theory of change holds program benefits to be delayed and non-uniform, experienced after a student's performance stalls. This intervention is not found to have an effect, but the PWRD technique's effect on power is found to be comparable to that of a doubling of (cluster-level) sample size.
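The core idea, weighting repeated measurements by how likely each is to manifest a delayed treatment effect, can be illustrated with a toy sketch. The weights and numbers below are hypothetical; the actual PWRD weights are derived formally in the paper and are not this simple choice.

```python
def weighted_effect(diffs, weights):
    """Aggregate per-wave treatment-control mean differences with weights."""
    total = sum(weights)
    return sum(w * d for w, d in zip(weights, diffs)) / total

# Four measurement waves; a delayed-effect theory of change says the
# effect should appear only in later waves, so those waves get more weight.
diffs = [0.0, 0.1, 0.4, 0.5]      # hypothetical observed differences per wave
uniform = [1, 1, 1, 1]            # naive equal weighting
delayed = [0, 0.2, 0.9, 1.0]      # hypothetical delayed-effect weights

naive = weighted_effect(diffs, uniform)    # 0.25
pwrd_like = weighted_effect(diffs, delayed)  # ≈ 0.42, a sharper signal
```

Concentrating weight on the waves where the theory of change predicts an effect is what drives the gain in Pitman efficiency relative to uniform aggregation.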


Read also

Ideally, a meta-analysis will summarize data from several unbiased studies. Here we consider the less than ideal situation in which contributing studies may be compromised by measurement error. Measurement error affects every study design, from randomized controlled trials to retrospective observational studies. We outline a flexible Bayesian framework for continuous outcome data which allows one to obtain appropriate point and interval estimates with varying degrees of prior knowledge about the magnitude of the measurement error. We also demonstrate how, if individual-participant data (IPD) are available, the Bayesian meta-analysis model can adjust for multiple participant-level covariates, measured with or without measurement error.
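The problem this framework addresses is classical regression dilution: measurement error in a covariate attenuates the estimated effect toward zero. A minimal frequentist analogue of the correction (the Bayesian framework in the abstract generalizes this, propagating uncertainty about the reliability rather than plugging in a point value):

```python
def deattenuate(beta_obs, reliability):
    """Classical measurement-error correction (regression dilution):
    with reliability = var(true X) / var(observed X), the attenuated
    slope satisfies beta_obs ≈ reliability * beta_true, so we divide.
    """
    if not 0 < reliability <= 1:
        raise ValueError("reliability must lie in (0, 1]")
    return beta_obs / reliability

# A study reports a slope of 0.5, but the exposure was measured with
# noise giving reliability 0.8 (hypothetical numbers):
corrected = deattenuate(0.5, 0.8)  # ≈ 0.625
```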
We develop the scale transformed power prior for settings where historical and current data involve different data types, such as binary and continuous data, respectively. This situation arises often in clinical trials, for example, when historical data involve binary responses and the current data involve time-to-event or some other type of continuous or discrete outcome. The power prior proposed by Ibrahim and Chen (2000) does not address the issue of different data types. Herein, we develop a new type of power prior, which we call the scale transformed power prior (straPP). The straPP is constructed by transforming the power prior for the historical data by rescaling the parameter using a function of the Fisher information matrices for the historical and current data models, thereby shifting the scale of the parameter vector from that of the historical to that of the current data. Examples are presented to motivate the need for a scale transformation and simulation studies are presented to illustrate the performance advantages of the straPP over the power prior and other informative and non-informative priors. A real dataset from a clinical trial undertaken to study a novel transitional care model for stroke survivors is used to illustrate the methodology.
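The rescaling step can be caricatured in one dimension. This is only an illustrative scalar analogue under assumed notation; the paper's construction uses the full Fisher information matrices of the historical and current data models, and the exact form of the transform is defined there.

```python
import math

def scale_transform(theta_hist, info_hist, info_curr):
    """Scalar sketch of the straPP idea: rescale a historical parameter
    by the square root of a Fisher-information ratio, moving it from
    the historical-data scale to the current-data scale.
    info_hist / info_curr are assumed scalar Fisher informations."""
    return theta_hist * math.sqrt(info_hist / info_curr)

# Hypothetical values: a historical (binary-outcome) estimate of 2.0,
# with information 4.0 historically and 1.0 under the current model.
theta_rescaled = scale_transform(2.0, 4.0, 1.0)  # 4.0
```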
Multi-criteria decision analysis (MCDA) is a quantitative approach to drug benefit-risk assessment (BRA) which allows for consistent comparisons by summarising all benefits and risks in a single score. The MCDA consists of several components, one of which is the utility (or loss) score function that defines how benefits and risks are aggregated into a single quantity. While a linear utility score is one of the most widely used approaches in BRA, it is recognised that it can result in counter-intuitive decisions, for example, recommending a treatment with extremely low benefits or high risks. To overcome this problem, alternative approaches to score construction, namely, product, multi-linear and Scale Loss Score models, were suggested. However, to date, the majority of arguments concerning the differences implied by these models are heuristic. In this work, we consider four models to calculate the aggregated utility/loss scores and compare their performance in an extensive simulation study over many different scenarios, and in a case study. It is found that the product and Scale Loss Score models provide more intuitive treatment recommendation decisions in the majority of scenarios compared to the linear and multi-linear models, and are more robust to correlation in the criteria.
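The linear-versus-product distinction is easy to demonstrate. In the toy comparison below (hypothetical criterion values and weights, not from the paper's case study), a linear score prefers a treatment with an extreme risk profile, while a weighted-geometric (product) score penalizes the low criterion and prefers the balanced treatment:

```python
def linear_score(values, weights):
    """Linear utility: weighted arithmetic aggregation of criteria."""
    return sum(w * v for w, v in zip(weights, values))

def product_score(values, weights):
    """Product utility: weighted geometric aggregation of criteria."""
    out = 1.0
    for w, v in zip(weights, values):
        out *= v ** w
    return out

# Two criteria (benefit, safety) scaled to [0, 1], equal weights.
w = [0.5, 0.5]
balanced = [0.6, 0.6]   # moderate benefit, moderate safety
extreme = [1.0, 0.3]    # maximal benefit, poor safety

# Linear: 0.65 vs 0.60 -> recommends the extreme treatment.
# Product: ~0.55 vs 0.60 -> recommends the balanced treatment.
lin_extreme, lin_balanced = linear_score(extreme, w), linear_score(balanced, w)
prod_extreme, prod_balanced = product_score(extreme, w), product_score(balanced, w)
```

Because the product score is multiplicative, any criterion near zero drags the whole score toward zero, which is the mechanism behind the "more intuitive" recommendations reported above.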
Peng Liu, Yusi Fang, Zhao Ren (2020)
High-throughput microarray and sequencing technology have been used to identify disease subtypes that could not be observed otherwise by using clinical variables alone. The classical unsupervised clustering strategy concerns primarily the identification of subpopulations that have similar patterns in gene features. However, as the features corresponding to irrelevant confounders (e.g. gender or age) may dominate the clustering process, the resulting clusters may or may not capture clinically meaningful disease subtypes. This gives rise to a fundamental problem: can we find a subtyping procedure guided by a pre-specified disease outcome? Existing methods, such as supervised clustering, apply a two-stage approach and depend on an arbitrary number of selected features associated with outcome. In this paper, we propose a unified latent generative model to perform outcome-guided disease subtyping constructed from omics data, which improves the resulting subtypes concerning the disease of interest. Feature selection is embedded in a regularization regression. A modified EM algorithm is applied for numerical computation and parameter estimation. The proposed method performs feature selection, latent subtype characterization and outcome prediction simultaneously. To account for possible outliers or violation of the mixture Gaussian assumption, we incorporate robust estimation using an adaptive Huber or median-truncated loss function. Extensive simulations and an application to complex lung diseases with transcriptomic and clinical data demonstrate the ability of the proposed method to identify clinically relevant disease subtypes and signature genes suitable to explore toward precision medicine.
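The robustness ingredient mentioned at the end, the Huber loss, is standard: quadratic for small residuals, linear for large ones, so outliers contribute less than under squared error. A minimal sketch (the paper's adaptive variant tunes the threshold `delta` data-dependently, which is not shown here):

```python
def huber_loss(r, delta=1.345):
    """Huber loss for residual r: 0.5*r^2 if |r| <= delta,
    otherwise delta*(|r| - 0.5*delta), i.e. linear in the tails."""
    a = abs(r)
    if a <= delta:
        return 0.5 * r * r
    return delta * (a - 0.5 * delta)

# Small residuals behave like squared error; a gross outlier is
# penalized only linearly rather than quadratically.
small = huber_loss(0.5)        # 0.125, same as 0.5 * 0.5**2
outlier = huber_loss(10.0, 1.0)  # 9.5, far below 0.5 * 10**2 = 50
```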
Sai Li, Zijian Guo (2020)
Instrumental variable methods are widely used for inferring the causal effect of an exposure on an outcome when the observed relationship is potentially affected by unmeasured confounders. Existing instrumental variable methods for nonlinear outcome models require stringent identifiability conditions. We develop a robust causal inference framework for nonlinear outcome models, which relaxes the conventional identifiability conditions. We adopt a flexible semi-parametric potential outcome model and propose new identifiability conditions for identifying the model parameters and causal effects. We devise a novel three-step inference procedure for the conditional average treatment effect and establish the asymptotic normality of the proposed point estimator. We construct confidence intervals for the causal effect by the bootstrap method. The proposed method is demonstrated in a large set of simulation studies and is applied to study the causal effects of lipid levels on whether the glucose level is normal or high in a mouse dataset.