
On the Interplay Between Exposure Misclassification and Informative Cluster Size

Posted by: Glen McGee
Publication date: 2019
Research field: Mathematical Statistics
Language: English





In this paper we study the impact of exposure misclassification when cluster size is potentially informative (i.e., related to outcomes) and when misclassification is differential by cluster size. First, we show that misclassification in an exposure related to cluster size can induce informativeness when cluster size would otherwise be non-informative. Second, we show that misclassification that is differential by informative cluster size can not only attenuate estimates of exposure effects but can even inflate them or reverse their sign. To correct for bias in estimating marginal parameters, we propose two frameworks: (i) an observed likelihood approach for joint marginalized models of cluster size and outcomes and (ii) an expected estimating equations approach. Although we focus on estimating marginal parameters, a corollary is that the observed likelihood approach permits valid inference for conditional parameters as well. We apply and compare the proposed correction methods using motivating data from the Nurses' Health Study II on the multigenerational effect of in utero diethylstilbestrol exposure on attention-deficit/hyperactivity disorder in 106,198 children of 47,450 nurses.
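As a rough illustration of the bias the abstract describes, the following Python sketch (not the authors' code; every parameter value is invented) simulates clusters whose size depends on a cluster-level random effect that also drives the outcome, misclassifies a binary exposure at a rate that grows with cluster size, and compares naive GEE slopes fit with the true versus the misclassified exposure; the misclassified fit is expected to be attenuated.

```python
# Minimal simulation sketch (hypothetical setup, not the paper's correction methods).
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n_clusters = 2000
rows = []
for i in range(n_clusters):
    b = rng.normal(0, 1)                      # cluster random effect
    size = 1 + rng.poisson(np.exp(0.5 * b))   # cluster size depends on b -> informative
    x_true = rng.binomial(1, 0.5, size)       # true binary exposure
    p_flip = 0.05 + 0.05 * min(size, 6)       # misclassification rate grows with cluster size
    flip = rng.binomial(1, p_flip, size)
    x_obs = np.where(flip == 1, 1 - x_true, x_true)
    p_y = 1 / (1 + np.exp(-(-1.0 + 1.0 * x_true + b)))   # conditional exposure effect = 1.0
    y_i = rng.binomial(1, p_y)
    for j in range(size):
        rows.append((i, x_true[j], x_obs[j], y_i[j]))

ids, xt, xo, y = map(np.array, zip(*rows))

def gee_slope(x):
    """Naive marginal (GEE) slope for a given exposure measurement."""
    X = sm.add_constant(x.astype(float))
    model = sm.GEE(y, X, groups=ids, family=sm.families.Binomial(),
                   cov_struct=sm.cov_struct.Independence())
    return model.fit().params[1]

print("GEE slope, true exposure:     %.3f" % gee_slope(xt))
print("GEE slope, misclassified:     %.3f" % gee_slope(xo))
```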


Read also

Background: There is increasing interest in approaches for analyzing the effect of exposure mixtures on health. A key issue is how to simultaneously analyze often highly collinear components of the mixture, which can create problems such as confounding by co-exposure and co-exposure amplification bias (CAB). Evaluation of novel mixtures methods, typically using synthetic data, is critical to their ultimate utility. Objectives: This paper aims to answer two questions. How do causal models inform the interpretation of statistical models and the creation of synthetic data used to test them? Are novel mixtures methods susceptible to CAB? Methods: We use directed acyclic graphs (DAGs) and linear models to derive closed-form solutions for model parameters to examine how underlying causal assumptions affect the interpretation of model results. Results: The same beta coefficients estimated by a statistical model can have different interpretations depending on the assumed causal structure. Similarly, the method used to simulate data can have implications for the underlying DAG (and vice versa), and therefore for the identification of the parameter being estimated with an analytic approach. We demonstrate that methods that can reproduce results of linear regression, such as Bayesian kernel machine regression and the new quantile g-computation approach, will be subject to CAB. However, under some conditions, estimates of an overall effect of the mixture are not subject to CAB and even have reduced uncontrolled bias. Discussion: Just as DAGs encode a priori subject-matter knowledge allowing identification of the variable control needed to block analytic bias, we recommend explicitly identifying the DAGs underlying synthetic data created to test statistical mixtures approaches. Estimation of the total effect of a mixture is an important but relatively underexplored topic that warrants further investigation.
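A minimal simulation may help fix the idea of co-exposure amplification bias. In the hypothetical data below (not taken from the paper), X2 shares a source with X1 but has no effect on Y, and U is an unmeasured confounder of X1 and Y; adjusting for the co-exposure X2 amplifies the confounding bias in the X1 coefficient.

```python
# Toy illustration of co-exposure amplification bias (CAB); all values are invented.
import numpy as np

rng = np.random.default_rng(1)
n = 200_000
U = rng.normal(size=n)                         # unmeasured confounder of X1 -> Y
S = rng.normal(size=n)                         # shared source driving both exposures
X1 = S + U + 0.5 * rng.normal(size=n)
X2 = S + 0.5 * rng.normal(size=n)              # co-exposure with no causal effect on Y
Y = 1.0 * X1 + 1.0 * U + rng.normal(size=n)    # true effect of X1 is 1.0

def x1_coef(y, *cols):
    """OLS coefficient on the first regressor after the intercept."""
    X = np.column_stack([np.ones(n)] + list(cols))
    return np.linalg.lstsq(X, y, rcond=None)[0][1]

print("X1 coefficient, unadjusted:       %.3f" % x1_coef(Y, X1))
print("X1 coefficient, adjusted for X2:  %.3f" % x1_coef(Y, X1, X2))
```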
Takuya Ura (2016)
This paper considers the instrumental variable quantile regression model (Chernozhukov and Hansen, 2005, 2013) with a binary endogenous treatment. It offers two identification results when the treatment status is not directly observed. The first result is that, remarkably, the reduced-form quantile regression of the outcome variable on the instrumental variable provides a lower bound on the structural quantile treatment effect under the stochastic monotonicity condition (Small and Tan, 2007; DiNardo and Lee, 2011). This result is relevant, not only when the treatment variable is subject to misclassification, but also when any measurement of the treatment variable is not available. The second result is for the structural quantile function when the treatment status is measured with error; I obtain the sharp identified set by deriving moment conditions under widely used assumptions on the measurement error. Furthermore, I propose an inference method in the presence of other covariates.
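The sketch below is only a toy numerical check, under strong simplifying assumptions (one-sided noncompliance, a constant treatment effect, invented data), of the flavour of the first result: the reduced-form quantile regression of the outcome on a binary instrument stays below the structural quantile treatment effect.

```python
# Hypothetical illustration: reduced-form quantile regression on the instrument
# versus the (known, simulated) structural quantile treatment effect.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(2)
n = 50_000
Z = rng.binomial(1, 0.5, n)                 # binary instrument
complier = rng.binomial(1, 0.6, n)          # 60% of units comply with encouragement
D = Z * complier                            # binary treatment, monotone in Z
tau_true = 2.0                              # constant structural treatment effect
Y = rng.normal(size=n) + tau_true * D       # outcome

X = sm.add_constant(Z.astype(float))
for q in (0.25, 0.5, 0.75):
    fit = sm.QuantReg(Y, X).fit(q=q)
    print(f"q={q}: reduced-form effect = {fit.params[1]:.2f}  (true QTE = {tau_true})")
```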
Instrumental variable methods are widely used in medical and social science research to draw causal conclusions when the treatment and outcome are confounded by unmeasured confounding variables. One important feature of such studies is that the instrumental variable is often applied at the cluster level, e.g., hospitals' or physicians' preference for a certain treatment, where each hospital or physician naturally defines a cluster. This paper proposes to embed such observational instrumental variable data into a cluster-randomized encouragement experiment using statistical matching. Potential outcomes and causal assumptions underpinning the design are formalized and examined. Testing procedures for two commonly used estimands, Fisher's sharp null hypothesis and the pooled effect ratio, are extended to the current setting. We then introduce a novel cluster-heterogeneous proportional treatment effect model and the relevant estimand: the average cluster effect ratio. This new estimand is advantageous over the structural parameter in a constant proportional treatment effect model in that it allows treatment heterogeneity, and is advantageous over the pooled effect ratio estimand in that it is immune to Simpson's paradox. We develop an asymptotically valid randomization-based testing procedure for this new estimand based on solving a mixed integer quadratically constrained optimization problem. The proposed design and inferential methods are applied to a study of the effect of using transesophageal echocardiography during CABG surgery on patients' 30-day mortality rate.
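For intuition, here is a hypothetical sketch of the simplest ingredient of such a design: a randomization test of Fisher's sharp null in matched pairs of clusters, where under the null the only randomness is which cluster in each pair was encouraged. The data and the matching are simulated, and the paper's estimands and optimization-based procedures are not implemented here.

```python
# Pairwise randomization test sketch for a matched cluster-encouragement design.
import numpy as np

rng = np.random.default_rng(3)
n_pairs = 100
# cluster-mean outcomes per matched pair: column 0 = encouraged, column 1 = control
pair_means = rng.normal(size=(n_pairs, 2))
pair_means[:, 0] += 0.3                     # hypothetical encouragement effect

diffs = pair_means[:, 0] - pair_means[:, 1]
observed = diffs.mean()

# Under the sharp null the outcomes are fixed, so re-randomizing which cluster in
# each pair is "encouraged" simply flips the sign of each pair difference.
n_perm = 10_000
signs = rng.choice([-1, 1], size=(n_perm, n_pairs))
perm_stats = (signs * diffs).mean(axis=1)
p_value = np.mean(np.abs(perm_stats) >= abs(observed))
print(f"observed mean pair difference = {observed:.3f}, randomization p-value = {p_value:.4f}")
```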
The development of a new diagnostic test ideally follows a sequence of stages which, amongst other aims, evaluate technical performance. This includes an analytical validity study, a diagnostic accuracy study and an interventional clinical utility study. Current approaches to the design and analysis of the diagnostic accuracy study can suffer from prohibitively large sample sizes and interval estimates with undesirable properties. In this paper, we propose a novel Bayesian approach which takes advantage of information available from the analytical validity stage. We utilise assurance to calculate the required sample size based on the target width of a posterior probability interval and can choose to use or disregard the data from the analytical validity study when subsequently inferring measures of test accuracy. Sensitivity analyses are performed to assess the robustness of the proposed sample size to the choice of prior, and prior-data conflict is evaluated by comparing the data to the prior predictive distributions. We illustrate the proposed approach using a motivating real-life application involving a diagnostic test for ventilator-associated pneumonia. Finally, we compare the properties of the proposed approach against commonly used alternatives. The results show that by making better use of existing data from earlier studies, the assurance-based approach can not only reduce the required sample size when compared to alternatives, but can also produce more reliable sample sizes for diagnostic accuracy studies.
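A hedged sketch of the assurance idea (not the paper's exact procedure, and with an invented Beta prior standing in for the analytical validity data): draw the true sensitivity from a design prior, simulate the accuracy study, and report the probability that the resulting posterior interval is narrower than a target width.

```python
# Monte Carlo assurance for the width of a posterior interval on sensitivity.
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)
prior_a, prior_b = 18, 2            # hypothetical prior for sensitivity (mean ~0.9)
target_width, assurance_goal = 0.10, 0.80
n_sim = 5_000

def assurance(n):
    """Probability that the 95% posterior interval is no wider than the target."""
    sens = rng.beta(prior_a, prior_b, n_sim)       # draw the truth from the design prior
    successes = rng.binomial(n, sens)              # simulated accuracy-study data
    lo = stats.beta.ppf(0.025, prior_a + successes, prior_b + n - successes)
    hi = stats.beta.ppf(0.975, prior_a + successes, prior_b + n - successes)
    return np.mean(hi - lo <= target_width)

for n in range(50, 401, 50):
    a = assurance(n)
    print(f"n = {n:3d}: assurance = {a:.2f}")
    if a >= assurance_goal:
        break
```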
Manufacturers are required to demonstrate that products meet reliability targets. A typical way to achieve this is with reliability demonstration tests (RDTs), in which a number of products are put on test and the test is passed if a target reliability is achieved. There are various methods for determining the sample size for RDTs, typically based on the power of a hypothesis test following the RDT or on risk criteria. Bayesian risk criteria approaches can conflate the choice of sample size and the analysis to be undertaken once the test has been conducted, and rely on the specification of somewhat artificial acceptable and rejectable reliability levels. In this paper we offer an alternative approach to sample size determination based on the idea of assurance. This approach chooses the sample size to provide a certain probability that the RDT will result in a successful outcome. It separates the design and analysis of the RDT, allowing different priors for each. We develop the assurance approach for sample size calculations in RDTs for binomial and Weibull likelihoods and propose appropriate prior distributions for the design and analysis of the test. In each case, we illustrate the approach with an example based on real data.
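The same idea in its simplest binomial form, as a hypothetical sketch rather than the paper's method: with a Beta design prior on the failure probability and a pass criterion of at most c failures in n units, the assurance is the prior predictive probability that the RDT is passed.

```python
# Assurance for a binomial reliability demonstration test (assumed setup).
import numpy as np

design_prior_a, design_prior_b = 2, 48    # hypothetical Beta prior on the failure probability (~0.04)
c = 1                                     # maximum number of failures allowed to pass the RDT

def assurance(n, n_sim=100_000, seed=5):
    """Prior predictive probability that the test of n units is passed."""
    rng = np.random.default_rng(seed)
    p_fail = rng.beta(design_prior_a, design_prior_b, n_sim)   # draw failure prob from design prior
    failures = rng.binomial(n, p_fail)
    return np.mean(failures <= c)

for n in (10, 20, 40, 80):
    print(f"n = {n:3d}: assurance (probability the RDT is passed) = {assurance(n):.3f}")
```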