
Flexible combination of multiple diagnostic biomarkers to improve diagnostic accuracy

Posted by Tu Xu
Publication date: 2015
Research field: Mathematical Statistics
Paper language: English





In medical research, it is common to collect information on multiple continuous biomarkers to improve the accuracy of diagnostic tests. Combining the measurements of these biomarkers into a single score is a popular way to integrate the collected information and usually improves the accuracy of the resulting diagnostic test. The Youden index has been widely used in the literature to measure the accuracy of a diagnostic test. Various parametric and nonparametric methods have been proposed to linearly combine biomarkers so that the corresponding Youden index is optimized, yet there seems to be little justification for enforcing such a linear combination. This paper proposes a flexible approach that allows both linear and nonlinear combinations of biomarkers. The proposed approach formulates the problem in a large-margin classification framework, where the combination function is embedded in a flexible reproducing kernel Hilbert space. Advantages of the proposed approach are demonstrated in a variety of simulated experiments as well as a real application to a liver disorder study.
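To make the idea concrete, here is a minimal sketch, assuming scikit-learn is available: an RBF-kernel support vector machine plays the role of the large-margin classifier in an RKHS and produces a nonlinear combination score, whose empirical Youden index is then computed by sweeping the decision threshold. This is only an illustration of the framework, not the paper's exact estimator (which targets the Youden index directly); the simulated data and parameter values are placeholders.

```python
import numpy as np
from sklearn.svm import SVC

def empirical_youden(scores, labels):
    """Max over thresholds of sensitivity + specificity - 1."""
    pos, neg = labels == 1, labels == 0
    best = 0.0
    for t in np.unique(scores):
        sens = np.mean(scores[pos] > t)   # true positive rate at threshold t
        spec = np.mean(scores[neg] <= t)  # true negative rate at threshold t
        best = max(best, sens + spec - 1.0)
    return best

# Simulated biomarkers: 3 markers, 100 non-diseased (y = 0) and 100 diseased (y = 1)
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0.0, 1.0, (100, 3)), rng.normal(1.0, 1.0, (100, 3))])
y = np.r_[np.zeros(100), np.ones(100)]

# Nonlinear combination score f(x) from a large-margin classifier in an RKHS
clf = SVC(kernel="rbf", C=1.0).fit(X, y)
scores = clf.decision_function(X)
print("in-sample Youden index:", round(empirical_youden(scores, y), 3))
```

In practice the kernel bandwidth and cost parameter would be tuned by cross-validation on the Youden index, and the index would be reported on held-out data rather than in-sample.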


Read also

The development of a new diagnostic test ideally follows a sequence of stages which, amongst other aims, evaluate technical performance. This includes an analytical validity study, a diagnostic accuracy study and an interventional clinical utility study. Current approaches to the design and analysis of the diagnostic accuracy study can suffer from prohibitively large sample sizes and interval estimates with undesirable properties. In this paper, we propose a novel Bayesian approach which takes advantage of information available from the analytical validity stage. We utilise assurance to calculate the required sample size based on the target width of a posterior probability interval and can choose to use or disregard the data from the analytical validity study when subsequently inferring measures of test accuracy. Sensitivity analyses are performed to assess the robustness of the proposed sample size to the choice of prior, and prior-data conflict is evaluated by comparing the data to the prior predictive distributions. We illustrate the proposed approach using a motivating real-life application involving a diagnostic test for ventilator associated pneumonia. Finally, we compare the properties of the proposed approach against commonly used alternatives. The results show that by making better use of existing data from earlier studies, the assurance-based approach can not only reduce the required sample size when compared to alternatives, but can also produce more reliable sample sizes for diagnostic accuracy studies.
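As a rough illustration of the assurance idea, the sketch below (assuming NumPy and SciPy; the Beta prior, interval-width target and assurance level are invented placeholders, not values from the paper) computes, for a single accuracy measure such as sensitivity, the prior-predictive probability that the posterior credible interval will be narrower than a target width, and searches for the smallest sample size achieving a chosen assurance.

```python
import numpy as np
from scipy import stats

def assurance(n, a_prior, b_prior, target_width, level=0.95, n_sims=2000, seed=1):
    """P(posterior credible-interval width <= target_width) under the prior predictive."""
    rng = np.random.default_rng(seed)
    hits = 0
    for _ in range(n_sims):
        sens = rng.beta(a_prior, b_prior)        # draw a "true" sensitivity from the prior
        x = rng.binomial(n, sens)                # prior-predictive data for n diseased subjects
        post = stats.beta(a_prior + x, b_prior + n - x)
        lo, hi = post.ppf([(1 - level) / 2, 1 - (1 - level) / 2])
        hits += (hi - lo) <= target_width
    return hits / n_sims

# smallest number of diseased subjects giving 80% assurance of a 95% CrI width <= 0.10
for n in range(20, 401, 10):
    if assurance(n, a_prior=8, b_prior=2, target_width=0.10) >= 0.80:
        print("required n (diseased arm):", n)
        break
```

The prior here stands in for information carried over from the analytical validity stage; in the paper's setting one would also model specificity and decide whether to retain or discard the earlier data at the inference stage.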
We study the effects of Horndeski models of dark energy on the observables of the large-scale structure in the late-time universe. A novel classification into {\it Late dark energy}, {\it Early dark energy} and {\it Early modified gravity} scenarios is proposed, according to whether such models predict deviations from the standard paradigm that persist at early times in the matter domination epoch. We discuss the physical imprints left by each specific class of models on the effective Newton constant $\mu$, the gravitational slip parameter $\eta$, the light deflection parameter $\Sigma$ and the growth function $f\sigma_8$, and demonstrate that a convenient way to draw a complete portrait of the viability of the Horndeski accelerating mechanism is via two redshift-dependent diagnostics: the $\mu(z)-\Sigma(z)$ and the $f\sigma_8(z)-\Sigma(z)$ planes. If future, model-independent measurements point to either $\Sigma-1<0$ at redshift zero, or $\mu-1<0$ with $\Sigma-1>0$ at high redshifts, or $\mu-1>0$ with $\Sigma-1<0$ at high redshifts, Horndeski theories are effectively ruled out. If $f\sigma_8$ is measured to be larger than expected in a $\Lambda$CDM model at $z>1.5$, then Early dark energy models are definitely ruled out. In the opposite case, Late dark energy models are rejected by the data if $\Sigma<1$, while, if $\Sigma>1$, only Early modifications of gravity provide a viable framework to interpret the data.
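The viability rules above amount to a simple decision procedure; the helper below just transcribes them literally (illustrative only; the treatment of measurement uncertainty and the notion of "high redshift" are simplified, and the function name is hypothetical).

```python
def horndeski_diagnostics(sigma_m1_z0, mu_m1_hiz, sigma_m1_hiz, fsigma8_above_lcdm_z15):
    """Transcription of the mu-Sigma / fsigma8-Sigma viability rules quoted above.

    sigma_m1_z0            : Sigma - 1 measured at redshift zero
    mu_m1_hiz, sigma_m1_hiz: mu - 1 and Sigma - 1 measured at high redshift
    fsigma8_above_lcdm_z15 : True if fsigma8(z > 1.5) exceeds the LambdaCDM expectation
    """
    if (sigma_m1_z0 < 0
            or (mu_m1_hiz < 0 and sigma_m1_hiz > 0)
            or (mu_m1_hiz > 0 and sigma_m1_hiz < 0)):
        return "Horndeski theories effectively ruled out"
    if fsigma8_above_lcdm_z15:
        return "Early dark energy models ruled out"
    if sigma_m1_hiz < 0:   # Sigma < 1
        return "Late dark energy models rejected"
    return "only Early modified gravity remains viable"   # Sigma > 1
```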
While difference-in-differences (DID) was originally developed with one pre-treatment and one post-treatment period, data from additional pre-treatment periods are often available. How, and under what conditions, can researchers use such multiple pre-treatment periods to improve the DID design? We first use potential outcomes to clarify three benefits of multiple pre-treatment periods: (1) assessing the parallel trends assumption, (2) improving estimation accuracy, and (3) allowing for a more flexible parallel trends assumption. We then propose a new estimator, double DID, which combines all the benefits through the generalized method of moments and contains the two-way fixed effects regression as a special case. In a wide range of applications where several pre-treatment periods are available, the double DID improves upon the standard DID in terms of both identification and estimation accuracy. We also generalize the double DID to the staggered adoption design, where different units can receive the treatment in different time periods. We illustrate the proposed method with two empirical applications, covering both the basic DID and staggered adoption designs. We offer an open-source R package that implements the proposed methodologies.
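For intuition, here is a minimal numerical sketch assuming two pre-treatment periods (t = -1, 0) and one post-treatment period (t = 1): it computes the standard DID moment, the pre-treatment "placebo" DID used to assess parallel trends, and a sequential DID that subtracts the pre-trend. The authors' double DID combines such moments via GMM with weights derived from their estimated covariance; the simple summary below only illustrates the ingredients and is not their R implementation.

```python
import numpy as np

def did_moments(y, treated, period):
    """y, treated (0/1), period (-1, 0, 1): 1-D arrays of equal length."""
    g, t = treated.astype(bool), np.asarray(period)

    def mean(group, time):
        return y[(g == group) & (t == time)].mean()

    did = (mean(True, 1) - mean(True, 0)) - (mean(False, 1) - mean(False, 0))
    pre = (mean(True, 0) - mean(True, -1)) - (mean(False, 0) - mean(False, -1))
    return {"standard DID": did,          # valid under parallel trends
            "pre-trend DID": pre,         # should be ~0 if parallel trends hold
            "sequential DID": did - pre}  # robust to a linear differential trend

# toy panel: treated units gain +2 after treatment, plus a small differential trend
rng = np.random.default_rng(1)
period = np.tile([-1, 0, 1], 200)
treated = np.repeat(rng.integers(0, 2, 200), 3)
y = 0.5 * treated * (period + 1) + 2.0 * treated * (period == 1) + rng.normal(0, 1, 600)
print(did_moments(y, treated, period))
```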
Muon beams of low emittance provide the basis for the intense, well-characterised neutrino beams of a neutrino factory and for multi-TeV lepton-antilepton collisions at a muon collider. The international Muon Ionization Cooling Experiment (MICE) has demonstrated the principle of ionization cooling, the technique by which it is proposed to reduce the phase-space volume occupied by the muon beam at such facilities. This paper documents the performance of the detectors used in MICE to measure the muon-beam parameters, and the physical properties of the liquid hydrogen energy absorber during running.
A framework is presented to model instances and degrees of local item dependence within the context of diagnostic classification models (DCMs). The study considers an undirected graphical model to describe the dependence structure of test items and draws inference based on pseudo-likelihood. The new modeling framework explicitly addresses item interactions beyond those explained by latent classes and is thus more flexible and robust against violations of local independence. It also facilitates concise interpretation of item relations by regulating the complexity of the network underlying the test items. The viability and effectiveness of the framework are demonstrated via a simulation study and a real data example. Results from the simulation study suggest that the proposed methods adequately recover the model parameters in the presence of locally dependent items and lead to a substantial improvement in estimation accuracy compared to the standard DCM approach. The analysis of real data demonstrates that the graphical DCM provides a useful summary of item interactions regarding the existence and extent of local dependence.
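To illustrate the pseudo-likelihood principle for an undirected item network (in isolation from the latent-class part of the DCM), the sketch below fits node-wise L1-penalised logistic regressions of each binary item on the remaining items and symmetrises the coefficients into an edge-weight matrix; the penalty plays the role of regulating network complexity. Function and parameter names are hypothetical and the data are simulated; this is not the paper's full graphical DCM.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def item_network(R, penalty_C=0.5):
    """R: (n_respondents, n_items) 0/1 response matrix -> symmetric edge-weight matrix."""
    n_items = R.shape[1]
    W = np.zeros((n_items, n_items))
    for j in range(n_items):
        others = np.delete(np.arange(n_items), j)
        # node-wise pseudo-likelihood fit: item j given all other items
        clf = LogisticRegression(penalty="l1", solver="liblinear", C=penalty_C)
        clf.fit(R[:, others], R[:, j])
        W[j, others] = clf.coef_[0]
    return (W + W.T) / 2.0   # average the two directed estimates per edge

# simulated responses: items 0 and 1 made locally dependent beyond chance
rng = np.random.default_rng(2)
R = rng.integers(0, 2, (500, 5))
R[:, 1] = np.where(rng.random(500) < 0.7, R[:, 0], R[:, 1])   # induce dependence
print(np.round(item_network(R), 2))
```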