
Accuracy, precision, and agreement statistical tests for Bland-Altman method

Published by: Paulo Sergio Panse Silveira
Publication date: 2021
Research field: Mathematical Statistics
Paper language: English





The Bland-Altman plot method is a graphical approach that compares related data sets, supporting the eventual replacement of one measurement method by another. Perhaps because of its easily produced graphical output it has been widely applied, yet it is often misinterpreted. We provide three nested tests, for accuracy, precision, and agreement, as a means of reaching statistical support for the equivalence of measurements. These are based on structural regressions added to the method, converting it into an inferential statistical criterion that verifies mean equality (accuracy), homoscedasticity (precision), and concordance with the bisector line (agreement). A graphical output illustrating these three tests was added, following Bland and Altman's principles. Five pairs of data sets from previously published articles that applied Bland and Altman's principles illustrate this statistical approach: one case demonstrated strict equivalence, three cases showed partial equivalence, and one case showed no equivalence. The statistical approach added to the graphical outputs turns the otherwise subjective interpretation of the Bland-Altman plot into a clear, objective result with a significance value, allowing a reliable and better communicable decision.
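The classical quantities behind a Bland-Altman comparison can be sketched as follows. This is a minimal illustration on simulated paired data, with only the familiar bias and limits of agreement plus a simple mean-equality (accuracy) check via a paired t-test; the structural-regression tests for precision and agreement proposed in the paper are not reproduced here, and all sizes and noise levels are illustrative assumptions.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

# Simulated paired measurements from two hypothetical methods A and B
# (sample size, bias, and noise levels are illustrative, not from the paper)
a = rng.normal(100, 10, size=50)
b = a + rng.normal(0.5, 2.0, size=50)   # small additive bias plus noise

diff = b - a
bias = diff.mean()
loa = 1.96 * diff.std(ddof=1)           # classical 95% limits of agreement

# A simple accuracy (mean-equality) check via a paired t-test; the paper's
# structural-regression criteria for precision and agreement are not shown.
t_stat, p_accuracy = stats.ttest_rel(a, b)

print(f"bias = {bias:.2f}, LoA = [{bias - loa:.2f}, {bias + loa:.2f}]")
print(f"accuracy p-value = {p_accuracy:.4f}")
```

A small accuracy p-value here indicates a systematic bias between the two methods, which is the first of the three nested questions the paper formalizes.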


Read also

35 - Tomokazu Konishi 2011
In contrast to its common definition and calculation, the interpretation of p-values diverges among statisticians. Since the p-value is the basis of various methodologies, this divergence has led to a variety of test methodologies and evaluations of test results. This chaotic situation has complicated the application of tests and decision processes. Here, the origin of the divergence is found in the prior probability of the test. The effects of differences in Pr(H0 = true) on the character of p-values are investigated by comparing real microarray data and its artificial imitations as subjects of Student's t-tests. The importance of the prior probability is also discussed in terms of the applicability of Bayesian approaches. A suitable methodology is found in accordance with the prior probability and purpose of the test.
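The effect of Pr(H0 = true) on how p-values behave can be seen in a short simulation, a sketch under illustrative assumptions (group size, effect size, and number of tests are arbitrary choices, not from the paper):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_tests, n = 2000, 20

# Case 1: H0 true; both groups are drawn from the same distribution
p_null = np.array([stats.ttest_ind(rng.normal(0, 1, n), rng.normal(0, 1, n)).pvalue
                   for _ in range(n_tests)])

# Case 2: H0 false; the groups differ by a mean shift of 0.8
p_alt = np.array([stats.ttest_ind(rng.normal(0, 1, n), rng.normal(0.8, 1, n)).pvalue
                  for _ in range(n_tests)])

# Under a true null the p-values are roughly uniform; under the alternative
# they pile up near zero, so the meaning of a small p-value depends on the
# prior probability that the null is true.
print(f"fraction p < 0.05 | H0 true : {np.mean(p_null < 0.05):.3f}")
print(f"fraction p < 0.05 | H0 false: {np.mean(p_alt < 0.05):.3f}")
```

In a screening setting where most nulls are true (as in microarray data), most sub-0.05 p-values come from the near-uniform null population, which is the divergence in interpretation the abstract describes.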
Measuring the veracity or reliability of noisy data is of utmost importance, especially in scenarios where the information is gathered through automated systems. In a recent paper, Chakraborty et al. (2019) introduced a veracity scoring technique for geostatistical data. The authors used a high-quality 'reference' data set to measure the veracity of varying-quality observations and incorporated the veracity scores in their analysis of mobile-sensor-generated noisy weather data to generate efficient predictions of the ambient temperature process. In this paper, we consider the scenario where no reference data is available and hence the veracity scores (referred to as VS) are defined based on 'local' summaries of the observations. We develop a VS-based estimation method for the parameters of a spatial regression model. Under a non-stationary noise structure and fairly general assumptions on the underlying spatial process, we show that the VS-based estimators of the regression parameters are consistent. Moreover, we establish the advantage of the VS-based estimators over the ordinary least squares (OLS) estimator by analyzing their asymptotic mean squared errors. We illustrate the merits of the VS-based technique through simulations and apply the methodology to a real data set on mass percentages of ash in coal seams in Pennsylvania.
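The general idea of scoring observations by a local summary and using the scores as regression weights can be sketched on a one-dimensional toy problem. The scoring rule below (proximity to a running median) is a hypothetical stand-in, not the paper's VS definition, and all sizes and noise levels are illustrative:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200
x = np.sort(rng.uniform(0, 10, n))
sigma = np.where(rng.random(n) < 0.3, 5.0, 0.5)   # ~30% low-quality points
y = 2.0 + 1.5 * x + rng.normal(0, sigma)          # true intercept 2, slope 1.5

# Toy veracity score from a 'local summary': proximity to the running median.
# This scoring rule is an illustration, not the definition used in the paper.
k = 7
local_med = np.array([np.median(y[max(0, i - k):i + k + 1]) for i in range(n)])
vs = 1.0 / (1.0 + (y - local_med) ** 2)

# Weighted least squares with the veracity scores as weights, vs. plain OLS
X = np.column_stack([np.ones(n), x])
beta_vs = np.linalg.solve(X.T @ (vs[:, None] * X), X.T @ (vs * y))
beta_ols = np.linalg.solve(X.T @ X, X.T @ y)
print(f"OLS:         {beta_ols}")
print(f"VS-weighted: {beta_vs}")
```

Observations that disagree with their neighborhood receive low scores and thus little influence, which is the intuition behind the efficiency gain over OLS that the paper establishes formally.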
127 - Yan Zhou , Jiadi Zhu , Tiejun Tong 2018
Background: High-throughput techniques bring novel tools but also statistical challenges to genomic research. Identifying genes with differential expression between different species is an effective way to discover evolutionarily conserved transcriptional responses. To remove systematic variation between different species for a fair comparison, the normalization procedure serves as a crucial pre-processing step that adjusts for the varying sample sequencing depths and other confounding technical effects. Results: In this paper, we propose a scale-based normalization (SCBN) method that takes into account the available knowledge of conserved orthologous genes and a hypothesis-testing framework. Considering the different gene lengths and unmapped genes between different species, we formulate the problem from the perspective of hypothesis testing and search for the optimal scaling factor that minimizes the deviation between the empirical and nominal type I errors. Conclusions: Simulation studies show that the proposed method performs significantly better than the existing competitor in a wide range of settings. An RNA-seq dataset of different species is also analyzed, and it coincides with the conclusion that the proposed method outperforms the existing method. For practical applications, we have also developed an R package named SCBN, and the software is available at http://www.bioconductor.org/packages/devel/bioc/html/SCBN.html.
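The optimization criterion, choosing the scaling factor whose empirical type I error on conserved genes is closest to the nominal level, can be sketched in a deliberately simplified form. This is not the SCBN implementation (which is an R package and also accounts for gene lengths and unmapped genes); the simulation, test choice, and grid below are illustrative assumptions:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)

# Simulated read counts for 200 conserved (non-differential) orthologous
# genes in two hypothetical species; species 2 is sequenced at 2x depth.
mu = rng.gamma(5, 20, size=200)
c1 = rng.poisson(mu)
c2 = rng.poisson(2.0 * mu)

def empirical_type1(scale, alpha=0.05):
    # Per-gene exact binomial test: conditional on the total count, the
    # species-2 count is Binomial(total, scale / (scale + 1)) under the null.
    p = scale / (scale + 1.0)
    pvals = [stats.binomtest(int(a), int(a + b), p).pvalue
             for a, b in zip(c2, c1) if a + b > 0]
    return np.mean(np.array(pvals) < alpha)

# Grid search for the scaling factor that minimizes the deviation between
# the empirical and nominal (5%) type I errors on the conserved genes
grid = np.linspace(1.0, 3.0, 11)
best = min(grid, key=lambda s: abs(empirical_type1(s) - 0.05))
print(f"estimated scaling factor: {best:.1f}")  # true depth ratio is 2.0
```

A wrong scaling factor makes conserved genes look differentially expressed, inflating the type I error, so the minimizer recovers the depth ratio.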
126 - S. Ghigna 1996
We apply several statistical estimators to high-resolution N-body simulations of two currently viable cosmological models: a mixed dark matter model, having $\Omega_\nu=0.2$ contributed by two massive neutrinos (C+2$\nu$DM), and a Cold Dark Matter model with Cosmological Constant ($\Lambda$CDM) with $\Omega_0=0.3$ and $h=0.7$. Our aim is to compare simulated galaxy samples with the Perseus-Pisces redshift survey (PPS). We consider the n-point correlation functions ($n=2$-$4$), the N-count probability functions $P_N$, including the void probability function $P_0$, and the underdensity probability function $U_\epsilon$ (where $\epsilon$ fixes the underdensity threshold as a percentage of the average). We find that $P_0$ (for which PPS and CfA2 data agree) and $P_1$ distinguish efficiently between the models, while $U_\epsilon$ is only marginally discriminatory. On the contrary, the reduced skewness and kurtosis are, respectively, $S_3\simeq 2.2$ and $S_4\simeq 6$-$7$ in all cases, quite independent of the scale, in agreement with hierarchical scaling predictions and estimates based on redshift surveys. Among our results, we emphasize the remarkable agreement between PPS data and C+2$\nu$DM in all the tests performed. In contrast, the above $\Lambda$CDM model has serious difficulties in reproducing observational data if galaxies and matter overdensities are related in a simple way.
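The reduced skewness and kurtosis quoted above have simple moment definitions, $S_3 = \langle\delta^3\rangle_c / \langle\delta^2\rangle^2$ and $S_4 = \langle\delta^4\rangle_c / \langle\delta^2\rangle^3$. A toy computation, with a hypothetical lognormal field standing in for the simulated density contrasts, looks like:

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical zero-mean density contrasts from a lognormal toy field;
# a stand-in for the simulated galaxy samples, purely illustrative.
delta = rng.lognormal(0, 0.5, size=100_000) - np.exp(0.125)

m2 = np.mean(delta**2)
S3 = np.mean(delta**3) / m2**2                 # reduced skewness
S4 = (np.mean(delta**4) - 3 * m2**2) / m2**3   # reduced kurtosis (connected)
print(f"S3 = {S3:.2f}, S4 = {S4:.2f}")
```

Hierarchical scaling is the statement that these ratios stay roughly constant as the smoothing scale varies, which is what the simulations and the redshift-survey estimates agree on.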
In Bayesian statistics, the choice of prior distribution is often debatable, especially if prior knowledge is limited or data are scarce. In imprecise probability, sets of priors are used to accurately model and reflect prior knowledge. This has the advantage that prior-data conflict sensitivity can be modelled: ranges of posterior inferences should be larger when prior and data are in conflict. We propose a new method for generating prior sets which, in addition to prior-data conflict sensitivity, allows strong prior-data agreement to be reflected by decreased posterior imprecision.
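The notion of posterior imprecision reacting to prior-data conflict can be sketched in a Beta-Binomial setting. This is not the paper's construction; the prior set below, parametrized by a prior mean y0 and a prior strength n0 with illustrative ranges, is a minimal generalized-Bayes example:

```python
import numpy as np

# Prior set over both the prior mean y0 and the prior strength n0 of a
# Beta(n0*y0, n0*(1-y0)) prior; the ranges are illustrative choices.
n0_range = np.linspace(1, 5, 41)
y0_range = np.linspace(0.2, 0.8, 61)

def posterior_mean_range(successes, n):
    """Range of Beta-Binomial posterior means over the whole prior set."""
    means = [(n0 * y0 + successes) / (n0 + n)
             for n0 in n0_range for y0 in y0_range]
    return min(means), max(means)

# 3/10 successes agrees with prior means in [0.2, 0.8]; 10/10 conflicts.
lo_a, hi_a = posterior_mean_range(successes=3, n=10)
lo_c, hi_c = posterior_mean_range(successes=10, n=10)

print(f"agreement: [{lo_a:.3f}, {hi_a:.3f}], width {hi_a - lo_a:.3f}")
print(f"conflict:  [{lo_c:.3f}, {hi_c:.3f}], width {hi_c - lo_c:.3f}")
```

Because the strength n0 varies alongside the mean, the posterior range is wider under conflict than under agreement; with n0 held fixed, the width would be data-independent and the conflict sensitivity would be lost.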