
How to reduce the number of rating scale items without predictability loss?

Published by: Prof. Waldemar Koczkodaj
Publication date: 2015
Research field: Mathematical statistics
Paper language: English





Rating scales are used to elicit data about qualitative entities (e.g., research collaboration). This study presents an innovative method for reducing the number of rating scale items without predictability loss, based on the area under the receiver operating characteristic curve (AUC ROC). The presented method reduced the number of rating scale items (variables) to 28.57% of the original (from 21 to 6), making over 70% of the collected data unnecessary. The results were verified by two methods of analysis: the Graded Response Model (GRM) and Confirmatory Factor Analysis (CFA). GRM revealed that the new method differentiates between observations with high and middle scores. CFA showed that the reliability of the rating scale did not deteriorate as a result of the item reduction. Both statistical analyses evidenced the usefulness of the AUC ROC reduction method.
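The abstract does not spell out the selection procedure, but a common way to apply AUC ROC to item reduction is to score each item by the AUC it attains when its raw responses are used to predict a binary criterion, then keep the top-scoring items. The sketch below illustrates this under stated assumptions; the DataFrame `X`, the target `y`, and the `keep=6` cut-off are hypothetical, not taken from the paper.

```python
# Minimal sketch of per-item AUC screening for rating-scale reduction.
# Assumptions (not from the paper): responses live in a pandas DataFrame
# `X` with one column per scale item, and `y` is a binary criterion the
# scale is meant to predict. The paper's exact selection rule may differ.
import numpy as np
import pandas as pd
from sklearn.metrics import roc_auc_score

def rank_items_by_auc(X: pd.DataFrame, y: np.ndarray) -> pd.Series:
    """Score each item by the AUC of its raw responses against y."""
    aucs = {col: roc_auc_score(y, X[col]) for col in X.columns}
    return pd.Series(aucs).sort_values(ascending=False)

def reduce_scale(X: pd.DataFrame, y: np.ndarray, keep: int = 6) -> list:
    """Keep the `keep` items with the highest individual AUC."""
    return rank_items_by_auc(X, y).head(keep).index.tolist()

# Synthetic demonstration: 21 items, of which the first 6 carry signal.
rng = np.random.default_rng(0)
y = rng.integers(0, 2, size=200)
X = pd.DataFrame(
    {f"item_{i}": rng.integers(1, 6, size=200) + (y if i < 6 else 0)
     for i in range(21)}
)
print(reduce_scale(X, y, keep=6))  # e.g. ['item_0', ..., 'item_5']
```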




Read also

In some clinical studies, researchers may report the five-number summary (the sample median, the first and third quartiles, and the minimum and maximum values) rather than the sample mean and standard deviation. To conduct a meta-analysis pooling such studies, one needs to first estimate the sample mean and standard deviation from the five-number summary. A number of methods have been proposed in the recent literature to solve this problem. However, none of the existing estimators for the standard deviation is satisfactory for practical use. After a brief review of the existing literature, we point out that Wan et al.'s method (BMC Med Res Methodol 14:135, 2014) has a serious limitation in estimating the standard deviation from the five-number summary. To improve it, we propose a smoothly weighted estimator that incorporates the sample size information, and we derive the optimal weight for the new estimator. For ease of implementation, we also provide an approximation formula for the optimal weight and a shortcut formula for estimating the standard deviation from the five-number summary. The performance of the proposed estimator is evaluated through two simulation studies. In comparison with Wan et al.'s estimator, our new estimator provides a more accurate estimate for normal data and performs favorably for non-normal data. In real data analysis, our new method is also able to provide a more accurate estimate of the true sample standard deviation than the existing method. In this paper, we propose an optimal estimator of the standard deviation from the five-number summary. Together with the optimal mean estimator in Luo et al. (Stat Methods Med Res, in press, 2017), our new methods improve on the existing literature and make a solid contribution to meta-analysis and evidence-based medicine.
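For concreteness, here is a minimal sketch of the estimator family discussed above. Wan et al.'s (2014) equal-weight estimator combines a range-based and an IQR-based component, each scaled by the expected range of normal order statistics; the paper's contribution is to replace the fixed weight 1/2 with a sample-size-dependent optimal weight, whose exact formula is not reproduced here. The function name and the `w` parameter are illustrative.

```python
# Sketch of Wan et al.'s (2014) standard-deviation estimator from a
# five-number summary, plus the weighted form the paper generalises.
# The optimal weight derived in the paper is NOT reproduced here;
# w=0.5 recovers Wan et al.'s equal-weight estimator.
from scipy.stats import norm

def sd_from_five_number(a, q1, m, q3, b, n, w=0.5):
    """Estimate the sample SD from (min, Q1, median, Q3, max) and n.

    Combines the range-based and IQR-based components with weight w:
        S = w*(b - a)/xi(n) + (1 - w)*(q3 - q1)/eta(n)
    where xi and eta are expected normal ranges (Wan et al. 2014).
    """
    xi = 2 * norm.ppf((n - 0.375) / (n + 0.25))
    eta = 2 * norm.ppf((0.75 * n - 0.125) / (n + 0.25))
    return w * (b - a) / xi + (1 - w) * (q3 - q1) / eta

# Standard-normal-like summary of size 100: estimate should be near 1.
print(round(sd_from_five_number(-2.5, -0.67, 0.0, 0.67, 2.5, n=100), 3))
```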
In this paper, we consider the problem of reducing the semitotal domination number of a given graph by contracting $k$ edges, for some fixed $k \geq 1$. We show that this can always be done with at most 3 edge contractions and further characterise those graphs requiring 1, 2 or 3 edge contractions, respectively, to decrease their semitotal domination number. We then study the complexity of the problem for $k=1$ and obtain, in particular, a complete complexity dichotomy for monogenic classes.
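To make the notions concrete, the brute-force sketch below computes the semitotal domination number of a small graph and shows one edge contraction decreasing it. This is an illustration only: the paper is theoretical and describes no such algorithm. It assumes networkx and the textbook definition of a semitotal dominating set: a set S with |S| >= 2 that dominates G and in which every vertex of S has another vertex of S within distance 2.

```python
# Brute-force semitotal domination number for small graphs, to make the
# contracted-edge statements concrete. Uses networkx; illustrative only.
from itertools import combinations
import networkx as nx

def is_semitotal_dominating(G, S):
    S = set(S)
    # Domination: every vertex is in S or adjacent to a vertex of S.
    if any(v not in S and not S & set(G[v]) for v in G):
        return False
    # Semitotal condition: each u in S has another member within distance 2.
    return all(
        any(w != u and nx.has_path(G, u, w)
            and nx.shortest_path_length(G, u, w) <= 2 for w in S)
        for u in S
    )

def semitotal_domination_number(G):
    # Semitotal dominating sets have cardinality at least 2 by definition.
    for k in range(2, G.number_of_nodes() + 1):
        if any(is_semitotal_dominating(G, S)
               for S in combinations(G.nodes, k)):
            return k

# Contracting one edge of the 6-cycle drops the number from 3 to 2.
C6 = nx.cycle_graph(6)
C5 = nx.contracted_edge(C6, (0, 1), self_loops=False)
print(semitotal_domination_number(C6))  # 3
print(semitotal_domination_number(C5))  # 2
```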
Following criticisms against the journal Impact Factor, new journal influence scores have been developed, such as the Eigenfactor or the Prestige Scimago Journal Rank. They are based on PageRank-type algorithms applied to the cross-citation transition matrix of the citing-cited network. The PageRank algorithm performs a smoothing of the transition matrix, combining a random walk on the data network and a teleportation to all possible nodes with fixed probabilities (the damping factor being $\alpha = 0.85$). We reinterpret this smoothing matrix as the mean of a posterior distribution of a Dirichlet-multinomial model in an empirical Bayes perspective. We suggest a simple yet efficient way to make a clear distinction between structural and sampling zeroes. This allows us to contrast cases with self-citations included or excluded to avoid overvalued journal bias. We estimate the model parameters by maximizing the marginal likelihood with a Majorize-Minimize algorithm. The procedure ends up with a score similar to the PageRank ones but with a damping factor depending on each journal. The procedures are illustrated with an example about cross-citations among 47 statistical journals studied by Varin et al. (2016).
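As a reference point for the smoothing being reinterpreted, here is a minimal power-iteration sketch of the PageRank-type score with a fixed damping factor alpha = 0.85; the journal-specific damping factors estimated in the paper are not implemented. The toy cross-citation matrix is illustrative, not the 47-journal dataset of Varin et al. (2016).

```python
# Sketch of the PageRank-type smoothing described above: a damped random
# walk on the cross-citation transition matrix with uniform teleportation.
import numpy as np

def pagerank_scores(C: np.ndarray, alpha: float = 0.85, tol: float = 1e-10):
    """Power iteration on the smoothed matrix alpha*P + (1-alpha)/n.

    C[i, j] = citations from journal i to journal j; rows of P sum to 1.
    """
    n = C.shape[0]
    P = C / C.sum(axis=1, keepdims=True)   # row-normalised transitions
    M = alpha * P + (1 - alpha) / n        # teleportation smoothing
    scores = np.full(n, 1.0 / n)
    while True:
        new = scores @ M                   # one step of the damped walk
        if np.abs(new - scores).sum() < tol:
            return new / new.sum()
        scores = new

# Three-journal toy cross-citation matrix (self-citations on the diagonal).
C = np.array([[10., 5., 1.],
              [ 4., 8., 2.],
              [ 1., 1., 6.]])
print(pagerank_scores(C).round(3))
```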
Frequent Item-set Mining (FIM), sometimes called Market Basket Analysis (MBA) or Association Rule Learning (ARL), is a family of Machine Learning (ML) methods for creating rules from datasets of transactions of items. Most methods identify items likely to appear together in a transaction based on the support (i.e., a minimum relative co-occurrence of the items) for that hypothesis. Although support is a good indicator of the relevance of the assumption that these items are likely to appear together, the phenomenon of very frequent items, referred to as ubiquitous items, is not addressed in most algorithms. Ubiquitous items have the same entropy as infrequent items and do not contribute significantly to the knowledge. On the other hand, they have a strong effect on the performance of the algorithms, sometimes preventing the convergence of the FIM algorithms and thus the provision of meaningful results. This paper discusses the phenomenon of ubiquitous items and demonstrates that ignoring them has a dramatic effect on computational performance but only a low and controlled effect on the significance of the results.
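A simple way to realize the idea above is to pre-filter the transaction database: compute each item's support and drop items whose support exceeds a ubiquity ceiling (alongside the usual minimum-support floor) before running any FIM algorithm. The sketch below shows this; the thresholds and the toy baskets are illustrative assumptions, not values from the paper.

```python
# Sketch of the pre-filtering the paper discusses: remove too-rare and
# ubiquitous items from the transactions before mining frequent itemsets.
from collections import Counter

def filter_items(transactions, min_support=0.1, max_support=0.9):
    """Return transactions stripped of too-rare and ubiquitous items."""
    n = len(transactions)
    counts = Counter(item for t in transactions for item in set(t))
    keep = {item for item, c in counts.items()
            if min_support <= c / n <= max_support}
    return [[item for item in t if item in keep] for t in transactions]

baskets = [
    ["bag", "milk", "bread"],   # "bag" appears in every basket:
    ["bag", "milk", "eggs"],    # its support is 1.0, so it is
    ["bag", "bread", "eggs"],   # ubiquitous and carries almost no
    ["bag", "milk", "bread"],   # information for rule learning.
]
print(filter_items(baskets, max_support=0.9))
# [['milk', 'bread'], ['milk', 'eggs'], ['bread', 'eggs'], ['milk', 'bread']]
```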
Software design is one of the most important and key activities in the system development life cycle (SDLC) and ensures the quality of software. Different key areas of design are vital to take into consideration while designing software. Software design describes how the software system is decomposed and managed in smaller components. The object-oriented (OO) paradigm has provided the software industry with more reliable and manageable software and designs. The quality of a software design can be measured through different metrics, such as the Chidamber and Kemerer (CK) design metrics, the MOOD metrics, and the Lorenz and Kidd metrics. The CK suite is one of the oldest and most reliable sets of metrics available to the software industry for evaluating OO design. This paper presents an evaluation of the CK metrics and proposes improved CK design metric values to reduce defects during the software design phase. It also examines whether any CK design metric has a significant effect on the total number of defects per module. This is achieved by conducting a survey in two software development companies.
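As an illustration of what CK-style measurement looks like in practice, the sketch below computes two of the metrics, WMC (approximated here as a simple method count) and NOC (number of direct children), from Python source using the standard-library ast module. The mapping to Python and the sample classes are assumptions for demonstration; the paper concerns OO design metrics in general, not this implementation.

```python
# Sketch: compute WMC (here simplified to a method count) and NOC
# (number of direct subclasses) for classes in a Python module.
import ast

SRC = """
class Base:
    def a(self): ...
    def b(self): ...

class Child(Base):
    def c(self): ...
"""

def ck_metrics(source: str):
    tree = ast.parse(source)
    classes = [n for n in ast.walk(tree) if isinstance(n, ast.ClassDef)]
    # WMC: CK define it as summed method complexity; counting methods
    # (unit complexity) is a common simplification.
    wmc = {c.name: sum(isinstance(n, ast.FunctionDef) for n in c.body)
           for c in classes}
    # NOC: how many classes name this class as a direct base.
    noc = {c.name: sum(any(isinstance(b, ast.Name) and b.id == c.name
                           for b in other.bases)
                       for other in classes)
           for c in classes}
    return wmc, noc

print(ck_metrics(SRC))
# ({'Base': 2, 'Child': 1}, {'Base': 1, 'Child': 0})
```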
