Many studies in information science have looked at the growth of science. In this study, we re-examine the question of the growth of science. To do this we (i) use current data up to publication year 2012 and (ii) analyse it across all disciplines and also separately for the natural sciences and for the medical and health sciences. Furthermore, the data are analysed with an advanced statistical technique, segmented regression analysis, which can identify specific segments with similar growth rates in the history of science. The study is based on two different sets of bibliometric data: (1) the number of publications held as source items in the Web of Science (WoS, Thomson Reuters) per publication year and (2) the number of cited references in the publications of the source items per cited reference year. We have looked at the rate at which science has grown since the mid-1600s. In our analysis of cited references we identified three growth phases in the development of science, each of which roughly tripled the growth rate of the previous phase: from less than 1% up to the middle of the 18th century, to 2 to 3% up to the period between the two world wars, and to 8 to 9% up to 2012.
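The segmented-regression idea can be illustrated in a few lines of Python: exponential growth is linear on a logarithmic scale, so fitting a separate regression line to log-transformed annual counts within each candidate segment and searching over breakpoints recovers phase-specific growth rates. The following is a minimal sketch under that assumption; the two-breakpoint grid search and all names are illustrative, not the authors' exact procedure.

```python
import numpy as np

def _segment_fit(years, log_counts):
    # OLS line within one segment; returns (sum of squared errors, slope).
    slope, intercept = np.polyfit(years, log_counts, 1)
    resid = log_counts - (slope * years + intercept)
    return float(resid @ resid), slope

def three_phase_fit(years, counts):
    # Grid search over two breakpoints, minimizing total squared error
    # of three per-segment lines fitted to log(annual counts).
    y = np.asarray(years, dtype=float)
    z = np.log(np.asarray(counts, dtype=float))
    best = None
    for i in range(2, len(y) - 4):           # first breakpoint
        for j in range(i + 2, len(y) - 2):   # second breakpoint
            sse, rates = 0.0, []
            for lo, hi in ((0, i), (i, j), (j, len(y))):
                s, slope = _segment_fit(y[lo:hi], z[lo:hi])
                sse += s
                rates.append(np.exp(slope) - 1)  # annual growth rate
            if best is None or sse < best[0]:
                best = (sse, y[i], y[j], rates)
    return best  # (sse, breakpoint1, breakpoint2, [rate per phase])
```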
Bornmann, Stefaner, de Moya Anegon, and Mutz (in press) have introduced a web application (www.excellencemapping.net) which is linked both to the academic ranking lists published hitherto (e.g. the Academic Ranking of World Universities) and to spatial visualization approaches. The web application visualizes institutional performance within specific subject areas as ranking lists and on custom tile-based maps. This paper describes the new, substantially enhanced version of the web application and the multilevel logistic regression on which it is based. The application uses Scopus data collected for the SCImago Institutions Ranking. Only those universities and research-focused institutions are considered that published at least 500 articles, reviews, and conference papers in a given Scopus subject area in the period 2006 to 2010. In the enhanced version, the effect of single covariates (such as the per capita GDP of the country in which an institution is located) on two performance metrics (best paper rate and best journal rate) is examined and visualized. A covariate-adjusted ranking and mapping of the institutions is produced in which the single covariates are held constant. The results on the performance of institutions can then be interpreted as if the institutions all had the same value (reference point) for the covariate in question. For example, institutions can be identified worldwide that perform very well despite an unfavourable financial situation in the corresponding country.
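A covariate adjustment of this kind can be sketched as follows. Here a plain binomial GLM stands in for the paper's multilevel logistic model, and the estimated per-capita-GDP effect is removed on the logit scale so every institution is evaluated at the same reference GDP. The data frame, column names, and reference point are illustrative assumptions, not the authors' data or exact model.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
from scipy.special import expit, logit

# Illustrative data: one row per institution (numbers are made up).
df = pd.DataFrame({
    "best_papers": [120, 45, 300],    # papers in the top citation class
    "papers":      [900, 520, 2100],  # total output, 2006-2010
    "gdp_pc":      [48.0, 9.5, 41.0], # per-capita GDP (thousand USD)
})

# Binomial GLM of the best-paper rate on per-capita GDP; a single-level
# stand-in for the multilevel logistic regression described above.
X = sm.add_constant(df[["gdp_pc"]])
y = np.column_stack([df["best_papers"], df["papers"] - df["best_papers"]])
fit = sm.GLM(y, X, family=sm.families.Binomial()).fit()

# Hold the covariate constant: subtract the estimated GDP effect on the
# logit scale, using the mean GDP as the reference point.
beta = fit.params["gdp_pc"]
observed = df["best_papers"] / df["papers"]
df["adjusted_rate"] = expit(logit(observed) - beta * (df["gdp_pc"] - df["gdp_pc"].mean()))
print(df[["gdp_pc", "adjusted_rate"]])
```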
Properties of a percentile-based rating scale needed in bibliometrics are formulated. Based on these properties, P100 was recently introduced as a new citation-rank approach (Bornmann, Leydesdorff, & Wang, in press). In this paper, we conceptualize P100 and propose an improvement which we call P100'. Advantages and disadvantages of citation-rank indicators are noted.
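A minimal sketch of the two citation-rank variants, under the assumption that P100 scales a paper's rank among the distinct citation values in its reference set (lowest value 0, highest 100) and that the proposed improvement scales the number of papers with strictly fewer citations; function names and tie-handling details are illustrative.

```python
def p100(citations):
    # Rank among *distinct* citation values, scaled to 0..100.
    distinct = sorted(set(citations))
    rank = {v: i for i, v in enumerate(distinct)}
    top = max(len(distinct) - 1, 1)  # guard against all-equal sets
    return [100 * rank[c] / top for c in citations]

def p100_prime(citations):
    # Assumed improvement: scale the count of papers with fewer citations,
    # so the size of the reference set, not just its distinct values, matters.
    n = len(citations)
    return [100 * sum(x < c for x in citations) / (n - 1) for c in citations]

cites = [0, 1, 1, 3, 8]
print(p100(cites))        # [0.0, 33.3..., 33.3..., 66.6..., 100.0]
print(p100_prime(cites))  # [0.0, 25.0, 25.0, 75.0, 100.0]
```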
The web application presented in this paper allows for an analysis that reveals centres of excellence in different fields worldwide, using publication and citation data. Only specific aspects of institutional performance are taken into account; other aspects, such as teaching performance or the societal impact of research, are not considered. Based on data gathered from Scopus, field-specific excellence can be identified in institutions where highly cited papers have been published frequently. The web application combines a list of institutions ordered by different indicator values with a map on which circles visualize the indicator values for geocoded institutions. Compared with the mapping and ranking approaches introduced hitherto, our underlying statistics (multilevel models) are analytically oriented in that they allow (1) the estimation of values for the number of excellent papers of an institution which are statistically more appropriate than the observed values; (2) the calculation of confidence intervals as measures of accuracy for the institutional citation impact; (3) the comparison of a single institution with an average institution in a subject area; and (4) the direct comparison of two or more institutions.
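Two of the four statistics listed above can be sketched directly. Below, a Wilson score interval serves as the accuracy measure for an institution's excellent-paper rate, and a simple shrinkage toward the subject-area mean stands in for the multilevel model's estimated values; the prior weight and all numbers are illustrative assumptions, not the paper's exact estimator.

```python
import math

def wilson_interval(k, n, z=1.96):
    # 95% Wilson score interval for a rate of k successes out of n papers.
    p = k / n
    denom = 1 + z**2 / n
    centre = (p + z**2 / (2 * n)) / denom
    half = z * math.sqrt(p * (1 - p) / n + z**2 / (4 * n**2)) / denom
    return centre - half, centre + half

def shrunken_rate(k, n, field_rate, prior_weight=50):
    # Pull small institutions toward the field mean; large n dominates.
    return (k + prior_weight * field_rate) / (n + prior_weight)

lo, hi = wilson_interval(k=80, n=600)
print(f"95% CI: [{lo:.3f}, {hi:.3f}]; shrunken: {shrunken_rate(80, 600, 0.10):.3f}")
```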
Percentiles have been established in bibliometrics as an important alternative to mean-based indicators for obtaining a normalized citation impact of publications. Percentiles have a number of advantages over frequently used standard bibliometric indicators: for example, their calculation is not based on the arithmetic mean, which should not be used for skewed bibliometric data. This study describes the opportunities and limits, as well as the advantages and disadvantages, of using percentiles in bibliometrics. We also address problems in the calculation of percentiles and percentile rank classes for which there is not (yet) a satisfactory solution. It will be hard to compare the results of different percentile-based studies with each other unless it is clear that they were produced with the same choices for percentile calculation and rank assignment.
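The comparability problem can be made concrete: the same citation distribution yields different percentile ranks depending on how ties and the rank definition are handled. The sketch below uses scipy's percentileofscore, whose kind argument covers four common conventions; the example counts are made up.

```python
import numpy as np
from scipy.stats import percentileofscore

# Skewed, tie-heavy citation counts, typical of bibliometric data.
cites = np.array([0, 0, 1, 3, 3, 3, 7, 12, 12, 40])

# The same paper receives a different percentile under each convention,
# which is why studies must report their calculation choices.
for kind in ("rank", "weak", "strict", "mean"):
    scores = [percentileofscore(cites, c, kind=kind) for c in (3, 12)]
    print(f"{kind:>6}: {scores}")
```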
In 2005, Jorge Hirsch introduced the h index for quantifying the research output of scientists. Today, the h index is a widely accepted indicator of research performance. However, the h index has been criticized for insufficient reliability, that is, a limited ability to discriminate reliably between meaningful differences in research performance. Taking as an example an extensive data set with bibliometric data on scientists working in the field of molecular biology, we compute h2 lower, h2 upper, and sRM values and present them as complementary approaches that improve the reliability of research performance measurement with the h index.
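For reference, the h index itself is simple to compute: it is the largest h such that h of a scientist's papers have at least h citations each. A minimal sketch follows; the h2 lower, h2 upper, and sRM variants are not reproduced here, since their exact definitions are not given in the abstract.

```python
def h_index(citations):
    """Largest h such that h papers have at least h citations each."""
    ranked = sorted(citations, reverse=True)
    h = 0
    for i, c in enumerate(ranked, start=1):
        if c >= i:
            h = i
        else:
            break
    return h

# Four papers have at least 4 citations each, but not five with at least 5.
assert h_index([10, 8, 5, 4, 3]) == 4
```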