We study the citation dynamics of Physics, Economics, and Mathematics papers published in 1984, focusing on the fraction of uncited papers in these three collections. Our model of citation dynamics, which treats the citation process as an inhomogeneous Poisson process, captures this uncitedness ratio fairly well. Notably, all parameters and variables in our model relate to citations and their dynamics; uncited papers appear as a byproduct of the citation process, and it is the Poisson statistics that makes cited and uncited papers inseparable. This indicates that most uncited papers are an inherent part of the scientific enterprise; in particular, uncited papers are not unread.
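The mechanism can be illustrated with a minimal simulation (a sketch only: the lognormal fitness distribution, the exponential aging function, and all parameter values below are assumptions for illustration, not the paper's calibrated model). Even papers with a strictly positive citation rate can end up uncited purely through Poisson chance:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative parameter choices; the paper's calibrated values are not
# reproduced here.
n_papers = 10_000
years = 25
fitness = rng.lognormal(mean=0.0, sigma=1.0, size=n_papers)  # latent paper fitness
aging = np.exp(-np.arange(years) / 5.0)                      # obsolescence factor A(t)

# Citations in year t follow Poisson(fitness * A(t)): an inhomogeneous
# Poisson process in discrete time.
rates = np.outer(fitness, aging)
citations = rng.poisson(rates)

uncited_fraction = (citations.sum(axis=1) == 0).mean()
print(f"uncited fraction: {uncited_fraction:.3f}")
```

Under this sketch, the uncited fraction is determined by the same parameters that govern the citation dynamics of the cited papers, with no separate "unread" population needed.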
We present a comprehensive framework that accounts for the citation dynamics of scientific papers and for the age distribution of references. We show that citation dynamics is nonlinear, and that this nonlinearity has far-reaching consequences, such as diverging citation distributions and runaway papers. We propose a nonlinear stochastic model of citation dynamics based on a link copying/redirection mechanism. The model is fully calibrated with empirical data and contains no free parameters. It can serve as a basis for quantitative, probabilistic prediction of the citation dynamics of individual papers and of journal impact factors.
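The copying/redirection mechanism can be sketched as follows (a toy version: the probability value and reference count below are assumptions, and the paper's calibrated model also includes fitness and aging terms omitted here). A new paper either cites an earlier paper directly or copies one of that paper's own references:

```python
import random
from collections import Counter

random.seed(1)

# Toy copying/redirection model; parameter values are assumptions.
P_COPY = 0.6         # probability of copying a reference of the chosen paper
REFS_PER_PAPER = 3
references = [[]]    # references[i] = list of papers cited by paper i

for new in range(1, 5000):
    cited = set()
    while len(cited) < min(REFS_PER_PAPER, new):
        donor = random.randrange(new)                    # pick an earlier paper
        if references[donor] and random.random() < P_COPY:
            cited.add(random.choice(references[donor]))  # copy one of its references
        else:
            cited.add(donor)                             # cite it directly
    references.append(sorted(cited))

# Copying makes a paper's citation rate grow with its citation count,
# which is the source of the nonlinearity and of heavy-tailed citation
# distributions.
counts = Counter(c for refs in references for c in refs)
print("most-cited paper has", max(counts.values()), "citations")
```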
Like it or not, attempts to evaluate and monitor the quality of academic research have become increasingly prevalent worldwide. Performance reviews range from the level of individuals, through research groups and departments, to entire universities. Many of these are informed by, or are functions of, simple scientometric indicators, and the results of such exercises affect careers, funding, and prestige. However, there is sometimes a failure to appreciate that scientometrics are, at best, very blunt instruments whose incorrect usage can be misleading. Rather than accepting the rise and fall of individuals and institutions on the basis of such imprecise measures, calls have been made for indicators to be regularly scrutinised and for improvements to the evidence base in this area. It is thus incumbent upon the scientific community, especially the physics, complexity-science, and scientometrics communities, to scrutinise metric indicators. Here, we review recent attempts to do this and show that some metrics in widespread use cannot be regarded as reliable indicators of research quality.
Someone looking for a publication in the field of computer science is likely to use DBLP to find it. The DBLP data set is continuously extended with new publications, or rather their metadata, such as the names of the authors involved, the title, and the publication date. While the size of the data set is already remarkable, specific areas can still be improved. DBLP offers a huge collection of English papers, since most papers in computer science are published in English. Nevertheless, there are official publications in other languages that should be added to the data set; Japanese papers are one such case. This diploma thesis shows a way to automatically process publication lists of Japanese papers and to prepare them for import into the DBLP data set. Of particular importance are the problems that arise along the way, such as transcription handling and Personal Name Matching with Japanese names.
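A rough illustration of the matching problem (a toy sketch: the transcription table, names, and normalization below are stand-ins; the thesis's actual pipeline would use proper transliteration tools and bibliographic context). Japanese names are written family-name-first, so token order must be ignored when comparing against romanized DBLP entries:

```python
import unicodedata

# Hypothetical mini transcription table; a real pipeline would use a
# transliteration library instead.
ROMAJI = {"田中": "tanaka", "太郎": "taro"}

def transcribe(japanese_name: str) -> str:
    """Map each name part to a romanized form via the (toy) table."""
    return " ".join(ROMAJI.get(part, part) for part in japanese_name.split())

def tokens(name: str) -> list[str]:
    """Normalize width/case, drop punctuation, and sort the name parts,
    since Japanese name order differs from romanized entries."""
    name = unicodedata.normalize("NFKC", name).lower()
    return sorted("".join(ch for ch in p if ch.isalnum()) for p in name.split())

# Match a Japanese-script author against a DBLP-style romanized entry.
print(tokens(transcribe("田中 太郎")) == tokens("Taro Tanaka"))  # True
```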
The scale and scope of scholarly articles today are overwhelming human researchers who seek to digest and synthesize knowledge in a timely manner. In this paper, we develop natural language processing (NLP) models to accelerate the extraction of relationships from scholarly papers in the social sciences, identify hypotheses in these papers, and extract the cause-and-effect entities. Specifically, we develop models to 1) classify sentences in scholarly documents in business and management as hypotheses (hypothesis classification), 2) classify these hypotheses as causal relationships or not (causality classification), and, if they are causal, 3) extract the cause and effect entities from these hypotheses (entity extraction). We achieve high performance on all three tasks using different modeling techniques. Our approach may generalize to scholarly documents in a wide range of social sciences, as well as to other types of textual material.
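A minimal sketch of the first stage (hypothesis classification), assuming a simple bag-of-words classifier; the invented example sentences below are not the paper's corpus, and the paper's actual models are not reproduced here:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy labeled sentences (invented examples for illustration only).
sentences = [
    "We hypothesize that CEO tenure increases firm risk taking.",
    "H2: Board diversity reduces earnings management.",
    "Data were collected from annual reports between 2001 and 2010.",
    "Table 3 reports the descriptive statistics.",
]
is_hypothesis = [1, 1, 0, 0]

# Stage 1: hypothesis classification.
clf = make_pipeline(TfidfVectorizer(), LogisticRegression())
clf.fit(sentences, is_hypothesis)

# Stages 2 and 3 would follow the same pattern: a causal/non-causal
# classifier over the detected hypotheses, then a sequence tagger
# (e.g., BIO labels) to extract the cause and effect spans.
print(clf.predict(["We predict that R&D intensity increases firm value."]))
```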
The transition to a future electricity system based primarily on wind and solar PV is examined for all regions in the contiguous US. We present optimized pathways for the build-up of wind and solar power, both for least backup energy needs and for least cost, obtained with a simplified, lightweight model based on long-term, high-resolution, weather-determined generation data. In the absence of storage, the pathway that best matches generation to load, and thus requires the least backup energy, generally favors a combination of both technologies, with a wind/solar PV energy mix of about 80/20 in a fully renewable scenario. The least-cost development starts with 100% of the technology with the lowest average generation cost, but as renewable installations grow, economically unfavorable excess generation pushes it toward the minimal-backup pathway. Surplus generation and the costs it entails can be reduced significantly by combining wind and solar power, by absorbing excess generation (for example with storage or transmission), or by coupling the electricity system to other energy sectors.
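The least-backup criterion can be illustrated with a toy calculation (a sketch only: the synthetic hourly profiles below stand in for the study's weather-determined generation data, and the cost model is omitted). Backup energy is the summed shortfall whenever renewable generation falls below load:

```python
import numpy as np

rng = np.random.default_rng(2)
hours = np.arange(24 * 365)

# Synthetic hourly profiles standing in for weather-determined data.
load  = 1.0 + 0.2 * np.sin(2 * np.pi * hours / 24)
solar = np.clip(np.sin(2 * np.pi * (hours % 24 - 6) / 24), 0, None)
wind  = np.clip(0.5 + 0.3 * rng.standard_normal(hours.size), 0, None)

def backup_energy(wind_share: float, gamma: float = 1.0) -> float:
    """Backup energy when mean renewable generation is gamma * mean load."""
    mix = wind_share * wind / wind.mean() + (1 - wind_share) * solar / solar.mean()
    residual = load - gamma * load.mean() * mix
    return residual.clip(min=0).sum()    # generation shortfalls need backup

shares = np.linspace(0, 1, 101)
best = min(shares, key=backup_energy)
print(f"backup-minimizing wind share: {best:.2f}")
```

Sweeping the wind share in this way is the sketch analogue of the mix optimization in the study; with real generation data the minimum lies near the 80/20 wind/solar mix quoted above.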