
Integration of Japanese Papers Into the DBLP Data Set

Published by: Paul Christian Sommerhoff
Publication date: 2017
Research field: Informatics Engineering
Language: English





If someone is looking for a certain publication in the field of computer science, they are likely to use DBLP to find it. The DBLP data set is continuously extended with new publications, or rather their metadata, such as the names of the involved authors, the title, and the publication date. While the size of the data set is already remarkable, specific areas can still be improved. DBLP offers a huge collection of English papers because most computer science papers are published in English. Nevertheless, there are official publications in other languages that should be added to the data set as well. One such group is Japanese papers. This diploma thesis shows a way to automatically process publication lists of Japanese papers and make them ready for import into the DBLP data set. Especially important are the problems encountered along the way, such as transcription handling and Personal Name Matching with Japanese names.
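The thesis names transcription handling and Personal Name Matching as the core difficulties, and a toy example makes clear why: the same Japanese name can surface in several romanizations. Below is a minimal Python sketch, not taken from the thesis, of one common pre-matching step: normalizing macrons, long-vowel spellings, and a few Kunrei-shiki vs. Hepburn variants to a single key before comparing names against existing author records. The variant rules are illustrative assumptions and far from complete.

```python
# Minimal sketch (not from the thesis): collapsing common romanization
# variants of a Japanese name to one canonical key before matching.
# The variant rules below are illustrative and incomplete.
import re
import unicodedata

def normalize_romaji(name: str) -> str:
    """Map common transcription variants of a romanized name to one key."""
    # Strip macrons: "Satō" -> "Sato".
    name = unicodedata.normalize("NFKD", name)
    name = "".join(c for c in name if not unicodedata.combining(c))
    name = name.lower()
    # Collapse long-vowel spellings: "satou" / "satoh" -> "sato".
    name = re.sub(r"ou\b", "o", name)
    name = re.sub(r"oh\b", "o", name)
    # A few Kunrei-shiki -> Hepburn equivalences ("si" -> "shi", ...).
    for pattern, repl in [(r"si", "shi"), (r"ti", "chi"), (r"tu", "tsu"),
                          (r"(?<![sc])hu", "fu"), (r"zi", "ji")]:
        name = re.sub(pattern, repl, name)
    return name

# All three spellings of the same surname collapse to the same key.
assert normalize_romaji("Satō") == normalize_romaji("Satou") == normalize_romaji("Satoh")
```

A real matcher would combine such normalized keys with further evidence (coauthors, venues, affiliations), since distinct people can share a normalized name.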


Read also

Japan is a unique country with a distinct cultural heritage, which is reflected in billions of historical documents that have been preserved. However, the change in the Japanese writing system in 1900 made these documents inaccessible to the general public. A major research effort has been to make these historical documents accessible and understandable. An increasing amount of research has focused on the character recognition task and on locating characters in the image, yet less research has focused on predicting the sequential ordering of the characters. This is because the reading order in classical Japanese is very different from that of modern Japanese. Ordering characters into a sequence is important for making the document text easily readable and searchable. Additionally, it is a necessary step for any kind of natural language processing on the data (e.g. machine translation, language modeling, and word embeddings). We explore a few approaches to the task of predicting the sequential ordering of the characters: one using simple hand-crafted rules, another using hand-crafted rules with adaptive thresholds, and another using a deep recurrent sequence model trained with teacher forcing. We provide a quantitative and qualitative comparison of these techniques as well as their distinct trade-offs. Our best-performing system has an accuracy of 98.65% and perfect accuracy on 49% of the books in our dataset, suggesting that the technique can predict the order of the characters well enough for many tasks.
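As a concrete illustration of the simplest of the three approaches, here is a hedged sketch of a hand-crafted ordering rule: classical Japanese text runs top-to-bottom within a column, and columns run right-to-left, so detected character boxes can be grouped into columns by their horizontal centers and read off in that order. The (x, y, w, h) box format and the fixed column_gap threshold are assumptions for illustration; the adaptive-threshold variant mentioned above would estimate such a gap from the page itself.

```python
# Hand-crafted-rules baseline (a sketch under assumptions, not the
# paper's code): order character boxes top-to-bottom within columns,
# columns right-to-left, as classical Japanese is read.
def order_characters(boxes, column_gap=30):
    """boxes: list of (x, y, w, h) pixel boxes; returns them in reading order."""
    if not boxes:
        return []
    # Walk boxes from the rightmost horizontal center leftwards.
    by_x = sorted(boxes, key=lambda b: -(b[0] + b[2] / 2))
    columns, current = [], [by_x[0]]
    for box in by_x[1:]:
        prev_cx = current[-1][0] + current[-1][2] / 2
        cx = box[0] + box[2] / 2
        if prev_cx - cx > column_gap:   # large leftward jump -> new column
            columns.append(current)
            current = [box]
        else:
            current.append(box)
    columns.append(current)
    # Within each column, read top to bottom.
    return [b for col in columns for b in sorted(col, key=lambda b: b[1])]
```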
The scale and scope of scholarly articles today are overwhelming human researchers who seek to digest and synthesize knowledge in a timely manner. In this paper, we develop natural language processing (NLP) models to accelerate the extraction of relationships from scholarly papers in the social sciences, identify hypotheses in these papers, and extract the cause-and-effect entities. Specifically, we develop models to 1) classify sentences in scholarly documents in business and management as hypotheses (hypothesis classification), 2) classify these hypotheses as causal relationships or not (causality classification), and, if they are causal, 3) extract the cause and effect entities from these hypotheses (entity extraction). We have achieved high performance on all three tasks using different modeling techniques. Our approach may generalize to scholarly documents across a wide range of social sciences, as well as to other types of textual materials.
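The paper reports high performance with "different modeling techniques" without tying itself to one architecture, so the following is only a shape-of-the-task sketch: a TF-IDF plus logistic-regression baseline for the first task, hypothesis classification. The training sentences and labels are made-up placeholders.

```python
# Shape-of-the-task sketch for hypothesis classification (task 1); the
# paper does not publish its models, and the training pairs below are
# made-up placeholders, so treat this as a baseline outline only.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

sentences = [
    "We hypothesize that firm size increases innovation output.",
    "Data were collected from 120 manufacturing firms.",
]
labels = [1, 0]  # 1 = hypothesis sentence, 0 = other sentence

clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
clf.fit(sentences, labels)
print(clf.predict(["H1: Managerial attention reduces project delays."]))
```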
Using the ESO Telescope Bibliography database telbib, we have investigated the percentage of ESO data papers that were submitted to the arXiv/astro-ph e-print server and that are therefore free to read. Our study revealed an availability of up to 96% of telbib papers on arXiv over the years 2010 to 2017. We also compared the citation counts of arXiv vs. non-arXiv papers and found that on average, papers submitted to arXiv are cited 2.8 times more often than those not on arXiv. While simulations suggest that these findings are statistically significant, we cannot yet draw firm conclusions as to the main cause of these differences.
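The significance claim rests on simulations, and a permutation test is one standard way to run such a check: repeatedly reshuffle the pooled citation counts between the two groups and see how often a mean ratio as large as the observed one arises by chance. The sketch below uses made-up placeholder counts, not telbib data.

```python
# Permutation test on the mean-citation ratio (arXiv vs. non-arXiv).
# The citation counts are made-up placeholders, not telbib data.
import numpy as np

rng = np.random.default_rng(0)
arxiv = np.array([12, 30, 5, 44, 9, 21])
non_arxiv = np.array([3, 8, 1, 10, 6, 2])

observed = arxiv.mean() / non_arxiv.mean()
pooled = np.concatenate([arxiv, non_arxiv])
ratios = []
for _ in range(10_000):
    rng.shuffle(pooled)                 # reassign papers to groups at random
    a, b = pooled[:len(arxiv)], pooled[len(arxiv):]
    ratios.append(a.mean() / b.mean())
p_value = np.mean(np.array(ratios) >= observed)
print(f"observed ratio {observed:.2f}, permutation p = {p_value:.4f}")
```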
We demonstrate a comprehensive framework that accounts for citation dynamics of scientific papers and for the age distribution of references. We show that citation dynamics of scientific papers is nonlinear and this nonlinearity has far-reaching consequences, such as diverging citation distributions and runaway papers. We propose a nonlinear stochastic dynamic model of citation dynamics based on link copying/redirection mechanism. The model is fully calibrated by empirical data and does not contain free parameters. This model can be a basis for quantitative probabilistic prediction of citation dynamics of individual papers and of the journal impact factor.
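A toy simulation conveys the copying/redirection mechanism: each new paper cites a few existing papers directly and then, with some probability, also copies references of the papers it cited, which preferentially reinforces already-cited papers. The parameter values below are illustrative assumptions, not the calibrated values from the paper.

```python
# Toy link copying/redirection simulation; P_COPY and DIRECT_REFS are
# illustrative assumptions, not the paper's calibrated parameters.
import random

random.seed(1)
P_COPY = 0.4        # chance of copying each reference of a cited paper
DIRECT_REFS = 3     # direct references drawn per new paper

refs = {0: []}      # paper id -> its reference list
citations = {0: 0}  # paper id -> citations received so far
for new in range(1, 2000):
    targets = set(random.sample(list(refs), min(DIRECT_REFS, len(refs))))
    for t in list(targets):
        for r in refs[t]:                    # redirection step
            if random.random() < P_COPY:
                targets.add(r)
    refs[new] = list(targets)
    citations[new] = 0
    for t in targets:
        citations[t] += 1

print("most-cited counts:", sorted(citations.values(), reverse=True)[:5])
```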
Michael Golosovsky, 2020
We study the citation dynamics of Physics, Economics, and Mathematics papers published in 1984 and focus on the fraction of uncited papers in these three collections. Our model of citation dynamics, which treats the citation process as an inhomogeneous Poisson process, captures this uncitedness ratio fairly well. Notably, all parameters and variables in our model are related to citations and their dynamics, while uncited papers appear as a byproduct of the citation process, and it is the Poisson statistics that makes cited and uncited papers inseparable. This indicates that most uncited papers constitute an inherent part of the scientific enterprise; namely, uncited papers are not unread.
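The inseparability claim has a compact quantitative core: if citations to a paper follow an inhomogeneous Poisson process with cumulative rate Lambda, the probability of remaining uncited is exp(-Lambda), i.e. the zero class of the very same statistics that governs cited papers. The sketch below checks this numerically; the lognormal fitness distribution and the obsolescence parameters are assumptions for illustration, not the paper's fitted values.

```python
# Numerical check: with an inhomogeneous Poisson citation process, the
# uncited fraction equals the Poisson zero class, mean of exp(-Lambda).
# Fitness distribution and obsolescence parameters are assumptions.
import numpy as np

rng = np.random.default_rng(42)
n_papers = 100_000
eta = rng.lognormal(mean=0.0, sigma=1.1, size=n_papers)  # paper fitness
T, tau = 35.0, 3.0        # observation window (years), obsolescence time
Lambda = eta * tau * (1 - np.exp(-T / tau))  # integrated citation rate
n_cites = rng.poisson(Lambda)

print(f"simulated uncited fraction: {(n_cites == 0).mean():.3f}")
print(f"exp(-Lambda) prediction:    {np.exp(-Lambda).mean():.3f}")
```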