
Using Full-text Content of Academic Articles to Build a Methodology Taxonomy of Information Science in China

Posted by: Chengzhi Zhang
Publication date: 2021
Research field: Information Engineering
Paper language: English





Research on constructing a methodology taxonomy for information science has mostly been conducted manually. Working from limited corpora, researchers have summarized research methodology entities into a few abstract levels (generally three) but have been unable to provide a more granular hierarchy. Moreover, updating the methodology taxonomy has traditionally been a slow process. In this study, we collected full-text academic papers related to information science. First, we constructed a basic three-level methodology taxonomy by manual annotation. Then, word vectors for the research methodology entities were trained on the full-text data. The research methodology entities were then clustered, and the basic taxonomy was expanded using the clustering results to obtain a methodology taxonomy with more levels. This study offers a new approach to constructing a methodology taxonomy of information science: the proposed taxonomy is built semi-automatically, is more detailed than conventional schemes, and can be renewed faster.
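The abstract's pipeline (represent each methodology entity as a vector learned from full text, cluster the entities, then attach clusters as finer taxonomy levels) can be illustrated with a minimal sketch. The paper's actual code, entity list, and clustering method are not given here; the sketch below substitutes simple context-count vectors for trained word embeddings and a greedy single-link grouping for the paper's clustering, and all sentences and entity names are toy examples.

```python
# Minimal sketch: cluster methodology entities by the similarity of
# their textual contexts, as a stand-in for word-vector clustering.
from collections import Counter
from math import sqrt

def context_vector(entity, sentences, window=3):
    """Bag-of-words counts of terms co-occurring with `entity`."""
    vec = Counter()
    for sent in sentences:
        toks = sent.lower().split()
        for i, tok in enumerate(toks):
            if tok == entity:
                context = toks[max(0, i - window):i] + toks[i + 1:i + 1 + window]
                vec.update(context)
    return vec

def cosine(a, b):
    """Cosine similarity between two sparse count vectors."""
    dot = sum(a[k] * b[k] for k in a if k in b)
    na = sqrt(sum(v * v for v in a.values()))
    nb = sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def cluster(entities, vectors, threshold=0.2):
    """Greedy single-link grouping: an entity joins the first cluster
    containing a member whose context similarity exceeds `threshold`."""
    clusters = []
    for e in entities:
        for c in clusters:
            if any(cosine(vectors[e], vectors[m]) >= threshold for m in c):
                c.append(e)
                break
        else:
            clusters.append([e])
    return clusters

# Toy sentences standing in for full-text papers (illustrative only).
sentences = [
    "we apply questionnaire survey to collect user data",
    "the interview survey and questionnaire gather user data",
    "a regression model fits citation counts",
    "the regression analysis model predicts citation counts",
]
entities = ["questionnaire", "interview", "regression"]
vectors = {e: context_vector(e, sentences) for e in entities}
groups = cluster(entities, vectors)
# Survey-style methods group together; "regression" stays separate,
# and each group could become a new, finer level under a taxonomy node.
```

In the paper's actual setting, the context-count vectors would be replaced by embeddings trained on the full-text corpus, and the resulting clusters would be grafted under the manually built three-level taxonomy.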


Read also

In the era of big data, the advancement, improvement, and application of algorithms in academic research have played an important role in promoting the development of different disciplines. Academic papers in various disciplines, especially computer science, contain a large number of algorithms. Identifying algorithms in the full-text content of papers can reveal popular or classical algorithms in a specific field and help scholars gain a comprehensive understanding of the algorithms and even the field. To this end, this article takes the field of natural language processing (NLP) as an example and identifies algorithms from academic papers in the field. A dictionary of algorithms is constructed by manually annotating the contents of papers, and sentences containing dictionary algorithms are extracted through dictionary-based matching. The number of articles mentioning an algorithm is used as an indicator of that algorithm's influence. Our results reveal the algorithm with the highest influence in NLP papers and show that classification algorithms represent the largest proportion among the high-impact algorithms. In addition, the evolution of algorithm influence reflects changes in the field's research tasks and topics, and the influence of different algorithms shows different trends over time. As a preliminary exploration, this paper analyzes the impact of algorithms mentioned in academic texts, and the results can be used as training data for the automatic extraction of algorithms at scale in the future. The methodology is domain-independent and can be applied to other domains.
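The dictionary-based pipeline described above (match algorithm names from a curated dictionary against paper full texts, then rank algorithms by how many papers mention them) can be sketched briefly. The dictionary entries and paper texts below are toy examples, not data from the study.

```python
# Sketch of dictionary-based algorithm matching and influence ranking.
import re
from collections import Counter

# Toy stand-in for the manually annotated algorithm dictionary.
ALGORITHM_DICT = {"lstm", "svm", "crf", "naive bayes"}

def algorithms_in(text):
    """Return the set of dictionary algorithms mentioned in one paper,
    using whole-word, case-insensitive matching."""
    low = text.lower()
    return {a for a in ALGORITHM_DICT
            if re.search(r"\b" + re.escape(a) + r"\b", low)}

def rank_by_influence(papers):
    """Count, per algorithm, how many papers mention it at least once;
    this paper-level count is the influence indicator."""
    counts = Counter()
    for text in papers:
        counts.update(algorithms_in(text))
    return counts.most_common()

papers = [
    "We train an LSTM tagger and compare it with a CRF baseline.",
    "An SVM classifier outperforms Naive Bayes on this benchmark.",
    "The CRF layer on top of the LSTM improves sequence labeling.",
]
ranking = rank_by_influence(papers)
```

Because each paper contributes a set (not raw token counts), an algorithm mentioned many times in one paper still counts once, matching the "number of articles mentioning an algorithm" indicator.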
The application of mathematics and statistical methods to scholarly communication, known as scientometrics, has facilitated the systematic analysis of the modern digital tide of literature. This chapter reviews three such applications: coauthorship, bibliographic coupling, and coword networks. It also presents an exploratory case study of the knowledge-circulation literature. The study found diverse geographical production, mainly from Global North and Asian institutions, with significant intermediation by universities from the USA, Colombia, and Japan. The research fronts identified related to the history and philosophy of science and medicine; education, health, and policy studies; and a set of interdisciplinary topics. Finally, the knowledge pillars comprised urban planning policy, economic geography, and historical and theoretical perspectives in the Netherlands and Central Europe; globalization and science, technology, and innovation, together with historical and institutional frameworks, in the UK; and cultural and learning studies in the twenty-first century.
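The three network types the chapter reviews can be built from the same bibliographic records: coauthorship and coword networks link items that co-occur within a paper, while bibliographic coupling links papers that share references. A minimal sketch, with an entirely invented two-paper dataset:

```python
# Building coauthorship, coword, and bibliographic-coupling networks
# from toy bibliographic records (all data below is illustrative).
from itertools import combinations
from collections import Counter

papers = [
    {"authors": ["Ana", "Bo"], "refs": ["r1", "r2"],
     "keywords": ["policy", "science"]},
    {"authors": ["Bo", "Chen"], "refs": ["r2", "r3"],
     "keywords": ["science", "history"]},
]

def pair_counts(items_per_paper):
    """Weighted undirected edges: how often each item pair co-occurs
    within a single paper (used for coauthorship and coword networks)."""
    edges = Counter()
    for items in items_per_paper:
        for a, b in combinations(sorted(set(items)), 2):
            edges[(a, b)] += 1
    return edges

coauthorship = pair_counts(p["authors"] for p in papers)
coword = pair_counts(p["keywords"] for p in papers)

# Bibliographic coupling: papers are linked by the number of
# references they share.
coupling = {
    (i, j): len(set(papers[i]["refs"]) & set(papers[j]["refs"]))
    for i, j in combinations(range(len(papers)), 2)
}
```

Cluster detection and centrality measures on these edge lists then yield the research fronts and intermediation findings the chapter reports.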
Cristel Chandre, 2021
Open Science, reproducible research, and the Findable, Accessible, Interoperable and Reusable (FAIR) data principles are long-term goals for scientific dissemination. However, implementing these principles calls for a reinspection of our means of dissemination. In this viewpoint, we discuss and advocate, in the context of nonlinear science, how a notebook article represents an essential step toward this objective by fully embracing cloud computing solutions. Notebook articles, as scholarly articles, offer an alternative, efficient, and more ethical way to disseminate research through their versatile environment. The format invites readers to delve deeper into the reported research. Through the interactivity of notebook articles, research results such as equations and figures are reproducible even for non-expert readers. The codes and methods are available, in a transparent manner, to interested readers. The methods can be reused and adapted to answer additional questions in related topics. The codes run on cloud computing services, which provide easy access, even for low-income countries and research groups. The versatility of this environment provides stakeholders, from researchers to publishers, with opportunities to disseminate research results in innovative ways.
With the rapid evolution of the cross-strait situation, Mainland China as a subject of social science study has recently evoked calls for Rethinking China Study among the intelligentsia. This essay applies an automatic content analysis tool (CATAR) to the journal Mainland China Studies (1998-2015) to observe research trends based on clustering the text of each paper's title and abstract. The results showed that the 473 articles published by the journal clustered into seven salient topics. Tracking each topic's publications over time (both volume and percentage of publications), two major topics dominate the journal, while the other topics varied widely over time. The contributions of this study include: 1. Each independent study could be grouped into a meaningful topic, and a small-scale experiment verified that this topic clustering is feasible. 2. The essay reveals the salient research topics and their trends for the Taiwan journal Mainland China Studies. 3. Various topical keywords were identified, providing easy access to past studies. 4. The yearly trends of the identified topics can be viewed as signatures of future research directions.
Trade and investment between developing regions such as China and Latin America (LATAM) are growing prominently. However, insights into crucial factors such as innovation in business and management (iBM) in both regions have not been scrutinized. This study presents the research output, impact, and structure of iBM research published about China and LATAM in a comparative framework using Google Scholar, Dimensions, and Microsoft Academic. Findings showed i) that iBM topics in both regions were framed within research and development management and technological development topics, ii) significant differences in output and impact between the regions, and iii) the same pattern across the platforms.