
ADS 2.0: new architecture, API and services

Published by: Roman Chyla
Publication date: 2015
Research field: Informatics Engineering
Language: English





The ADS platform is undergoing the biggest rewrite of its 20-year history. While several components have been added to its architecture over the past couple of years, this talk will concentrate on the underpinnings of ADS's search layer and its API. To illustrate the design of the components in the new system, we will show how the new ADS user interface is built exclusively on top of the API using RESTful web services. Taking one step further, we will discuss how we plan to expose the treasure trove of information hosted by ADS (10 million records and full text for much of the Astronomy and Physics refereed literature) to partners interested in using this API. This will provide you (and your intelligent applications) with access to ADS's underlying data to enable the extraction of new knowledge and the ingestion of these results back into the ADS. Using this framework, researchers could run controlled experiments with content extraction, machine learning, natural language processing, etc. In this talk, we will discuss what is already implemented, what will be available soon, and where we are going next.
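As a rough sketch of what API-driven access like this might look like, the snippet below constructs a search request against a RESTful endpoint and parses a Solr-style JSON response. The endpoint path, parameter names (`q`, `fl`, `rows`), and response shape are assumptions for illustration, not the documented ADS interface:

```python
import json
import urllib.parse

# Assumed endpoint for illustration; consult the ADS API docs for the real one.
API_BASE = "https://api.adsabs.harvard.edu/v1/search/query"

def build_search_url(query, fields=("bibcode", "title", "citation_count"), rows=10):
    """Build a RESTful search URL; the parameter names are illustrative."""
    params = {"q": query, "fl": ",".join(fields), "rows": rows}
    return API_BASE + "?" + urllib.parse.urlencode(params)

def parse_response(raw_json):
    """Extract the document list from a Solr-style JSON payload."""
    payload = json.loads(raw_json)
    return payload.get("response", {}).get("docs", [])

# Example: query for 2015 papers mentioning "machine learning" in the full text.
url = build_search_url('full:"machine learning" year:2015')
```

An actual client would send the request with an authorization token and iterate over pages of results; the sketch stops at request construction so it stays self-contained.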




Read also

The second quantum technological revolution started around 1980 with the control of single quantum particles and their interaction on an individual basis. These experimental achievements enabled physicists and engineers to utilize long-known quantum features - especially superposition and entanglement of single quantum states - for a whole range of practical applications. We use a publication set of 54,598 papers from the Web of Science published between 1980 and 2018 to investigate the time development of four main subfields of quantum technology in terms of numbers and shares of publications as well as the occurrence of topics and their relation to the 25 top contributing countries. Three successive time periods are distinguished in the analyses by their short doubling times in relation to the whole Web of Science. The periods can be characterized by the publication of pioneering works, the exploration of research topics, and the maturing of quantum technology, respectively. Compared to the US, China has a far over-proportional contribution to the worldwide publication output, but not in the segment of highly-cited papers.
With over 20 million records, the ADS citation database is regularly used by researchers and librarians to measure the scientific impact of individuals, groups, and institutions. In addition to the traditional sources of citations, the ADS has recently added references extracted from the arXiv e-prints on a nightly basis. We review the procedures used to harvest and identify the reference data used in the creation of citations, the policies and procedures that we follow to avoid double-counting and to eliminate contributions which may not be scholarly in nature. Finally, we describe how users and institutions can easily obtain quantitative citation data from the ADS, both interactively and via web-based programming tools. The ADS is available at http://ads.harvard.edu.
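As an illustration of the kind of quantitative impact measure such citation data enables, the sketch below computes an h-index from a list of per-paper citation counts. The input data is made up; a real workflow would obtain the counts via the ADS tools described above:

```python
def h_index(citation_counts):
    """Largest h such that at least h papers have >= h citations each."""
    counts = sorted(citation_counts, reverse=True)
    h = 0
    for rank, cites in enumerate(counts, start=1):
        if cites >= rank:
            h = rank  # the top `rank` papers each have at least `rank` citations
        else:
            break
    return h

# Example: five papers with these citation counts yield an h-index of 3.
print(h_index([10, 8, 5, 3, 0]))  # -> 3
```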
A preprint is a version of a scientific paper that is publicly distributed preceding formal peer review. Since the launch of arXiv in 1991, preprints have been increasingly distributed over the Internet as opposed to paper copies, allowing open online access that disseminates original research within a few days, often at a very low operating cost. This work overviews how preprints have been evolving and impacting the research community over the past thirty years alongside the growth of the Web. In this work, we first report that the number of preprints has increased exponentially, 63-fold in 30 years, although it only accounts for 4% of research articles. Second, we quantify the benefits that preprints bring to authors: preprints reach an audience 14 months earlier on average and associate with five times more citations compared with a non-preprint counterpart. Last, to address the quality concern of preprints, we discover that 41% of preprints are ultimately published at a peer-reviewed destination, and the published venues are as influential as those of papers without a preprint version. Additionally, we discuss the unprecedented role of preprints in communicating the latest research data during recent public health emergencies. In conclusion, we provide quantitative evidence to unveil the positive impact of preprints on individual researchers and the community. Preprints make scholarly communication more efficient by disseminating scientific discoveries more rapidly and widely with the aid of Web technologies. The measurements we present in this study can help researchers and policymakers make informed decisions about how to effectively use and responsibly embrace a preprint culture.
Increasing quantities of scientific data are becoming readily accessible via online repositories such as those provided by Figshare and Zenodo. Geoscientific simulations in particular generate large quantities of data, with several research groups studying many, often overlapping areas of the world. When studying a particular area, being able to keep track of one's own simulations as well as those of collaborators can be challenging. This paper describes the design, implementation, and evaluation of a new tool for visually cataloguing and retrieving data associated with a given geographical location through a web-based Google Maps interface. Each data repository is pin-pointed on the map with a marker based on the geographical location that the dataset corresponds to. By clicking on the markers, users can quickly inspect the metadata of the repositories and download the associated data files. The crux of the approach lies in the ability to easily query and retrieve data from multiple sources via a common interface. While many advances are being made in terms of scientific data repositories, the development of this new tool has uncovered several issues and limitations of the current state-of-the-art, which are discussed herein, along with some ideas for the future.
The 5G network systems are evolving and have complex network infrastructures. There is a great deal of work in this area focused on meeting the stringent service requirements for the 5G networks. Within this context, security requirements play a critical role, as 5G networks can support a range of services such as healthcare services, financial services, and critical infrastructures. 3GPP and ETSI have been developing security frameworks for 5G networks. Our work in 5G security has been focusing on the design of security architecture and mechanisms enabling dynamic establishment of secure and trusted end-to-end services as well as development of mechanisms to proactively detect and mitigate security attacks in virtualised network infrastructures. The focus of this paper is on the latter, namely the facilities and mechanisms, and the design of a security architecture providing facilities and mechanisms to detect and mitigate specific security attacks. We have developed and implemented a simplified version of the security architecture using Software Defined Networks (SDN) and Network Function Virtualisation (NFV) technologies. The specific security functions developed in this architecture can be directly integrated into the 5G core network facilities, enhancing its security. We describe the design and implementation of the security architecture and demonstrate how it can efficiently mitigate specific types of attacks.