
Connecting web-based mapping services with scientific data repositories: collaborative curation and retrieval of simulation data via a geospatial interface

Added by Christian Jacobs
Publication date: 2016
Language: English





Increasing quantities of scientific data are becoming readily accessible via online repositories such as those provided by Figshare and Zenodo. Geoscientific simulations in particular generate large quantities of data, with several research groups studying many, often overlapping, areas of the world. When studying a particular area, keeping track of one's own simulations as well as those of collaborators can be challenging. This paper describes the design, implementation, and evaluation of a new tool for visually cataloguing and retrieving data associated with a given geographical location through a web-based Google Maps interface. Each data repository is pinpointed on the map with a marker placed at the geographical location that the dataset corresponds to. By clicking on the markers, users can quickly inspect the metadata of the repositories and download the associated data files. The crux of the approach lies in the ability to easily query and retrieve data from multiple sources via a common interface. While many advances are being made in terms of scientific data repositories, the development of this new tool has uncovered several issues and limitations of the current state of the art, which are discussed herein, along with some ideas for the future.
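The core idea described in the abstract, turning repository records that carry geospatial metadata into map markers with metadata and download links, can be sketched as follows. This is a hypothetical illustration, not the paper's actual implementation: the record structure (`location`, `files`, `url` fields) is an invented stand-in for whatever schema Figshare or Zenodo actually return, and the marker dicts are simply what a front end such as the Google Maps JavaScript API could consume.

```python
# Hypothetical sketch: convert repository records that carry latitude/longitude
# metadata into marker dicts for a web map. Records lacking geospatial
# metadata are skipped. The record schema here is invented for illustration.

def records_to_markers(records):
    """Turn repository records into map-marker dicts (lat, lon, title, URL)."""
    markers = []
    for rec in records:
        loc = rec.get("location")
        if not loc:  # no geospatial metadata, so nothing to pin on the map
            continue
        files = rec.get("files") or [{}]
        markers.append({
            "lat": loc["latitude"],
            "lon": loc["longitude"],
            "title": rec.get("title", "Untitled dataset"),
            "download_url": files[0].get("url"),
        })
    return markers

# Example: two records, one of which lacks a location and is skipped.
sample = [
    {"title": "Tsunami simulation, Tohoku",
     "location": {"latitude": 38.3, "longitude": 142.4},
     "files": [{"url": "https://zenodo.org/record/12345/files/run1.h5"}]},
    {"title": "No geotag", "files": []},
]
markers = records_to_markers(sample)
print(len(markers))  # 1
```

The common-interface aspect of the tool would amount to one such adapter per repository service, each normalizing that service's metadata into this shared marker shape.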



Related research

In all domains and sectors, the demand for intelligent systems to support the processing and generation of digital content is rapidly increasing. The availability of vast amounts of content, and the pressure to publish new content quickly and in rapid succession, requires faster, more efficient and smarter processing and generation methods. With a consortium of ten partners from research and industry and a broad range of expertise in AI, Machine Learning and Language Technologies, the QURATOR project, funded by the German Federal Ministry of Education and Research, develops a sustainable and innovative technology platform that provides services to support knowledge workers in various industries in addressing the challenges they face when curating digital content. The project's vision and ambition is to establish an ecosystem for content curation technologies that significantly pushes the current state of the art and transforms its region, the Berlin-Brandenburg metropolitan area, into a global centre of excellence for curation technologies.
The data paper, an emerging scholarly genre, describes research datasets and is intended to bridge the gap between the publication of research data and scientific articles. Research examining how data papers report data events, such as data transactions and manipulations, is limited. The research reported in this paper addresses this limitation and investigated how data events are inscribed in data papers. A content analysis was conducted examining the full texts of 82 data papers, drawn from the curated list of data papers connected to the Global Biodiversity Information Facility (GBIF). Data events recorded for each paper were organized into a set of 17 categories. Many of these categories are described together in the same sentence, which indicates the messiness of data events in the laboratory space. The findings challenge the degree to which data papers are a distinct genre compared to research papers, and the extent to which they describe data-centric research processes in a thorough way. This paper also discusses how our results could inform a better data publication ecosystem in the future.
Rosa Alarcon, Erik Wilde (2010)
RESTful services on the Web expose information through retrievable resource representations that are self-describing, and through the way these resources are interlinked via the hyperlinks found in those representations. This basic design of RESTful services means that to extract the most useful information from a service, it is necessary to understand a service's representations: both their semantics in terms of describing a resource, and their semantics in terms of describing that resource's linkage with other resources. Based on the Resource Linking Language (ReLL), this paper describes a framework for how RESTful services can be described, and how these descriptions can then be used to harvest information from those services. Building on this framework, a layered model of RESTful service semantics allows a service's information to be represented in RDF/OWL. Because REST is based on the linkage between resources, the same model can be used for aggregating and interlinking multiple services, extracting RDF data from sets of RESTful services.
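The harvesting step the abstract describes, reading the links out of a resource representation and expressing them as RDF, can be illustrated with a minimal sketch. This is not the ReLL framework itself: the `links` field and the `http://example.org/rel/` predicate namespace are invented placeholders, and the output is plain N-Triples strings rather than a full RDF/OWL model.

```python
# Illustrative sketch (not ReLL): harvest the hyperlinks embedded in a
# RESTful resource representation and emit them as RDF triples in
# N-Triples form. The predicate namespace is an invented placeholder.

def links_to_ntriples(resource_uri, representation):
    """Turn a dict of {relation: target_uri} links into N-Triples lines."""
    lines = []
    for rel, target in representation.get("links", {}).items():
        predicate = f"http://example.org/rel/{rel}"  # placeholder vocabulary
        lines.append(f"<{resource_uri}> <{predicate}> <{target}> .")
    return lines

# A toy representation with two typed links.
doc = {"links": {"next": "http://api.example.org/items?page=2",
                 "author": "http://api.example.org/users/7"}}
for triple in links_to_ntriples("http://api.example.org/items?page=1", doc):
    print(triple)
```

Crawling the emitted target URIs in the same way is what would aggregate several interlinked services into one RDF dataset.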
This paper analyses the potential use of bibliometric data for mapping and applying network analysis to mobility flows. We showcase mobility networks at three different levels of aggregation: the country level, the city level and the institutional level. We reflect on the potential uses of bibliometric data to inform research policies with regard to scientific mobility.
Numerous digital humanities projects maintain their data collections in the form of text, images, and metadata. While data may be stored in many formats, from plain text to XML to relational databases, the use of the Resource Description Framework (RDF) as a standardized representation has gained considerable traction during the last five years. Almost every digital humanities meeting has at least one session concerned with RDF and linked data. While most existing work in linked data has focused on improving algorithms for entity matching, the aim of the LinkedHumanities project is to build digital humanities tools that work out of the box, enabling their use by humanities scholars, computer scientists, librarians, and information scientists alike. With this paper, we report on the Linked Open Data Enhancer (LODE) framework developed as part of the LinkedHumanities project. With LODE we support non-technical users in enriching a local RDF repository with high-quality data from the Linked Open Data cloud. LODE links and enhances the local RDF repository without compromising the quality of the data. In particular, LODE supports the user in the enhancement and linking process by providing intuitive user interfaces and by suggesting high-quality linking candidates using tailored matching algorithms. We hope that the LODE framework will be useful to digital humanities scholars, complementing other digital humanities tools.
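The "suggesting high-quality linking candidates" step can be sketched in miniature with a plain string-similarity ranking. LODE's actual matching algorithms are tailored and more sophisticated; the snippet below only illustrates the general shape of the task, ranking Linked Open Data labels against a local entity label, using the standard library's `difflib.SequenceMatcher` and an invented cutoff threshold.

```python
import difflib

# Toy sketch of linking-candidate suggestion (not LODE's actual algorithm):
# rank candidate labels from the Linked Open Data cloud by string similarity
# to a local entity label, keeping only those above a cutoff.

def suggest_candidates(local_label, lod_labels, cutoff=0.6):
    """Return candidate labels ranked by similarity, best first."""
    scored = [(difflib.SequenceMatcher(None, local_label.lower(),
                                       cand.lower()).ratio(), cand)
              for cand in lod_labels]
    return [cand for score, cand in sorted(scored, reverse=True)
            if score >= cutoff]

print(suggest_candidates("Goethe, Johann Wolfgang",
                         ["Johann Wolfgang von Goethe",
                          "Wolfgang Amadeus Mozart"]))
```

A real matcher would also weigh entity types and context, which is why surface similarity alone is only a starting point.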
