
Extraction of Technical Information from Normative Documents Using Automated Methods Based on Ontologies: Application to the ISO 15531 MANDATE Standard - Methodology and First Results

Published by Anne-Francoise Cutting-Decelle
Publication date: 2018
Research field: Information engineering
Paper language: English





Problems faced by international standardization bodies become more and more crucial as the number and the size of the standards they produce increase. Moreover, the lack of coordination among the committees in charge of developing standards can lead to overlaps, mistakes or incompatibilities in the documents. The aim of this study is to present a methodology enabling the automatic extraction of the technical concepts (terms) found in normative documents, using semantic tools from the field of natural language processing. The first part of the paper describes the standardization world, its structure, its working methods and the problems it faces; we then introduce the concepts of semantic annotation and information extraction, together with the software tools available in this domain. The next section explains the concept of ontology and its potential use in the field of standardization. We then propose a methodology enabling the extraction of technical information from a given normative corpus, based on a semantic annotation process performed against reference ontologies. The application to the ISO 15531 MANDATE corpus provides a first use case of the methodology described in this paper. The paper ends with a description of the first experimental results produced by this approach, along with open issues and perspectives, notably its application to other standards and/or Technical Committees and the possibility of creating pre-defined technical dictionaries of terms.
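To make the annotation step concrete, here is a minimal sketch of ontology-driven term annotation, assuming the reference ontology has already been reduced to a mapping from concept labels to concept IRIs (in practice the labels could be loaded from an OWL file, e.g. with rdflib); the concepts and IRIs below are illustrative, not taken from ISO 15531.

```python
import re

# Hypothetical concept lexicon extracted from a reference ontology;
# for ISO 15531 these would be manufacturing-management concepts.
CONCEPTS = {
    "manufacturing process": "http://example.org/onto#ManufacturingProcess",
    "resource": "http://example.org/onto#Resource",
    "flow control": "http://example.org/onto#FlowControl",
}

def annotate(text):
    """Return (term, iri, start, end) for each concept label found,
    matching longer labels first to avoid partial overlaps."""
    hits = []
    for label in sorted(CONCEPTS, key=len, reverse=True):
        for m in re.finditer(re.escape(label), text, flags=re.IGNORECASE):
            hits.append((m.group(0), CONCEPTS[label], m.start(), m.end()))
    return sorted(hits, key=lambda h: h[2])

sample = ("Each manufacturing process consumes a resource whose "
          "availability is governed by flow control constraints.")
for term, iri, start, end in annotate(sample):
    print(f"{term!r} -> {iri} [{start}:{end}]")
```

A real pipeline would add tokenization, lemmatization and disambiguation on top, but the label-to-IRI lookup is the core of the annotation step.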


Read also

Yamine Ait Ameur (2018)
This paper reports on the results of the French ANR IMPEX research project, which deals with making domain knowledge explicit in design models. Ontologies are formalised as theories with sets, axioms, theorems and reasoning rules. They are integrated into design models through an annotation mechanism. Event-B has been chosen as the ground formal modelling technique for all our developments. In this paper, we describe in particular how ontologies are formalised as Event-B theories.
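Purely as an illustration (in Python rather than Event-B/Rodin notation, and with invented names), the annotation mechanism can be pictured as follows: the ontology plays the role of a theory with carrier sets and axioms, and design-model elements carry references into it.

```python
from dataclasses import dataclass, field

@dataclass
class OntologyTheory:
    """An ontology viewed as a theory: carrier sets plus axioms."""
    name: str
    sets: set = field(default_factory=set)
    axioms: list = field(default_factory=list)

@dataclass
class ModelElement:
    """A design-model element annotated with ontology concepts."""
    name: str
    annotations: dict = field(default_factory=dict)

# Illustrative domain theory and annotated model element.
units = OntologyTheory(
    name="Units",
    sets={"Quantity", "Unit"},
    axioms=["each Quantity is measured in exactly one Unit"],
)

speed = ModelElement(name="speed")
speed.annotations["typed_by"] = "Units.Quantity"  # domain knowledge made explicit
print(speed)
```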
Production of news content is growing at an astonishing rate. To help manage and monitor the sheer amount of text, there is an increasing need for efficient methods that can provide insights into emerging content areas and stratify unstructured corpora of text into 'topics' that stem intrinsically from content similarity. Here we present an unsupervised framework that brings together powerful vector embeddings from natural language processing with tools from multiscale graph partitioning that can reveal natural partitions at different resolutions without making a priori assumptions about the number of clusters in the corpus. We show the advantages of graph-based clustering through end-to-end comparisons with other popular clustering and topic modelling methods, and also evaluate different text vector embeddings, from classic Bag-of-Words to Doc2Vec to the recent transformer-based model BERT. This comparative work is showcased through an analysis of a corpus of US news coverage during the presidential election year of 2016.
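As a rough illustration of the embed-then-partition pipeline, the sketch below uses TF-IDF vectors and greedy modularity communities as simple stand-ins for the paper's embeddings (Doc2Vec, BERT) and multiscale graph partitioning; the documents and similarity threshold are toy values.

```python
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

docs = [
    "the candidate leads the polls in ohio",
    "new polls show the candidate gaining ground",
    "stocks closed higher as markets rallied",
    "markets fell before stocks recovered",
]

X = TfidfVectorizer().fit_transform(docs)   # document vectors
sim = cosine_similarity(X)

# Build a similarity graph, keeping only sufficiently similar pairs.
G = nx.Graph()
G.add_nodes_from(range(len(docs)))
for i in range(len(docs)):
    for j in range(i + 1, len(docs)):
        if sim[i, j] > 0.1:                 # illustrative threshold
            G.add_edge(i, j, weight=sim[i, j])

# Partition without fixing the number of clusters in advance.
for k, community in enumerate(greedy_modularity_communities(G)):
    print(f"topic {k}: {[docs[i] for i in sorted(community)]}")
```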
Information extraction from visual documents enables convenient and intelligent assistance to end users. We present a Neighborhood-based Information Extraction (NIE) approach that uses contextual language models and pays attention to the local neighborhood context in visual documents to improve information extraction accuracy. We collect two different visual document datasets and show that our approach outperforms the state-of-the-art global context-based IE technique. In fact, NIE outperforms existing approaches at both small and large model sizes. Our on-device implementation of NIE on a mobile platform, which generally requires small models, showcases NIE's usefulness in practical real-world applications.
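The core neighborhood idea can be sketched in a few lines, assuming OCR output as (token, x, y) tuples; the receipt tokens, coordinates and radius below are invented for illustration, and the contextual language model that scores the candidate is elided.

```python
from math import hypot

ocr = [  # (token, x, y) centers from a hypothetical receipt
    ("Total", 40, 300), ("$12.99", 120, 300),
    ("Date", 40, 60), ("2021-05-01", 130, 60),
    ("Thank", 40, 360), ("you", 90, 360),
]

def neighborhood(candidate, tokens, radius=100):
    """Tokens whose centers lie within `radius` of the candidate's center."""
    _, cx, cy = candidate
    return [t for t in tokens
            if t != candidate and hypot(t[1] - cx, t[2] - cy) <= radius]

candidate = ("$12.99", 120, 300)
context = " ".join(tok for tok, _, _ in neighborhood(candidate, ocr))
print(context)  # the nearby "Total" is a strong local cue for this field
```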
Modern software is developed under considerable time pressure, which means that developers more often than not have to compromise between code that is well written and code that just does the job. Over the past decades this has led to the concept of technical debt: a short-term hack that potentially generates long-term maintenance problems. Self-admitted technical debt (SATD) is a particular form of technical debt in which developers consciously perform the hack but also document it in the code by adding comments as a reminder (or as an admission of guilt). We focus on a specific type of SATD, namely On-hold SATD, in which developers document in their comments the need to halt an implementation task due to conditions outside their scope of work (e.g., an open issue must be closed before a function can be implemented). We present an approach, based on regular expressions and machine learning, that is able to detect issues referenced in code comments and to automatically classify the detected instances as either On-hold (the issue is referenced to indicate the need to wait for its resolution before completing a task) or cross-reference (the issue is referenced to document the code, for example to explain the rationale behind an implementation choice). Our approach also mines the projects' issue trackers to check whether On-hold SATD instances are superfluous and can be removed (i.e., the referenced issue has been closed, but the SATD is still in the code). Our evaluation confirms that our approach can indeed identify relevant instances of On-hold SATD. We illustrate its usefulness by identifying superfluous On-hold SATD instances in open source projects, as confirmed by the original developers.
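The detection step lends itself to a compact illustration. The sketch below is a guess at the shape of such a pipeline, not the paper's actual patterns or classifier: a regular expression finds issue references in comments, and a simple keyword heuristic stands in for the trained machine-learning classifier.

```python
import re

# Issue keys like JIRA-123 or GitHub-style #456 (illustrative patterns).
ISSUE_REF = re.compile(r"(?:[A-Z][A-Z0-9]+-\d+|#\d+)")
# Cue words hinting that work is paused pending the issue.
ON_HOLD_CUES = re.compile(r"\b(TODO|FIXME|wait|until|once|blocked)\b", re.I)

comments = [
    "// TODO: enable caching once JIRA-123 is resolved",
    "// see #456 for the rationale behind this workaround",
]

for comment in comments:
    for issue in ISSUE_REF.findall(comment):
        label = "On-hold" if ON_HOLD_CUES.search(comment) else "cross-reference"
        print(f"{issue}: {label}  <- {comment}")
```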
Kaibo Cao (2021)
As a popular Q&A site for programming, Stack Overflow is a treasure for developers. However, the sheer amount of questions and answers on Stack Overflow makes it difficult for developers to efficiently locate the information they are looking for. Two gaps lead to poor search results: the gap between the user's intention and the textual query, and the semantic gap between the query and the post content. Therefore, developers have to constantly reformulate their queries by correcting misspelled words, adding limitations to certain programming languages or platforms, etc. Since query reformulation is tedious for developers, especially for novices, we propose an automated software-specific query reformulation approach based on deep learning. With query logs provided by Stack Overflow, we construct a large-scale query reformulation corpus, including the original queries and the corresponding reformulated ones. Our approach trains a Transformer model that automatically generates candidate reformulated queries when given the user's original query. The evaluation results show that our approach outperforms five state-of-the-art baselines, achieving a 5.6% to 33.5% boost in terms of $\mathit{ExactMatch}$ and a 4.8% to 14.4% boost in terms of $\mathit{GLEU}$.
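To make the inference side concrete, here is a minimal sketch using the Hugging Face transformers library with T5 as a placeholder; the paper trains its own Transformer on a Stack Overflow query-log corpus, so the "t5-small" checkpoint and the "reformulate:" prefix below are illustrative and would first require fine-tuning on (original, reformulated) query pairs.

```python
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

# Placeholder checkpoint; the paper's model is trained from query logs.
tokenizer = AutoTokenizer.from_pretrained("t5-small")
model = AutoModelForSeq2SeqLM.from_pretrained("t5-small")

query = "pyhton sort dict by valu"      # misspelled user query
inputs = tokenizer("reformulate: " + query, return_tensors="pt")

# Beam search yields several candidate reformulations to show the user.
outputs = model.generate(**inputs, num_beams=5,
                         num_return_sequences=3, max_length=32)
for candidate in tokenizer.batch_decode(outputs, skip_special_tokens=True):
    print(candidate)
```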
