
Integration of Agile Ontology Mapping towards NLP Search in I-SOAS

Published by: Mr. Zeeshan Ahmed
Publication date: 2010
Research field: Informatics Engineering
Paper language: English





In this research paper we address the importance of Product Data Management (PDM) with respect to its contributions in industry. We also present some of the major challenges currently facing PDM communities and, targeting some of these challenges, we present an approach, I-SOAS, and briefly discuss how this approach can help solve the problems faced by the PDM community. Furthermore, limiting the scope of this research to one challenge, we focus on the implementation of a semantic-based search mechanism in PDM systems. Going into the details, we first describe the relevant field, Language Technology (LT), which contributes to natural language processing and which we draw on to implement a search engine capable of understanding the semantics of natural-language search queries. We then discuss how we can practically take advantage of LT by implementing its concepts in a software application using semantic web technology, i.e. ontologies. Finally, at the end of this research paper, we briefly present a prototype application developed using LT concepts for semantic-based search.
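The core idea of ontology-backed semantic search, i.e. expanding query terms with concept synonyms so that documents using different wording still match, can be sketched as follows. This is a minimal illustration with a toy in-memory ontology; the concept table, document store, and function names are assumptions for the example, not taken from the I-SOAS prototype itself.

```python
# Toy "ontology": each concept maps to a set of equivalent/related terms.
# Illustrative only; a real system would load an OWL/RDF ontology.
ONTOLOGY = {
    "car": {"automobile", "vehicle"},
    "engine": {"motor", "powertrain"},
}

DOCUMENTS = {
    1: "Specifications of the automobile motor assembly",
    2: "User manual for the printer",
}

def expand_query(query: str) -> set:
    """Expand each query token with its ontology synonyms."""
    terms = set()
    for token in query.lower().split():
        terms.add(token)
        terms |= ONTOLOGY.get(token, set())
    return terms

def semantic_search(query: str) -> list:
    """Return ids of documents sharing at least one expanded term."""
    terms = expand_query(query)
    hits = []
    for doc_id, text in DOCUMENTS.items():
        if terms & set(text.lower().split()):
            hits.append(doc_id)
    return hits

print(semantic_search("car engine"))  # matches doc 1 via ontology synonyms
```

A keyword-only search for "car engine" would miss document 1, which mentions neither word; the ontology expansion bridges the vocabulary gap, which is the essence of the semantic search mechanism the paper targets.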




Read also

215 - Zeeshan Ahmed, Vasil Popov 2010
It is necessary to improve the concepts of the present web-based graphical user interface to develop a more flexible and intelligent interface that provides ease of use and increases the level of comfort at the user end, as most desktop-based applications do. This research targets the goal of implementing a flexible GUI consisting of a visual component manager with components differing by functionality, design and purpose. In this research paper we present a Rich Internet Application (RIA) based graphical user interface for web-based product development, and, going into the details, we present a comparison between existing RIA technologies, the methodology adopted in the GUI development, and the developed prototype.
398 - Jin Yong Yoo, Yanjun Qi 2021
Adversarial training, a method for learning robust deep neural networks, constructs adversarial examples during training. However, recent methods for generating NLP adversarial examples involve combinatorial search and expensive sentence encoders for constraining the generated instances. As a result, it remains challenging to use vanilla adversarial training to improve NLP models' performance, and the benefits are mainly uninvestigated. This paper proposes a simple and improved vanilla adversarial training process for NLP models, which we name Attacking to Training (A2T). The core part of A2T is a new and cheaper word substitution attack optimized for vanilla adversarial training. We use A2T to train BERT and RoBERTa models on IMDB, Rotten Tomatoes, Yelp, and SNLI datasets. Our results empirically show that it is possible to train robust NLP models using a much cheaper adversary. We demonstrate that vanilla adversarial training with A2T can improve an NLP model's robustness to the attack it was originally trained with and also defend the model against other types of word substitution attacks. Furthermore, we show that A2T can improve NLP models' standard accuracy, cross-domain generalization, and interpretability. Code is available at https://github.com/QData/Textattack-A2T.
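The basic shape of a word substitution attack, i.e. perturbing an input by swapping words for near-synonyms to probe model robustness, can be sketched as below. This is an illustrative stand-in, not the actual A2T attack: the synonym table and the swap-every-candidate policy are assumptions for the example (A2T uses a learned, gradient-guided substitution strategy).

```python
import random

# Illustrative synonym table; a real attack would use embeddings or a
# thesaurus, and would pick substitutions to maximize model loss.
SYNONYMS = {
    "good": ["great", "fine"],
    "movie": ["film"],
}

def perturb(sentence: str, rng: random.Random) -> str:
    """Replace each word that has known synonyms with a random one."""
    out = []
    for word in sentence.split():
        subs = SYNONYMS.get(word.lower())
        out.append(rng.choice(subs) if subs else word)
    return " ".join(out)

rng = random.Random(0)
print(perturb("a good movie overall", rng))
```

In adversarial training, such perturbed sentences are generated on the fly and mixed into each training batch with their original labels, so the model learns to be invariant to the substitutions.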
The COVID-19 pandemic has had a significant impact on society, both because of the serious health effects of COVID-19 and because of public health measures implemented to slow its spread. Many of these difficulties are fundamentally information needs; attempts to address these needs have caused an information overload for both researchers and the public. Natural language processing (NLP), the branch of artificial intelligence that interprets human language, can be applied to address many of the information needs made urgent by the COVID-19 pandemic. This review surveys approximately 150 NLP studies and more than 50 systems and datasets addressing the COVID-19 pandemic. We detail work on four core NLP tasks: information retrieval, named entity recognition, literature-based discovery, and question answering. We also describe work that directly addresses aspects of the pandemic through four additional tasks: topic modeling, sentiment and emotion analysis, caseload forecasting, and misinformation detection. We conclude by discussing observable trends and remaining challenges.
We propose a cascade of neural models that performs sentence classification, phrase recognition, and triple extraction to automatically structure the scholarly contributions of NLP publications. To identify the most important contribution sentences in a paper, we used a BERT-based classifier with positional features (Subtask 1). A BERT-CRF model was used to recognize and characterize relevant phrases in contribution sentences (Subtask 2). We categorized the triples into several types based on whether and how their elements were expressed in text, and addressed each type using separate BERT-based classifiers as well as rules (Subtask 3). Our system was officially ranked second in Phase 1 evaluation and first in both parts of Phase 2 evaluation. After fixing a submission error in Phase 1, our approach yields the best results overall. In this paper, in addition to a system description, we also provide further analysis of our results, highlighting its strengths and limitations. We make our code publicly available at https://github.com/Liu-Hy/nlp-contrib-graph.
Performance prediction, the task of estimating a system's performance without performing experiments, allows us to reduce the experimental burden caused by the combinatorial explosion of different datasets, languages, tasks, and models. In this paper, we make two contributions to improving performance prediction for NLP tasks. First, we examine performance predictors not only for holistic measures of accuracy like F1 or BLEU but also fine-grained performance measures such as accuracy over individual classes of examples. Second, we propose methods to understand the reliability of a performance prediction model from two angles: confidence intervals and calibration. We perform an analysis of four types of NLP tasks, and both demonstrate the feasibility of fine-grained performance prediction and the necessity to perform reliability analysis for performance prediction methods in the future. We make our code publicly available: https://github.com/neulab/Reliable-NLPPP