
Improving Arabic Information Retrieval Results Semantically Using Ontology


Publication date: 2016
Language: Arabic





This research proposes a new way to improve Arabic semantic search results by abstractively summarizing Arabic texts (abstractive summarization) using natural language processing (NLP) algorithms, word sense disambiguation (WSD), and semantic similarity measures over the Arabic WordNet ontology.

Related research

Studies on computational processing of the Arabic language are of great importance given how widely Arabic is used. In this study we chose to work on Arabic language processing through an information retrieval system for Arabic-language documents. The core idea of the system is to analyze Arabic documents and texts, build indexes of the terms they contain, and then extract weight vectors representing these documents; an incoming query is processed and compared against these vectors to retrieve the matching documents. Stemming the terms in the documents yielded better retrieval efficiency, and we review several stemming algorithms proposed in previous studies. Document clustering is an important addition: it lets the user discover documents similar to the search results and related to the entered query. As a practical application, we built a desktop information retrieval system that reads texts of various types and displays the results together with their corresponding clusters.
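The vector-space pipeline described above (indexing, TF-IDF weight vectors, cosine comparison against the query) can be sketched as follows; the mini-corpus is a hypothetical English stand-in for the Arabic collection, and the stemming and clustering steps are omitted:

```python
import math
from collections import Counter

# Hypothetical mini-corpus standing in for the Arabic document collection.
DOCS = {
    "d1": "information retrieval system for arabic documents",
    "d2": "clustering groups similar documents together",
    "d3": "arabic text processing and stemming",
}

def build_index(docs):
    """Index the corpus: per-term IDF and one TF-IDF weight vector per document."""
    tokenized = {d: text.split() for d, text in docs.items()}
    df = Counter(t for toks in tokenized.values() for t in set(toks))
    n = len(docs)
    idf = {t: math.log(n / df[t]) + 1 for t in df}  # +1 keeps common terms nonzero
    vectors = {
        d: {t: c * idf[t] for t, c in Counter(toks).items()}
        for d, toks in tokenized.items()
    }
    return idf, vectors

def cosine(u, v):
    """Cosine similarity between two sparse term-weight vectors."""
    dot = sum(w * v[t] for t, w in u.items() if t in v)
    norm = lambda vec: math.sqrt(sum(w * w for w in vec.values()))
    return dot / (norm(u) * norm(v)) if dot else 0.0

def search(query, idf, vectors):
    """Weight the query terms with the corpus IDF and rank documents by cosine."""
    qv = {t: c * idf.get(t, 0.0) for t, c in Counter(query.split()).items()}
    return sorted(vectors, key=lambda d: cosine(qv, vectors[d]), reverse=True)

idf, vectors = build_index(DOCS)
print(search("arabic retrieval", idf, vectors))  # d1 matches both terms
```

A stemming step would map surface forms to a shared root before indexing, so morphological variants of the same Arabic term share one vector dimension.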
Introducing biomedical informatics (BMI) students to natural language processing (NLP) requires balancing technical depth with practical know-how to address application-focused needs. We developed a set of three activities introducing introductory BMI students to information retrieval with NLP, covering document representation strategies and language models from TF-IDF to BERT. These activities provide students with hands-on experience targeted towards common use cases, and introduce fundamental components of NLP workflows for a wide variety of applications.
Claim verification is challenging because it requires first finding textual evidence and then applying claim-evidence entailment to verify the claim. Previous works evaluate the entailment step based on the retrieved evidence, whereas we hypothesize that the entailment prediction can provide useful signals for evidence retrieval, in the sense that if a sentence supports or refutes a claim, the sentence must be relevant. We propose a novel model that uses the entailment score to express the relevancy. Our experiments verify that leveraging entailment prediction improves ranking multiple pieces of evidence.
Automated fact-checking at large scale is a challenging task that has not been studied systematically until recently. Large noisy document collections like the web or news articles make the task more difficult. We describe a three-stage automated fact-checking system, named Quin+, using evidence retrieval and selection methods. We demonstrate that using dense passage representations leads to much higher evidence recall in a noisy setting. We also propose two sentence selection approaches: an embedding-based selection using a dense retrieval model, and a sequence labeling approach for context-aware selection. Quin+ is able to verify open-domain claims using results from web search engines.
Knowledge graphs (KGs) are widely used to store and access information about entities and their relationships. Given a query, the task of entity retrieval from a KG aims at presenting a ranked list of entities relevant to the query. Lately, an increasing number of models for entity retrieval have shown a significant improvement over traditional methods. These models, however, were developed for English KGs. In this work, we build on one such system, named KEWER, to propose SERAG (Semantic Entity Retrieval from Arabic knowledge Graphs). Like KEWER, SERAG uses random walks to generate entity embeddings. DBpedia-Entity v2 is considered the standard test collection for entity retrieval. We discuss the challenges of using it for non-English languages in general and Arabic in particular. We provide an Arabic version of this standard collection, and use it to evaluate SERAG. SERAG is shown to significantly outperform the popular BM25 model thanks to its multi-hop reasoning.
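The random-walk step that KEWER-style systems use to generate training sequences for entity embeddings can be sketched on a hypothetical toy graph (the entities and edges are made up; the skip-gram training that would consume these walks is omitted):

```python
import random

# Hypothetical toy knowledge graph: entity -> neighbouring entities.
KG = {
    "Damascus": ["Syria", "Umayyad_Mosque"],
    "Syria": ["Damascus", "Aleppo"],
    "Aleppo": ["Syria"],
    "Umayyad_Mosque": ["Damascus"],
}

def random_walks(graph, walks_per_node=2, walk_length=4, seed=0):
    """Generate fixed-length random walks over the graph. KEWER-style
    systems feed such sequences to a skip-gram model, so entities that
    co-occur on walks (i.e. are a few hops apart) get similar embeddings,
    which is what enables multi-hop matching at retrieval time."""
    rng = random.Random(seed)
    walks = []
    for start in graph:
        for _ in range(walks_per_node):
            walk = [start]
            while len(walk) < walk_length:
                walk.append(rng.choice(graph[walk[-1]]))  # hop to a random neighbour
            walks.append(walk)
    return walks

for walk in random_walks(KG):
    print(" -> ".join(walk))
```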