
Description of the data in the university domain using semantic web technologies


Publication date: 2017
Language: Arabic
Created by: Shamra Editor





In recent years, a new web has appeared alongside the traditional one: the Web of Linked Data. It was developed to present data in a machine-readable form. The main idea is to describe data using a set of terms called a web ontology. Tools and standards related to the semantic web are now becoming comprehensive and stable; however, publishing university data as linked data still faces major challenges. First of all, there is no unified, widely accepted vocabulary for describing university-related information. This article aims to find an ontology that can describe data in the university domain, so that this data can be integrated with data from other universities and queried. The web ontology was built by reusing vocabularies already available on the web and adding new classes and properties, and it was organized using Protégé.
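
As a rough illustration of the approach the abstract describes (reusing existing vocabularies and adding new terms), here is a minimal sketch using Python's rdflib library. The uni: namespace, the Course class, and the teaches property are hypothetical placeholders, not the paper's actual ontology; only FOAF is a real vocabulary being reused here.

# Minimal sketch: describe university data by reusing FOAF and
# adding a hypothetical local class and property (not the paper's ontology).
from rdflib import Graph, Literal, Namespace, RDF, RDFS, OWL, URIRef
from rdflib.namespace import FOAF

UNI = Namespace("http://example.org/university#")  # hypothetical namespace

g = Graph()
g.bind("foaf", FOAF)
g.bind("uni", UNI)

# New class and property, as the abstract describes adding terms
# that the reused vocabularies lack.
g.add((UNI.Course, RDF.type, OWL.Class))
g.add((UNI.teaches, RDF.type, OWL.ObjectProperty))
g.add((UNI.teaches, RDFS.domain, FOAF.Person))
g.add((UNI.teaches, RDFS.range, UNI.Course))

# Instance data of the kind a university might publish as linked data.
lecturer = URIRef("http://example.org/university/staff/42")
course = URIRef("http://example.org/university/courses/semweb101")
g.add((lecturer, RDF.type, FOAF.Person))
g.add((lecturer, FOAF.name, Literal("Jane Doe")))
g.add((course, RDF.type, UNI.Course))
g.add((course, RDFS.label, Literal("Semantic Web 101", lang="en")))
g.add((lecturer, UNI.teaches, course))

print(g.serialize(format="turtle"))

Serializing to Turtle makes the reused and newly minted terms easy to inspect side by side, which is essentially what an editor such as Protégé organizes at a larger scale.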

References used
SARKAR, A.; MARJIT, U.; BISWAS, U. Linked data generation for the university data from legacy database. International Journal of Web & Semantic Technology (IJWesT), India, Vol. 2, No. 3, 2011.
DILLON, T.S.; CHANG, E.; WONGTHONGTHOM, P. Ontology-Based Software Engineering: Software Engineering 2.0. IEEE Computer Society, USA, 2008, pp. 13-23.
MA, Y.; XU, B.; BAI, Y.; ZONGHUI, L. Building Linked Open University Data: Tsinghua University Open Data as a showcase. Proceedings of JIST, 2011, pp. 385-393.
Related research

We aimed to distinguish information extraction systems from other research areas such as information retrieval and data mining. We tried to determine the general structure of such systems, which form part of larger systems whose mission is to answer user queries based on the extracted information. We reviewed the different types of these systems and the techniques used with them, and tried to define the current and future challenges and the consequent research problems. Finally, we discussed the details of various implementations of these systems by explaining two platforms, GATE and OpenCalais, comparing their information extraction systems and discussing the results.
The Semantic Web is a new revolution in the world of the Web, in which information and data become amenable to logical processing by computer programs: they are transformed into a network of meaningful data. Although the Semantic Web is considered the future of the World Wide Web, Arabic research and studies are still relatively rare in this field. Therefore, this paper gives a reference study of the Semantic Web and the different methods to explore knowledge and discover useful information from the vast amount of data provided by the web. It gives a programming example applying some of the techniques provided by the Semantic Web and the methods to discover knowledge from it. This simplified example provides services related to Syrian government higher education, such as information about the Syrian public universities (Syrian Virtual University, Tishreen, Aleppo, Damascus, and Al Baath): each university's name, address, web site, number of students, and a summary, which helps intelligent agents find those services dynamically.
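
The paper's own example is not reproduced on this page; as a hedged sketch of the kind of dataset and query it describes, the following Python/rdflib snippet builds a tiny RDF description of two universities and asks for each one's name and student count. The uni: namespace, the property names, and the student figures are illustrative placeholders, not data from the paper.

# Hedged sketch (not the paper's actual example): a tiny RDF description
# of universities and a SPARQL query over it, using Python's rdflib.
from rdflib import Graph, Literal, Namespace, RDF, RDFS
from rdflib.namespace import XSD

UNI = Namespace("http://example.org/university#")  # hypothetical namespace
g = Graph()
g.bind("uni", UNI)

# Placeholder figures, for illustration only.
for name, students in [("Tishreen University", 70000),
                       ("Damascus University", 120000)]:
    u = UNI[name.replace(" ", "_")]
    g.add((u, RDF.type, UNI.University))
    g.add((u, RDFS.label, Literal(name, lang="en")))
    g.add((u, UNI.numberOfStudents, Literal(students, datatype=XSD.integer)))

# List each university's name and student count, largest first.
q = """
PREFIX uni: <http://example.org/university#>
PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>
SELECT ?name ?students WHERE {
    ?u a uni:University ;
       rdfs:label ?name ;
       uni:numberOfStudents ?students .
}
ORDER BY DESC(?students)
"""
for name, students in g.query(q):
    print(name, students)

Because the data is described with shared terms rather than a private database schema, the same query would work unchanged against any other university that published its data with the same vocabulary.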
Synthesizing data for semantic parsing has gained increasing attention recently. However, most methods require handcrafted (high-precision) rules in their generative process, hindering the exploration of diverse unseen data. In this work, we propose a generative model which features a (non-neural) PCFG that models the composition of programs (e.g., SQL), and a BART-based translation model that maps a program to an utterance. Due to the simplicity of the PCFG and pre-trained BART, our generative model can be efficiently learned from the data at hand. Moreover, explicitly modeling compositions using a PCFG leads to better exploration of unseen programs, thus generating more diverse data. We evaluate our method in both in-domain and out-of-domain settings of text-to-SQL parsing on the standard benchmarks of GeoQuery and Spider, respectively. Our empirical results show that the synthesized data generated from our model can substantially help a semantic parser achieve better compositional and domain generalization.
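
To make the PCFG half of that pipeline concrete, here is a toy sketch in Python; the actual induced grammar and the BART translation model are not shown. The grammar rules and weights below are invented for illustration only; a real system would learn them from existing programs.

# Toy PCFG sampler for SQL-like programs (illustrative grammar, not the
# paper's). Each nonterminal maps to weighted right-hand-side expansions.
import random

RULES = {
    "QUERY": [(["SELECT", "COL", "FROM", "TABLE", "COND"], 1.0)],
    "COND":  [([], 0.5), (["WHERE", "COL", "=", "VAL"], 0.5)],
    "COL":   [(["name"], 0.5), (["capacity"], 0.5)],
    "TABLE": [(["city"], 0.5), (["river"], 0.5)],
    "VAL":   [(["'texas'"], 1.0)],
}

def sample(symbol):
    # Symbols without a rule are terminal tokens.
    if symbol not in RULES:
        return [symbol]
    expansions, weights = zip(*RULES[symbol])
    rhs = random.choices(expansions, weights=weights)[0]
    # Recursively expand every symbol on the chosen right-hand side.
    return [tok for s in rhs for tok in sample(s)]

for _ in range(3):
    print(" ".join(sample("QUERY")))

Each sampled program would then be fed to the translation model to produce a paired natural-language utterance, yielding synthetic (utterance, program) training examples.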
Web engineering methodologies (WebML, UWE, Hera, RMM) support the representation and modeling of web services across a lifecycle based on service-oriented architecture (SOA). These methodologies, however, vary in their support for semantic web components and semantic web services (SWS). In this work, we present a general comparison between different web engineering methodologies, with special attention to the modeling of semantic web components, and we track the weaknesses of common web engineering methodologies in modeling semantic web services. This work also presents an extension to the WebML methodology in which symbols, diagrams, and notions are added to support the modeling of semantic web services according to the DAML-S framework (DARPA Agent Markup Language for Services). Additionally, a software tool was built to support this extension and automatically generate ontologies of semantic web services based on the new diagrams. This tool also supports matching, with semantic ranking, between semantic web service advertisements and clients' requests.
Reliable tagging of Temporal Expressions (TEs, e.g., "Book a table at L'Osteria for Sunday evening") is a central requirement for Voice Assistants (VAs). However, there is a dearth of resources and systems for the VA domain, since publicly available temporal taggers are trained only on substantially different domains, such as news and clinical text. Since the cost of annotating large datasets is prohibitive, we investigate the trade-off between in-domain data and performance in DA-Time, a hybrid temporal tagger for the English VA domain which combines a neural architecture for robust TE recognition with a parser-based TE normalizer. We find that transfer learning goes a long way even with as few as 25 in-domain sentences: DA-Time performs at the state of the art on the news domain and substantially outperforms it on the VA domain.
