
The State of the Art: Ontology Web-Based Languages: XML Based

Added by William Jackson
Publication date: 2010
Research language: English





Many formal languages have been proposed to express or represent ontologies, including RDF, RDFS, DAML+OIL and OWL. Most of these languages are based on XML syntax, but they differ in terminology and expressiveness. Choosing a language in which to build an ontology is therefore a key step. The choice depends mainly on what the ontology will represent and how it will be used, because different kinds of knowledge-based applications need different language features. The chosen language should offer a range of quality-support features such as ease of use, expressive power, compatibility, sharing and versioning, and internationalisation. The main objective of these languages is to add semantics to the existing information on the web. The aim of this paper is to provide a solid picture of the existing languages, an understanding of their features, and how they can be used.
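
To make the phrase "XML-based syntax" concrete, the sketch below uses the Python rdflib library (an assumed tool, not one discussed in the paper) to state a small class hierarchy with the RDFS and OWL vocabularies and to serialize it in the XML-based RDF/XML syntax. The namespace and class names are hypothetical; this is only a minimal illustration of how such languages encode ontology facts.

    from rdflib import Graph, Literal, Namespace
    from rdflib.namespace import OWL, RDF, RDFS

    EX = Namespace("http://example.org/onto#")  # hypothetical example namespace

    g = Graph()
    g.bind("ex", EX)
    g.bind("owl", OWL)

    # Declare two classes and a subclass relation using the OWL/RDFS vocabularies.
    g.add((EX.Animal, RDF.type, OWL.Class))
    g.add((EX.Dog, RDF.type, OWL.Class))
    g.add((EX.Dog, RDFS.subClassOf, EX.Animal))
    g.add((EX.Dog, RDFS.label, Literal("Dog", lang="en")))

    # The same triples rendered in the XML-based RDF/XML serialization.
    print(g.serialize(format="xml"))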



Related research

Since its emergence in the 1990s, the World Wide Web (WWW) has rapidly evolved into a huge mine of global information and is growing in size every day. The presence of such a huge amount of resources on the Web poses a serious problem for accurate search, mainly because today's Web is a human-readable Web whose information cannot easily be processed by machines. The highly sophisticated, efficient keyword-based search engines that have evolved to date have not been able to bridge this gap. Hence the concept of the Semantic Web, envisioned by Tim Berners-Lee as a Web of machine-interpretable information, expressing information in a machine-processable form. Based on Semantic Web technologies, this paper presents the design methodology and development of a semantic Web search engine that provides exact search results for a domain-specific search. The search engine was developed for an agricultural website that hosts agricultural information about the state of West Bengal.
Big data solutions are designed to cope with data of huge Volume and wide Variety that need to be ingested at high Velocity and have potential Veracity issues, challenging characteristics usually referred to as the 4Vs of Big Data. In order to evaluate possibly complex big data solutions, stress tests require assessing a large number of combinations of sub-components jointly with the possible big data variations. A formalization of the Design of Experiments (DoE) on big data solutions aims at ensuring the reproducibility of the experiments, facilitating their partitioning into sub-experiments and guaranteeing the consistency of their outcomes in a global assessment. In this paper, an ontology-based approach is proposed to support the evaluation of a big data system in two ways. Firstly, the approach formalizes a decomposition and recombination of the big data solution, allowing for the aggregation of component evaluation results at the inter-component level. Secondly, existing work on DoE is translated into an ontology to support the selection of experiments. The proposed ontology-based approach offers the possibility to combine knowledge from the evaluation domain and the application domain. It exploits domain- and inter-domain-specific restrictions on the factor combinations in order to reduce the number of experiments. Contrary to existing approaches, the proposed use of ontologies is not limited to the assertional description and exploitation of past experiments but offers richer terminological descriptions for developing a DoE from scratch. As an application example, a maritime big data solution to the problem of detecting and predicting suspicious vessel behaviour through mobility analysis is selected. The article concludes with a sketch of future work.
Populating ontology graphs is a long-standing problem for the Semantic Web community. Recent advances in translation-based graph embedding methods for populating instance-level knowledge graphs lead to promising new approaches to the ontology population problem. However, unlike instance-level graphs, the majority of relation facts in ontology graphs come with comprehensive semantic relations, which often include the properties of transitivity and symmetry, as well as hierarchical relations. These comprehensive relations are often too complex for existing graph embedding methods, and direct application of such methods is not feasible. Hence, we propose On2Vec, a novel translation-based graph embedding method for ontology population. On2Vec integrates two model components that effectively characterize comprehensive relation facts in ontology graphs: the Component-specific Model, which encodes concepts and relations into low-dimensional embedding spaces without loss of relational properties, and the Hierarchy Model, which performs focused learning of hierarchical relation facts. Experiments on several well-known ontology graphs demonstrate the promising capabilities of On2Vec in predicting and verifying new relation facts. These promising results also make possible significant improvements in related methods.
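
For readers unfamiliar with the term, "translation-based graph embedding" refers to the family of models in which a relation is treated as a translation vector between entity embeddings. The Python sketch below shows the classic TransE-style scoring idea that this family builds on; it is not On2Vec itself, whose Component-specific and Hierarchy Models are considerably more elaborate, and all data and names are hypothetical.

    import numpy as np

    rng = np.random.default_rng(0)

    # Toy vocabulary of concepts and relations (hypothetical example data).
    concepts = {"Dog": 0, "Animal": 1, "Plant": 2}
    relations = {"subClassOf": 0}

    dim = 8
    C = rng.normal(size=(len(concepts), dim))   # concept embeddings
    R = rng.normal(size=(len(relations), dim))  # relation embeddings

    def score(head, rel, tail):
        # Translation-based plausibility: smaller ||h + r - t|| means more plausible.
        return np.linalg.norm(C[concepts[head]] + R[relations[rel]] - C[concepts[tail]])

    # Training would push observed triples to score lower than corrupted ones;
    # only the scoring function is shown here.
    print(score("Dog", "subClassOf", "Animal"))
    print(score("Dog", "subClassOf", "Plant"))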
Considering the high heterogeneity of the ontologies published on the web, ontology matching is a crucial issue whose aim is to establish links between an entity of a source ontology and one or several entities from a target ontology. Perfectible similarity measures, considered as sources of information, are combined to establish these links. The theory of belief functions is a powerful mathematical tool for combining such uncertain information. In this paper, we introduce a decision process based on a distance measure to identify the best possible matching entities for a given source entity.
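
As a rough illustration of how the theory of belief functions combines uncertain sources, the Python sketch below applies Dempster's rule of combination to two hypothetical mass functions over candidate target entities. The paper's contribution, a distance-based decision process for picking the best match from the combined masses, is not reproduced here; the function and example values are assumptions made for illustration.

    from itertools import product

    def dempster_combine(m1, m2):
        # Combine two mass functions (dict: frozenset of hypotheses -> mass)
        # with Dempster's rule, renormalizing away the conflicting mass.
        combined, conflict = {}, 0.0
        for (a, wa), (b, wb) in product(m1.items(), m2.items()):
            inter = a & b
            if inter:
                combined[inter] = combined.get(inter, 0.0) + wa * wb
            else:
                conflict += wa * wb
        return {s: w / (1.0 - conflict) for s, w in combined.items()}

    # Two similarity measures expressed as mass functions over candidate
    # target entities e1 and e2 (hypothetical values).
    m1 = {frozenset({"e1"}): 0.6, frozenset({"e1", "e2"}): 0.4}
    m2 = {frozenset({"e1"}): 0.5, frozenset({"e2"}): 0.3, frozenset({"e1", "e2"}): 0.2}
    print(dempster_combine(m1, m2))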
The development of artificial general intelligence is considered by many to be inevitable. What such intelligence does after becoming aware is not so certain. To that end, research suggests that the likelihood of artificial general intelligence becoming hostile to humans is significant enough to warrant inquiry into methods to limit such potential. Thus, containment of artificial general intelligence is a timely and meaningful research topic. While there is limited research exploring possible containment strategies, such work is bounded by the underlying field the strategies draw upon. Accordingly, we set out to construct an ontology to describe necessary elements in any future containment technology. Using existing academic literature, we developed a single domain ontology containing five levels, 32 codes, and 32 associated descriptors. Further, we constructed ontology diagrams to demonstrate intended relationships. We then identified humans, AGI, and the cyber world as novel agent objects necessary for future containment activities. Collectively, the work addresses three critical gaps: (a) identifying and arranging fundamental constructs; (b) situating AGI containment within cyber science; and (c) developing scientific rigor within the field.
