Ontologies often suffer from semantic heterogeneity when they are used simultaneously in information sharing, merging, integration and querying processes. Identifying the similarity between the ontologies involved therefore becomes a mandatory task in all of these processes for handling semantic heterogeneity. In this paper, we propose an efficient technique for measuring the similarity between two ontologies. The proposed technique identifies all candidate pairs of similar concepts without omitting any similar pair, and it can be used in different types of operations on ontologies, such as merging, mapping and aligning. An analysis of its results shows a reasonable improvement in the completeness, correctness and overall quality of the results.
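The abstract does not spell out the measure itself; as a rough, hypothetical sketch of exhaustive candidate-pair generation between two ontologies (not the paper's technique), the following compares every concept label of one ontology with every label of the other using a token-based Jaccard similarity, keeping all pairs above a threshold so that no candidate pair is silently dropped:

```python
# Hypothetical illustration: exhaustive candidate-pair generation between two
# ontologies via a token-based Jaccard similarity over concept labels.
# This is NOT the paper's measure; labels and threshold are assumptions.
import re

def tokens(label: str) -> set[str]:
    """Split a label such as 'PostalAddress' or 'postal_address' into lowercase tokens."""
    parts = re.split(r"[_\s]+|(?<=[a-z])(?=[A-Z])", label)
    return {p.lower() for p in parts if p}

def jaccard(a: set[str], b: set[str]) -> float:
    return len(a & b) / len(a | b) if a | b else 0.0

def candidate_pairs(onto1: list[str], onto2: list[str], threshold: float = 0.5):
    """Return every (concept1, concept2, score) pair whose similarity reaches the threshold."""
    pairs = []
    for c1 in onto1:
        for c2 in onto2:
            score = jaccard(tokens(c1), tokens(c2))
            if score >= threshold:
                pairs.append((c1, c2, score))
    return sorted(pairs, key=lambda p: -p[2])

print(candidate_pairs(["PostalAddress", "Person"], ["Address", "Human", "postal_address"]))
```

Because every pair of concepts is scored, no similar pair can be missed; the threshold only controls how many candidates are passed on to a later merging, mapping or aligning step.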
We investigate the problem of whether two ALC ontologies are indistinguishable (or inseparable) by means of queries in a given signature, which is fundamental for ontology engineering tasks such as ontology versioning, modularisation, update, and forgetting. We consider both knowledge base (KB) and TBox inseparability. For KBs, we give model-theoretic criteria in terms of (finite partial) homomorphisms and products, and prove that this problem is undecidable for conjunctive queries (CQs) but 2ExpTime-complete for unions of CQs (UCQs). The same results hold if (U)CQs are replaced by rooted (U)CQs, in which every variable is connected to an answer variable. We also show that inseparability by CQs remains undecidable if one KB is given in the lightweight DL EL and no restrictions are imposed on the signature of the CQs. We then consider the problem of whether two ALC TBoxes give the same answers to any query over any ABox in a given signature and show that, for CQs, this problem is undecidable, too. Finally, we develop model-theoretic criteria for Horn-ALC TBoxes and show, using tree automata, that inseparability becomes decidable and 2ExpTime-complete in this setting, and even ExpTime-complete when restricted to (unions of) rooted CQs.
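For orientation, the notion of query inseparability used here can be sketched as follows (a sketch of the standard definition; the symbol \equiv^{CQ}_{\Sigma} is notation introduced for this summary, not necessarily the paper's):

```latex
% Sketch of Sigma-CQ inseparability of two knowledge bases K_1, K_2.
\mathcal{K}_1 \equiv^{\mathrm{CQ}}_{\Sigma} \mathcal{K}_2
\iff
\big( \mathcal{K}_1 \models q(\vec{a}) \Leftrightarrow \mathcal{K}_2 \models q(\vec{a}) \big)
\text{ for every CQ } q \text{ over } \Sigma \text{ and every tuple } \vec{a} \text{ of individual names.}
```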
The notions of distance and similarity play a key role in many machine learning approaches, and in artificial intelligence (AI) in general, since they can serve as an organizing principle by which individuals classify objects, form concepts and make generalizations. While distance functions for propositional representations have been thoroughly studied, work on distance functions for structured representations, such as graphs, frames or logical clauses, has been carried out in different communities and is much less well understood. In particular, much of the work that requires a distance or similarity function for structured representations of data employs ad hoc functions tailored to specific applications. The goal of this paper is therefore to provide an overview of this work, identify connections between the work carried out in different areas, and point out directions for future research.
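As a toy instance of the kind of ad-hoc structural distance the survey is concerned with (an illustration, not a function proposed in the paper), the sketch below compares two labelled graphs by the normalized symmetric difference of their edge sets:

```python
# Toy ad-hoc distance between two labelled graphs (illustration only):
# graphs are given as sets of (source, label, target) edges and compared by the
# size of the symmetric difference of their edge sets, normalized by the union.

Edge = tuple[str, str, str]

def edge_set_distance(g1: set[Edge], g2: set[Edge]) -> float:
    """Return 0.0 for identical edge sets and 1.0 for disjoint ones."""
    union = g1 | g2
    if not union:
        return 0.0
    return len(g1 ^ g2) / len(union)

molecule_a = {("c1", "bond", "c2"), ("c2", "bond", "o1")}
molecule_b = {("c1", "bond", "c2"), ("c2", "bond", "n1")}
print(edge_set_distance(molecule_a, molecule_b))  # 2/3 ~ 0.67
```

Such a function is easy to write but ignores graph structure beyond shared edges, which is precisely the kind of application-specific shortcut the overview aims to put into a broader context.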
The need for domain ontologies in mission-critical applications such as risk management and hazard identification is becoming more and more pressing. Most research on ontology learning conducted in academia remains unrealistic for real-world applications. One of the main problems is the dependence on non-incremental, rare knowledge and textual resources, and on manually crafted patterns and rules. This paper reports work in progress that aims to address such undesirable dependencies during ontology construction. Initial experiments with a working prototype of the system revealed promising potential for automatically constructing high-quality domain ontologies from real-world texts.
The Semantic Web is an extension of the current Web in which information is represented more meaningfully for humans and computers alike. It enables contents and services to be described in machine-readable form, and it allows annotating, discovering, publishing, advertising and composing services to be automated. It is built on ontologies, which are considered the backbone of the Semantic Web. In other words, the Web is transformed from being merely machine-readable to being machine-understandable. Indeed, an ontology is a key technique with which to annotate semantics and provide a common, comprehensible foundation for resources on the Semantic Web. Moreover, an ontology can provide a common vocabulary and a grammar for publishing data, and can supply a semantic description of data that can be used to preserve the ontologies and keep them ready for inference. This paper presents the basic concepts of web services and the Semantic Web, defines the structure and the main applications of ontologies, and explains many relevant terms in order to provide a basic understanding of ontologies.
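As a minimal, generic illustration of an ontology supplying a shared vocabulary for Semantic Web resources (hypothetical namespace and terms, built with the rdflib library, which the paper does not mention), consider:

```python
# Minimal ontology sketch with rdflib (hypothetical namespace and terms).
from rdflib import Graph, Literal, Namespace
from rdflib.namespace import OWL, RDF, RDFS

EX = Namespace("http://example.org/ontology#")

g = Graph()
g.bind("ex", EX)

# A class, a subclass, and a property with domain and label: the shared
# vocabulary that annotations and services on the Semantic Web can reuse.
g.add((EX.Service, RDF.type, OWL.Class))
g.add((EX.WebService, RDF.type, OWL.Class))
g.add((EX.WebService, RDFS.subClassOf, EX.Service))
g.add((EX.provides, RDF.type, OWL.ObjectProperty))
g.add((EX.provides, RDFS.domain, EX.WebService))
g.add((EX.provides, RDFS.label, Literal("provides")))

print(g.serialize(format="turtle"))
```

Publishing such machine-readable vocabulary is what allows independently developed services and data sources to be annotated, discovered and composed against a common model, and reasoners can draw inferences over it.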
One of the most common mistakes made when performing data analysis is attributing causal meaning to regression coefficients. Formally, a causal effect can only be computed if it is identifiable from a combination of observational data and structural knowledge about the domain under investigation (Pearl, 2000, Ch. 5). Building on the literature of instrumental variables (IVs), a plethora of methods has been developed to identify causal effects in linear systems. Almost invariably, however, the most powerful such methods rely on exponential-time procedures. In this paper, we investigate graphical conditions to allow efficient identification in arbitrary linear structural causal models (SCMs). In particular, we develop a method to efficiently find unconditioned instrumental subsets, which are generalizations of IVs that can be used to tame the complexity of many canonical algorithms found in the literature. Further, we prove that determining whether an effect can be identified with TSID (Weihs et al., 2017), a method more powerful than unconditioned instrumental sets and other efficient identification algorithms, is NP-Complete. Finally, building on the idea of flow constraints, we introduce a new and efficient criterion called Instrumental Cutsets (IC), which is able to solve for parameters missed by all other existing polynomial-time algorithms.
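As a small numeric illustration of the basic instrumental-variable idea the paper builds on (a generic example, not the paper's Instrumental Cutsets criterion), the sketch below simulates a linear SCM Z -> X -> Y with an unobserved confounder of X and Y; the naive regression of Y on X is biased, while the IV estimate cov(Z, Y)/cov(Z, X) recovers the true coefficient:

```python
# Generic IV illustration in a linear SCM (not the paper's IC criterion).
# Structure: Z -> X -> Y, with unobserved U confounding X and Y.
import numpy as np

rng = np.random.default_rng(0)
n = 200_000
beta = 1.5                      # true causal effect of X on Y

Z = rng.normal(size=n)          # instrument: affects Y only through X
U = rng.normal(size=n)          # unobserved confounder
X = 0.8 * Z + 1.0 * U + rng.normal(size=n)
Y = beta * X + 2.0 * U + rng.normal(size=n)

# Naive OLS slope of Y on X is biased by the confounder U.
naive = np.cov(X, Y)[0, 1] / np.var(X, ddof=1)

# IV estimate: cov(Z, Y) / cov(Z, X) is consistent for beta.
iv = np.cov(Z, Y)[0, 1] / np.cov(Z, X)[0, 1]

print(f"naive OLS: {naive:.3f}, IV: {iv:.3f}, true: {beta}")
```

In larger models the difficulty is not the arithmetic but finding valid instruments (or their generalizations) among many candidate structures, which is where the complexity and decidability questions studied in the paper arise.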