
TruthDiscover: Resolving Object Conflicts on Massive Linked Data

Posted by Wenqiang Liu
Publication date: 2016
Research field: Informatics Engineering
Paper language: English





Considerable effort has been made to increase the scale of Linked Data. However, because of the openness of the Semantic Web and the ease of extracting Linked Data from semi-structured sources (e.g., Wikipedia) and unstructured sources, many Linked Data sources often provide conflicting objects for a certain predicate of a real-world entity. Existing methods cannot be trivially extended to resolve conflicts in Linked Data because Linked Data has a scale-free property. In this demonstration, we present a novel system, TruthDiscover, that identifies the truth in Linked Data with a scale-free property. First, TruthDiscover leverages the topological properties of the Source Belief Graph to estimate the prior beliefs of sources, which are used to smooth the trustworthiness of sources. Second, a Hidden Markov Random Field is used to model interdependencies among objects so that their trust values can be estimated accurately. TruthDiscover can visualize the process of resolving conflicts in Linked Data. Experimental results on four datasets show that TruthDiscover exhibits satisfactory accuracy when confronted with data having a scale-free property.
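At a high level, this style of truth discovery alternates between estimating object confidence from source trustworthiness and re-estimating source trustworthiness from object confidence, with the topology-derived prior belief used to smooth the trustworthiness estimates. The Python sketch below illustrates that loop under simplifying assumptions; the claims and prior inputs, the alpha smoothing weight, and the function name are hypothetical, and the actual TruthDiscover system replaces the plain weighted-voting step with a Hidden Markov Random Field over the objects.

from collections import defaultdict

def truth_discovery(claims, prior, alpha=0.5, iterations=20):
    """Iterative truth discovery with prior-belief smoothing (simplified sketch).

    claims: {source: {(entity, predicate): object}}  -- conflicting assertions
    prior:  {source: prior belief in [0, 1]}         -- e.g. derived from a source graph
    """
    sources = list(claims)
    trust = {s: prior.get(s, 0.5) for s in sources}

    # Group the candidate objects that the sources claim for each (entity, predicate).
    candidates = defaultdict(set)
    for s in sources:
        for key, obj in claims[s].items():
            candidates[key].add(obj)

    confidence = {}
    for _ in range(iterations):
        # Object confidence: total trustworthiness of the supporting sources,
        # normalized so the conflicting objects of one key form a distribution.
        for key, objs in candidates.items():
            scores = {o: sum(trust[s] for s in sources if claims[s].get(key) == o)
                      for o in objs}
            total = sum(scores.values()) or 1.0
            for o in objs:
                confidence[(key, o)] = scores[o] / total

        # Source trustworthiness: average confidence of the source's claims,
        # smoothed toward its prior belief (alpha is the smoothing weight).
        for s in sources:
            avg = (sum(confidence[(k, v)] for k, v in claims[s].items())
                   / max(len(claims[s]), 1))
            trust[s] = alpha * prior.get(s, 0.5) + (1 - alpha) * avg

    # Resolve each conflict by keeping the highest-confidence object.
    truths = {key: max(objs, key=lambda o: confidence[(key, o)])
              for key, objs in candidates.items()}
    return trust, truths

Smoothing toward the prior keeps long-tail sources, which provide very few claims under a scale-free distribution, from being scored on too little evidence.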


Read also

Considerable effort has been made to increase the scale of Linked Data. However, an inevitable problem when integrating data from multiple sources is that different sources often provide conflicting objects for a certain predicate of the same real-world entity, the so-called object conflicts problem. Currently, the object conflicts problem has not received sufficient attention in the Linked Data community. In this paper, we first formalize the object conflicts resolution problem as computing the joint distribution of variables on a heterogeneous information network called the Source-Object Network, which captures all correlations among objects and Linked Data sources. Then, we introduce a novel approach based on network effects, called ObResolution (Object Resolution), to identify a true object from multiple conflicting objects. ObResolution adopts a pairwise Markov Random Field (pMRF) to model all evidence under a unified framework. Extensive experimental results on six real-world datasets show that our method exhibits higher accuracy than existing approaches and that it is robust and consistent across various domains. Keywords: Linked Data, Object Conflicts, Linked Data Quality, Truth Discovery.
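As a rough intuition for the network-effects idea, source reliability and object truth can be treated as mutually reinforcing scores on the bipartite Source-Object Network. The sketch below shows such a fixed-point iteration under assumed inputs (the support dictionary and function name are illustrative); the paper's ObResolution instead performs inference in a pairwise Markov Random Field over this network rather than this simple iteration.

def resolve_objects(support, iterations=30):
    """Mutual-reinforcement sketch on a bipartite Source-Object graph.

    support: {object_id: set of source_ids claiming that object}
    """
    sources = {s for srcs in support.values() for s in srcs}
    truth = {o: 0.5 for o in support}          # belief that an object is true
    reliability = {s: 0.5 for s in sources}    # belief that a source is reliable

    for _ in range(iterations):
        # An object looks true when reliable sources support it.
        for o, srcs in support.items():
            truth[o] = sum(reliability[s] for s in srcs) / len(srcs)
        # A source looks reliable when the objects it claims look true.
        for s in sources:
            claimed = [o for o, srcs in support.items() if s in srcs]
            reliability[s] = sum(truth[o] for o in claimed) / len(claimed)

    return truth, reliability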
Linked Open Data (LOD) is the publicly available RDF data in the Web. Each LOD entity is identified by a URI and accessible via HTTP. LOD encodes global-scale knowledge potentially available to any human as well as to artificial intelligence that may want to use it as background knowledge for supporting its tasks. LOD has emerged as the backbone of applications in diverse fields such as Natural Language Processing, Information Retrieval, Computer Vision, Speech Recognition, and many more. Nevertheless, regardless of the specific tasks that LOD-based tools aim to address, the reuse of such knowledge may be challenging for diverse reasons, e.g. semantic heterogeneity, provenance, and data quality. As aptly stated by Heath et al., Linked Data might be outdated, imprecise, or simply wrong, which raises the necessity of investigating the problem of Linked Data validity. This work reports a collaborative effort performed by nine teams of students, guided by an equal number of senior researchers, attending the International Semantic Web Research School (ISWS 2018), towards addressing this investigation from different perspectives coupled with different approaches to tackle the issue.
Object detectors are typically learned from fully annotated training data with fixed, pre-defined categories. However, not all categories of interest can be known beforehand, and classes often need to be added progressively in many realistic applications. In such a scenario, only the original training set annotated with the old classes and some new training data labeled with the new classes are available. Based on these limited datasets and without extra manual labor, a unified detector that can handle all categories is strongly needed. Plain joint training leads to heavy biases and poor performance due to the incomplete annotations. To avoid this situation, we propose a practical framework in this paper. A conflict-free loss is designed to avoid label ambiguity, leading to an acceptable detector in one training round. To further improve performance, we propose a retraining phase in which Monte Carlo Dropout is employed to calculate the localization confidence, combined with the classification confidence, to mine more accurate bounding boxes, and an overlap-weighted method is employed to make better use of pseudo annotations during retraining and obtain more powerful detectors. Extensive experiments conducted on multiple datasets demonstrate the effectiveness of our framework for category-extended object detectors.
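One common way to avoid label ambiguity when each training subset is annotated with only some of the categories is to restrict the classification loss to the categories actually labeled for a given sample, so unlabeled classes never receive a contradictory negative signal. The PyTorch sketch below illustrates that idea; the function name, arguments, and masking scheme are illustrative assumptions, and the paper's actual conflict-free loss may be defined differently.

import torch
import torch.nn.functional as F

def conflict_free_classification_loss(logits, targets, annotated_classes):
    """Cross-entropy restricted to the categories annotated for this sample.

    logits:            (N, C) class scores for N proposals (class 0 = background)
    targets:           (N,)   labels drawn from annotated_classes or background
    annotated_classes: indices of the categories labeled in this training subset
    """
    # Keep only background plus the annotated categories, so categories that are
    # unlabeled in this subset contribute nothing to the loss.
    keep = torch.tensor(sorted({0, *annotated_classes}), device=logits.device)
    remap = {int(c): i for i, c in enumerate(keep)}
    restricted_logits = logits[:, keep]
    restricted_targets = torch.tensor([remap[int(t)] for t in targets],
                                      device=logits.device)
    return F.cross_entropy(restricted_logits, restricted_targets)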
The distribution of string tension along the contact line between an ideal string and a massive pulley is a frequently discussed but incompletely posed problem that confronts students in introductory mechanics. We highlight ambiguities in the usual presentation of this problem via the massive Atwood machine and discuss two compact resolutions that treat situations where the pulley or the string elastically deforms. We propose experiments that can be developed in an intermediate laboratory to determine the tension profile.
Graph data models have recently become popular owing to their applications, e.g., in social networks and the semantic web. Typical navigational query languages over graph databases - such as Conjunctive Regular Path Queries (CRPQs) - cannot express relevant properties of the interaction between the underlying data and the topology. Two languages have been recently proposed to overcome this problem: walk logic (WL) and regular expressions with memory (REM). In this paper, we begin by investigating fundamental properties of WL and REM, i.e., complexity of evaluation problems and expressive power. We first show that the data complexity of WL is nonelementary, which rules out its practicality. On the other hand, while REM has low data complexity, we point out that many natural data/topology properties of graphs expressible in WL cannot be expressed in REM. To this end, we propose register logic, an extension of REM, which we show to be able to express many natural graph properties expressible in WL, while at the same time preserving the elementariness of data complexity of REMs. It is also incomparable to WL in terms of expressive power.