
Evaluation and exploitation of knowledge robustness in knowledge-based systems

Posted by Michel Martinez
Publication date: 2008
Research field: Informatics Engineering
Paper language: English
Authors: M. Barcikowski





Industrial knowledge is complex, difficult to formalize, and highly dynamic because techniques and technologies evolve continuously. Verifying the validity of the knowledge base only at the time it is built is therefore not sufficient: to remain exploitable, the knowledge must be usable under conditions that differ (slightly) from those in which it was formalized. It thus becomes vital for the company to evaluate, on a permanent basis, the quality of the industrial knowledge implemented in the system. This evaluation is based on the robustness of knowledge formalized as conceptual graphs, and the evaluation method is supported by a computerized tool.
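A minimal sketch of the underlying idea, assuming knowledge items are stored as small conceptual graphs (concept nodes linked by typed relations) and that robustness can be approximated by the fraction of a graph's relations that still hold in a slightly different usage context. The class and function names below are illustrative and are not the authors' tool:

# Toy conceptual-graph robustness check (illustrative only).
from dataclasses import dataclass, field

@dataclass(frozen=True)
class Relation:
    subject: str    # concept node, e.g. "MillingTool"
    relation: str   # relation type, e.g. "requires"
    obj: str        # concept node, e.g. "Coolant"

@dataclass
class ConceptualGraph:
    relations: set = field(default_factory=set)

def robustness(graph: ConceptualGraph, context: ConceptualGraph) -> float:
    """Fraction of the graph's relations that still hold in a (slightly)
    different usage context; 1.0 means the knowledge is reusable as-is."""
    if not graph.relations:
        return 1.0
    supported = sum(1 for r in graph.relations if r in context.relations)
    return supported / len(graph.relations)

# Illustrative usage with made-up industrial facts.
formalised = ConceptualGraph({
    Relation("MillingTool", "requires", "Coolant"),
    Relation("MillingTool", "madeOf", "Carbide"),
})
new_context = ConceptualGraph({
    Relation("MillingTool", "madeOf", "Carbide"),
})
print(robustness(formalised, new_context))  # only half the relations survive -> 0.5

In this toy case the formalized knowledge keeps only half of its relations when the context changes, so its robustness score drops to 0.5 and would flag it for review.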




Read also

We propose a simple and effective method for machine translation evaluation which does not require reference translations. Our approach is based on (1) grounding the entity mentions found in each source sentence and candidate translation against a large-scale multilingual knowledge base, and (2) measuring the recall of the grounded entities found in the candidate vs. those found in the source. Our approach achieves the highest correlation with human judgements on 9 out of the 18 language pairs from the WMT19 benchmark for evaluation without references, which is the largest number of wins for a single evaluation method on this task. On 4 language pairs, we also achieve higher correlation with human judgements than BLEU. To foster further research, we release a dataset containing 1.8 million grounded entity mentions across 18 language pairs from the WMT19 metrics track data.
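A rough sketch of the recall computation described above: ground the entities in the source and in the candidate against a shared knowledge base, then score the candidate by the fraction of source entities it recovers. The link_entities function is a placeholder for a real multilingual entity linker; it is an assumption, not part of the released data or code:

# Reference-free MT scoring by grounded-entity recall (sketch).
def link_entities(text: str, lang: str) -> set:
    """Hypothetical entity linker: returns the set of KB entity IDs
    (e.g. Wikidata QIDs) mentioned in the text."""
    raise NotImplementedError("plug in a multilingual entity linker here")

def grounded_entity_recall(source: str, candidate: str,
                           src_lang: str, tgt_lang: str) -> float:
    src_entities = link_entities(source, src_lang)
    cand_entities = link_entities(candidate, tgt_lang)
    if not src_entities:
        return 1.0  # nothing to recover, so the candidate cannot lose entities
    return len(src_entities & cand_entities) / len(src_entities)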
For decades, researchers in fields such as the natural and social sciences have been verifying causal relationships and investigating hypotheses that are now well established or understood as truth. These causal mechanisms are properties of the natural world, and thus are invariant conditions regardless of the collection domain or environment. We show in this paper how prior knowledge in the form of a causal graph can be utilized to guide model selection, i.e., to identify from a set of trained networks the models that are the most robust and invariant to unseen domains. Our method incorporates prior knowledge (which can be incomplete) as a Structural Causal Model (SCM) and calculates a score based on the likelihood of the SCM given the target predictions of a candidate model and the provided input variables. We show on both publicly available and synthetic datasets that our method is able to identify more robust models in terms of generalizability to unseen out-of-distribution test examples and domains where covariates have shifted.
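A hedged sketch of that selection idea, assuming a linear-Gaussian form for the structural equation of the target variable; the paper's actual SCM scoring may differ. Here parents_of_target lists the column indices that the causal graph marks as direct causes of the prediction, and both the linearity and the Gaussian noise are simplifying assumptions:

# Causal-graph-guided model selection (illustrative sketch).
import numpy as np
from numpy.linalg import lstsq

def scm_score(X: np.ndarray, y_pred: np.ndarray, parents_of_target: list) -> float:
    """Gaussian log-likelihood of a model's predictions under a linear
    structural equation  y := f(parents) + noise."""
    P = X[:, parents_of_target]
    P = np.hstack([P, np.ones((len(P), 1))])        # add an intercept column
    coef, _, _, _ = lstsq(P, y_pred, rcond=None)    # fit the structural equation
    residuals = y_pred - P @ coef
    sigma2 = residuals.var() + 1e-12                # MLE noise variance
    n = len(y_pred)
    return -0.5 * n * (np.log(2 * np.pi * sigma2) + 1.0)

def select_most_invariant(models, X, parents_of_target):
    """Return the candidate model whose predictions best respect the causal graph."""
    return max(models, key=lambda m: scm_score(X, m.predict(X), parents_of_target))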
J. Xu, 2012
In the global competition, companies are under immense pressure to innovate. The trend towards more knowledge-intensive products and services and the rapid progress of information technologies have aroused huge interest in knowledge management for innovation. However, knowledge management strategies are not widely adopted for innovation in industry, due to the lack of an effective approach for integrating the two. This study aims to help designers innovate more efficiently through an integrated approach to knowledge management. Based on this approach, a prototype of a distributed knowledge management system for innovation is developed. An industrial application is presented, and its initial results indicate the applicability of the approach and the prototype in practice.
Knowledge distillation (KD) has enabled remarkable progress in model compression and knowledge transfer. However, KD requires a large volume of original data or their representation statistics that are not usually available in practice. Data-free KD has recently been proposed to resolve this problem, wherein teacher and student models are fed by a synthetic sample generator trained from the teacher. Nonetheless, existing data-free KD methods rely on fine-tuning of weights to balance multiple losses, and ignore the diversity of generated samples, resulting in limited accuracy and robustness. To overcome this challenge, we propose robustness and diversity seeking data-free KD (RDSKD) in this paper. The generator loss function is crafted to produce samples with high authenticity, class diversity, and inter-sample diversity. Without real data, the objectives of seeking high sample authenticity and class diversity often conflict with each other, causing frequent loss fluctuations. We mitigate this by exponentially penalizing loss increments. With MNIST, CIFAR-10, and SVHN datasets, our experiments show that RDSKD achieves higher accuracy with more robustness over different hyperparameter settings, compared to other data-free KD methods such as DAFL, MSKD, ZSKD, and DeepInversion.
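A small illustration of the exponential penalty on loss increments mentioned above, under our own naming; the exact loss terms and weighting used in RDSKD may differ, so treat this purely as a sketch of the smoothing mechanism:

# Exponential penalty on generator-loss increments (sketch).
import math

def damped_loss(current_loss: float, previous_loss: float,
                penalty_scale: float = 1.0) -> float:
    """Add an exponentially growing penalty whenever the loss increases,
    discouraging large oscillations between conflicting objectives."""
    increment = current_loss - previous_loss
    if increment <= 0:
        return current_loss                          # loss decreased: no penalty
    return current_loss + penalty_scale * (math.exp(increment) - 1.0)

# Toy usage inside a training loop.
prev = None
for step_loss in [2.0, 1.5, 1.9, 1.4]:
    effective = step_loss if prev is None else damped_loss(step_loss, prev)
    prev = step_loss
    print(round(effective, 3))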
Zhiruo Wang, Renfen Hu, 2020
Recent NLP tasks have benefited greatly from pre-trained language models (LMs), since they are able to encode knowledge of various aspects. However, current LM evaluations focus on downstream performance and hence fail to comprehensively inspect in which aspects, and to what extent, knowledge has been encoded. This paper addresses both questions by proposing four tasks on syntactic, semantic, commonsense, and factual knowledge, aggregating to a total of 39,308 questions covering both linguistic and world knowledge in Chinese. Throughout our experiments, the probes and knowledge data prove to be a reliable benchmark for evaluating pre-trained Chinese LMs. Our work is publicly available at https://github.com/ZhiruoWang/ChnEval.