
A Reason Maintenance System Dealing with Vague Data

Posted by B. Fringuelli
Publication date: 2013
Research field: Informatics Engineering
Paper language: English





A reason maintenance system that extends an ATMS with Mukaidono's fuzzy logic is described. It supports a problem solver in situations affected by incomplete information and vague data by allowing nonmonotonic inferences and by revising previous conclusions when contradictions are detected.
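To make the idea concrete, the following Python sketch shows the kind of bookkeeping such a fuzzy reason maintenance system performs: belief degrees in [0, 1] are propagated through justifications and a contradiction is flagged when a proposition and its negation are both "more true than false". The class names, the min/max combination rule, and the 0.5 threshold are illustrative assumptions, not the paper's actual implementation.

```python
# Illustrative sketch only: fuzzy degrees propagated over justifications,
# with a contradiction flagged when a node and its negation both exceed 0.5.

class Node:
    def __init__(self, name, degree=0.0):
        self.name = name
        self.degree = degree          # fuzzy belief in [0, 1]; assumptions set this directly
        self.justifications = []      # list of (antecedent Nodes, rule strength in [0, 1])

def propagate(nodes):
    """Raise each node's degree to the best support offered by its justifications:
    min over a justification's antecedents and rule strength, max over justifications."""
    changed = True
    while changed:
        changed = False
        for node in nodes:
            for antecedents, strength in node.justifications:
                support = min([strength] + [a.degree for a in antecedents])
                if support > node.degree:
                    node.degree, changed = support, True

def contradicts(node, negated_node, threshold=0.5):
    """A contradiction arises when a proposition and its negation are both
    'more true than false'; the problem solver must then retract an assumption."""
    return node.degree > threshold and negated_node.degree > threshold
```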


Read also

We consider the problem of answering queries about formulas of first-order logic based on background knowledge partially represented explicitly as other formulas, and partially represented as examples independently drawn from a fixed probability distribution. PAC semantics, introduced by Valiant, is one rigorous, general proposal for learning to reason in formal languages: although weaker than classical entailment, it allows for a powerful model theoretic framework for answering queries while requiring minimal assumptions about the form of the distribution in question. To date, however, the most significant limitation of that approach, and more generally most machine learning approaches with robustness guarantees, is that the logical language is ultimately essentially propositional, with finitely many atoms. Indeed, the theoretical findings on the learning of relational theories in such generality have been resoundingly negative. This is despite the fact that first-order logic is widely argued to be most appropriate for representing human knowledge. In this work, we present a new theoretical approach to robustly learning to reason in first-order logic, and consider universally quantified clauses over a countably infinite domain. Our results exploit symmetries exhibited by constants in the language, and generalize the notion of implicit learnability to show how queries can be computed against (implicitly) learned first-order background knowledge.
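As a rough, propositional illustration of the PAC-semantics acceptance rule the abstract refers to (not the paper's first-order procedure), a query is accepted when it holds on at least a (1 − ε) fraction of the sampled examples; the function names and the toy example below are simplifications.

```python
# Simplified, propositional illustration of (1 - eps)-validity under PAC semantics.

def pac_decide(query, examples, eps=0.05):
    """Accept `query` (a predicate over an example) if it holds on at least
    a (1 - eps) fraction of examples drawn from the unknown distribution."""
    hits = sum(1 for example in examples if query(example))
    return hits >= (1.0 - eps) * len(examples)

# Toy example: examples are dicts of observed atoms; the query asks whether
# the implication bird -> flies is (1 - eps)-valid on the sample.
examples = [{"bird": True, "flies": True}, {"bird": False, "flies": False},
            {"bird": True, "flies": True}, {"bird": True, "flies": False}]
query = lambda e: (not e["bird"]) or e["flies"]
print(pac_decide(query, examples, eps=0.3))   # True: the implication holds on 3/4 of examples
```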
Haozheng Luo, Ruiyang Qin (2020)
People with visual impairments urgently need help, not only with basic tasks such as guiding and retrieving objects, but also with advanced tasks like picturing new environments. More than a guide dog, they might want devices able to provide linguistic interaction. Building on various research literature, we aim to conduct research on the interaction between a robot agent and visually impaired people. The robot agent, applying VQA techniques, is able to analyze the environment, process and understand spoken questions, and provide feedback to the human user. In this paper, we discuss the questions related to this kind of interaction, the techniques we used in this work, and how we conducted our research.
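A hedged sketch of the interaction step described above, assuming off-the-shelf components: the transformers visual-question-answering pipeline stands in for the robot's VQA module, and the speech recognition/synthesis hooks are left as placeholder comments; none of this is the authors' actual system.

```python
# Sketch only: a VQA-based answer step for a camera frame and a user's question.
# The specific model and the speech I/O hooks are assumptions, not the paper's system.
from transformers import pipeline

vqa = pipeline("visual-question-answering",
               model="dandelin/vilt-b32-finetuned-vqa")

def answer_user_question(frame_path: str, question: str) -> str:
    """Return the top-ranked VQA answer for the current camera frame."""
    predictions = vqa(image=frame_path, question=question)
    return predictions[0]["answer"]

# In the robot loop (placeholders): question = speech_to_text(mic_audio),
# frame_path = capture_camera(), then text_to_speech(answer_user_question(frame_path, question)).
```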
The advent of wide-field sky surveys has led to the growth of transient and variable source discoveries. The data deluge produced by these surveys has necessitated the use of machine learning (ML) and deep learning (DL) algorithms to sift through the vast incoming data stream. A problem that arises in real-world applications of learning algorithms for classification is imbalanced data, where a class of objects within the data is underrepresented, leading to a bias for over-represented classes in the ML and DL classifiers. We present a recurrent neural network (RNN) classifier that takes in photometric time-series data and additional contextual information (such as distance to nearby galaxies and on-sky position) to produce real-time classification of objects observed by the Gravitational-wave Optical Transient Observer (GOTO), and use an algorithm-level approach for handling imbalance with a focal loss function. The classifier is able to achieve an Area Under the Curve (AUC) score of 0.972 when using all available photometric observations to classify variable stars, supernovae, and active galactic nuclei. The RNN architecture allows us to classify incomplete light curves, and measure how performance improves as more observations are included. We also investigate the role that contextual information plays in producing reliable object classification.
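A small PyTorch sketch of the two ingredients named above: a recurrent classifier over photometric sequences with contextual features concatenated before the output layer, and a focal loss to down-weight easy, over-represented classes. Layer sizes, feature counts, and the gamma value are illustrative, not GOTO's actual configuration.

```python
# Illustrative PyTorch sketch: GRU over light-curve points, contextual features
# concatenated before the output layer, and a focal loss for class imbalance.
import torch
import torch.nn as nn
import torch.nn.functional as F

class FocalLoss(nn.Module):
    def __init__(self, gamma=2.0, weight=None):
        super().__init__()
        self.gamma = gamma
        self.weight = weight                         # optional per-class weights

    def forward(self, logits, targets):
        ce = F.cross_entropy(logits, targets, weight=self.weight, reduction="none")
        pt = torch.exp(-ce)                          # model's probability for the true class
        return ((1.0 - pt) ** self.gamma * ce).mean()

class LightCurveRNN(nn.Module):
    def __init__(self, n_point_features=3, n_context=2, hidden=64, n_classes=3):
        super().__init__()
        self.gru = nn.GRU(n_point_features, hidden, batch_first=True)
        self.head = nn.Linear(hidden + n_context, n_classes)

    def forward(self, sequence, context):
        _, h = self.gru(sequence)                    # h: (1, batch, hidden)
        return self.head(torch.cat([h[-1], context], dim=1))

# model = LightCurveRNN(); criterion = FocalLoss(gamma=2.0)
# logits = model(batch_of_sequences, batch_of_context); loss = criterion(logits, labels)
```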
Missing data are a common problem in experimental and observational physics. They can be caused by various sources: an instrument's saturation, contamination from an external event, or data loss. In particular, they can have a disastrous effect when one is seeking to characterize a colored-noise-dominated signal in Fourier space, since they create a spectral leakage that can artificially increase the noise. It is therefore important to either take them into account or to correct for them prior to, e.g., a least-squares fit of the signal to be characterized. In this paper, we present an application of the inpainting algorithm to mock MICROSCOPE data; inpainting is based on a sparsity assumption, and has already been used in various astrophysical contexts; MICROSCOPE is a French Space Agency mission, whose launch is expected in 2016, that aims to test the Weak Equivalence Principle down to the $10^{-15}$ level. We then explore the inpainting dependence on the number of gaps and the total fraction of missing values. We show that, in a worst-case scenario, after reconstructing missing values with inpainting, a least-squares fit may allow us to significantly measure a $1.1 \times 10^{-15}$ Equivalence Principle violation signal, which is sufficiently close to the MICROSCOPE requirements to implement inpainting in the official MICROSCOPE data processing and analysis pipeline. Together with the previously published KARMA method, inpainting will then allow us to independently characterize and cross-check an Equivalence Principle violation signal detection down to the $10^{-15}$ level.
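A hedged, one-dimensional sketch of sparsity-based inpainting of gapped data (the MICROSCOPE pipeline uses a more elaborate algorithm): missing samples are filled by iterating a hard threshold on DCT coefficients while keeping the observed samples fixed. The choice of transform, the threshold schedule, and the iteration count are illustrative assumptions.

```python
# Illustrative sketch: fill gaps in a 1-D time series by iterative hard
# thresholding of DCT coefficients (a simple sparsity prior), keeping
# observed samples fixed at every iteration.
import numpy as np
from scipy.fft import dct, idct

def inpaint_sparse(y, mask, n_iter=100):
    """y: data with gaps; mask: True where a sample was actually observed."""
    x = np.where(mask, y, 0.0)
    lam_max = np.abs(dct(x, norm="ortho")).max()
    for i in range(n_iter):
        coeffs = dct(x, norm="ortho")
        lam = lam_max * (1.0 - (i + 1) / n_iter)   # linearly decreasing threshold
        coeffs[np.abs(coeffs) < lam] = 0.0         # keep only the largest coefficients
        estimate = idct(coeffs, norm="ortho")
        x = np.where(mask, y, estimate)            # re-impose the observed samples
    return x
```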
Different ways of dealing with one-dimensional (1D) spectra, measured e.g. in Compton scattering or angular correlation of positron annihilation radiation (ACAR) experiments, are presented. Using divalent hexagonal close-packed metals as an example, it is shown what kind of information on the electronic structure can be obtained from 1D profiles interpreted in terms of either 2D or 3D momentum densities. 2D and 3D densities are reconstructed from merely two and seven 1D profiles, respectively. The applied reconstruction techniques are particular solutions of the Radon transform expressed in terms of orthogonal Gegenbauer polynomials. We propose a modification of these techniques connected with a so-called two-step reconstruction. The analysis is performed in both the extended p and reduced k zone schemes. It is demonstrated that if the positron wave function or many-body effects are strongly momentum dependent, the analysis of 2D densities folded into k space may lead to wrong conclusions concerning the Fermi surface. In the case of 2D ACAR data in Mg we found very strong many-body effects. PACS numbers: 71.18.+y, 13.60.Fz, 87.59.Fm
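As a generic illustration of recovering a 2D density from a handful of 1D projections, the sketch below uses standard filtered back-projection from scikit-image; the paper instead expands the Radon inversion in orthogonal Gegenbauer polynomials and exploits lattice symmetry, and the toy density and number of angles here are arbitrary.

```python
# Generic illustration only: reconstruct a toy 2-D density from a handful of
# 1-D projections via filtered back-projection (not the Gegenbauer-polynomial
# method used in the paper).
import numpy as np
from skimage.transform import radon, iradon

density = np.zeros((128, 128))
density[40:90, 50:80] = 1.0                            # toy 2-D "momentum density"
angles = np.linspace(0.0, 180.0, 7, endpoint=False)    # seven 1-D profiles
profiles = radon(density, theta=angles)                # each column is one 1-D projection
reconstruction = iradon(profiles, theta=angles, filter_name="ramp")
```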
