
A Negation Quantum Decision Model to Predict the Interference Effect in Categorization

Added by Qinyuan Wu
Publication date: 2021
Language: English





Categorization is a significant task in decision-making, which is a key part of human behavior. In some cases categorization causes an interference effect that violates the law of total probability. A negation quantum model (NQ model) is developed in this article to predict this interference. Taking advantage of negation to extract additional information from the distribution from a different perspective, the proposed model combines the negation of a probability distribution with the quantum decision model. The phase information contained in quantum probability, together with the way it enters the calculation, readily represents the interference effect. The predictions of the proposed NQ model are close to the real experimental data and show smaller error than existing models.
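A minimal sketch of the two ingredients the abstract names, assuming a Yager-style negation (each outcome receives the normalized mass of the other outcomes) and the standard quantum interference term with a phase angle theta; the exact construction and parameter fitting in the paper may differ:

```python
import numpy as np

def negation(p):
    """Yager-style negation of a discrete probability distribution:
    each outcome's mass becomes the normalized mass of the other
    outcomes, i.e. q_i = (1 - p_i) / (n - 1)."""
    p = np.asarray(p, dtype=float)
    return (1.0 - p) / (p.size - 1)

def quantum_total_probability(p_cat, p_act_given_cat, theta):
    """Quantum-style 'total probability' for a binary categorization task.

    p_cat           : [P(good), P(bad)] category probabilities
    p_act_given_cat : [P(act|good), P(act|bad)] conditional action probabilities
    theta           : phase angle; cos(theta) controls the interference term

    The classical law of total probability is recovered when cos(theta) = 0.
    """
    classical = p_cat[0] * p_act_given_cat[0] + p_cat[1] * p_act_given_cat[1]
    interference = 2.0 * np.sqrt(p_cat[0] * p_act_given_cat[0] *
                                 p_cat[1] * p_act_given_cat[1]) * np.cos(theta)
    return classical + interference

# Example: negate a category distribution, then evaluate the interference.
p_cat = np.array([0.7, 0.3])
neg_cat = negation(p_cat)          # -> [0.3, 0.7] for a binary distribution
print(quantum_total_probability(neg_cat, [0.6, 0.4], theta=np.pi / 3))
```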

Related research

Xiaobo Liu, Su Yang (2021)
Objectives: Functional connectivity triggered by naturalistic stimuli (e.g., movies) and machine learning techniques provide great insight into exploring brain functions such as fluid intelligence. However, functional connectivity is considered to be multi-layered, and traditional machine learning based on individual models is not only limited in performance but also fails to extract multi-dimensional and multi-layered information from the brain network. Methods: In this study, inspired by the multi-layer brain network structure, we propose a new method, Weighted Ensemble-model and Network Analysis, which combines machine learning and graph theory for improved fluid intelligence prediction. First, functional connectivity analysis and graph theory were jointly employed. The functional connectivity and graphical indices computed from the preprocessed fMRI data were then fed in parallel into an auto-encoder for feature extraction to predict fluid intelligence. To improve performance, tree regression and ridge regression models were automatically stacked and fused with weighted values (see the sketch below). Finally, the layers of the auto-encoder were visualized to better illustrate the connectome patterns, followed by an evaluation of the performance to justify the mechanism of brain functions. Results: Our proposed method achieved the best performance, with 3.85 mean absolute deviation, 0.66 correlation coefficient and 0.42 R-squared coefficient, outperforming other state-of-the-art methods. It is also worth noting that the optimization of the biological pattern extraction was automated through the auto-encoder algorithm. Conclusion: The proposed method not only outperforms state-of-the-art reports but is also able to effectively capture the biological patterns from functional connectivity during naturalistic-movie viewing for potential clinical exploration.
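As a rough illustration of the weighted stacking step described above (the paper's auto-encoder features, fold scheme, and weighting rule are not reproduced here), the following scikit-learn sketch fuses a tree regressor and a ridge regressor with weights inversely proportional to their validation error; all data are synthetic stand-ins:

```python
import numpy as np
from sklearn.tree import DecisionTreeRegressor
from sklearn.linear_model import Ridge
from sklearn.model_selection import cross_val_predict

# Hypothetical inputs: rows are subjects, columns are auto-encoder features
# extracted from functional-connectivity and graph-theory measures.
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 32))                # stand-in for encoded connectome features
y = rng.normal(loc=100, scale=15, size=100)   # stand-in for fluid-intelligence scores

tree = DecisionTreeRegressor(max_depth=4, random_state=0)
ridge = Ridge(alpha=1.0)

# Out-of-fold predictions from each base model.
pred_tree = cross_val_predict(tree, X, y, cv=5)
pred_ridge = cross_val_predict(ridge, X, y, cv=5)

# Weight each model by the inverse of its mean absolute error, then fuse.
mae = np.array([np.mean(np.abs(y - pred_tree)), np.mean(np.abs(y - pred_ridge))])
weights = (1.0 / mae) / np.sum(1.0 / mae)
fused = weights[0] * pred_tree + weights[1] * pred_ridge
print("fused MAE:", np.mean(np.abs(y - fused)))
```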
In this paper we study the uses and the semantics of non-monotonic negation in probabilistic deductive databases. Based on the stable semantics for classical logic programming, we introduce the notion of stable formula functions. We show that stable formula functions are minimal fixpoints of operators associated with probabilistic deductive databases with negation. Furthermore, since a probabilistic deductive database may not necessarily have a stable formula function, we provide a stable class semantics for such databases. Finally, we demonstrate that the proposed semantics can handle default reasoning naturally in the context of probabilistic deduction.
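The idea of an operator iterated to a minimal fixpoint can be illustrated with a toy example; the sketch below is not the paper's stable formula-function semantics, only a simple immediate-consequence-style iteration over probabilistic facts with negation, repeated until nothing changes (the rules and values are invented for illustration):

```python
from functools import reduce

# Rules are (head, positive_body, negative_body, rule_probability).
# Negated atoms are read from the previous iterate as 1 - value.
rules = [
    ("bird(tweety)", [], [], 1.0),
    ("flies(tweety)", ["bird(tweety)"], ["penguin(tweety)"], 0.9),
]

def step(values):
    """One application of the operator: recompute each head's value."""
    new = dict(values)
    for head, pos, neg, p in rules:
        pos_val = reduce(lambda a, b: a * b, (values.get(a, 0.0) for a in pos), 1.0)
        neg_val = reduce(lambda a, b: a * b, (1.0 - values.get(a, 0.0) for a in neg), 1.0)
        new[head] = max(new.get(head, 0.0), p * pos_val * neg_val)
    return new

# Iterate from the empty valuation until a fixpoint is reached.
values = {}
while True:
    nxt = step(values)
    if nxt == values:
        break
    values = nxt
print(values)   # {'bird(tweety)': 1.0, 'flies(tweety)': 0.9}
```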
Levy walks are found in the migratory behaviour patterns of various organisms, and the reason for this phenomenon has been much discussed. We use simulations to demonstrate that learning causes changes in confidence level during decision-making in non-stationary environments and results in Levy-walk-like patterns. One inference algorithm involving confidence is Bayesian inference. We propose an algorithm that introduces the effects of learning and forgetting into Bayesian inference, and simulate an imitation game in which two decision-making agents incorporating the algorithm estimate each other's internal models from their opponent's observational data. For forgetting without learning, agent confidence levels remained low due to a lack of information on the counterpart, and Brownian walks occurred for a wide range of forgetting rates. Conversely, when learning was introduced, high confidence levels occasionally occurred even at high forgetting rates, and Brownian walks universally became Levy walks through a mixture of high- and low-confidence states.
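One common way to add learning and forgetting to Bayesian inference, and a plausible reading of the idea above (not necessarily the authors' algorithm), is to discount the posterior pseudo-counts before each update; the sketch below does this for a Beta-Bernoulli belief in a non-stationary environment:

```python
import numpy as np

def discounted_beta_update(alpha, beta, obs, forgetting=0.95):
    """One Bayesian update of a Beta-Bernoulli belief in which the previous
    pseudo-counts are first discounted (forgetting), then incremented by the
    new observation (learning). obs is 0 or 1."""
    alpha = forgetting * alpha + obs
    beta = forgetting * beta + (1 - obs)
    return alpha, beta

rng = np.random.default_rng(1)
alpha, beta = 1.0, 1.0
for t in range(200):
    # Non-stationary environment: the hidden bias flips halfway through.
    p_true = 0.8 if t < 100 else 0.2
    obs = int(rng.random() < p_true)
    alpha, beta = discounted_beta_update(alpha, beta, obs, forgetting=0.9)
    confidence = abs(alpha / (alpha + beta) - 0.5) * 2  # 0 = unsure, 1 = certain
print(round(alpha / (alpha + beta), 3), round(confidence, 3))
```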
Belief function theory provides a flexible way to combine information provided by different sources. This combination is usually followed by decision making, which can be handled by a range of decision rules. Some rules help to choose the most likely hypothesis; others allow a decision to be made on a set of hypotheses. In [6], we proposed a decision rule based on a distance measure. First, in this paper, we aim to demonstrate that our proposed decision rule is a particular case of the rule proposed in [4]. Second, we present experiments showing that our rule is able to decide on a set of hypotheses. Some experiments are run on randomly generated mass functions, others on real databases.
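For readers unfamiliar with the machinery, the sketch below combines two mass functions with Dempster's rule and then decides by Euclidean distance to categorical mass functions; the actual rule in [6] and the distance it uses may differ, and the frame and masses here are invented for illustration:

```python
from itertools import combinations

# Frame of discernment and its non-empty subsets (possible focal elements).
frame = ("a", "b", "c")
subsets = [frozenset(c) for r in range(1, len(frame) + 1)
           for c in combinations(frame, r)]

def dempster(m1, m2):
    """Dempster's rule of combination for two mass functions given as
    {frozenset: mass} dictionaries over the same frame."""
    combined, conflict = {}, 0.0
    for s1, v1 in m1.items():
        for s2, v2 in m2.items():
            inter = s1 & s2
            if inter:
                combined[inter] = combined.get(inter, 0.0) + v1 * v2
            else:
                conflict += v1 * v2
    return {s: v / (1.0 - conflict) for s, v in combined.items()}

def decide_by_distance(m):
    """Pick the subset of hypotheses whose categorical mass function
    (all mass on that subset) is closest to m in Euclidean distance."""
    def dist(target):
        return sum((m.get(s, 0.0) - (1.0 if s == target else 0.0)) ** 2
                   for s in subsets) ** 0.5
    return min(subsets, key=dist)

m1 = {frozenset({"a"}): 0.6, frozenset({"a", "b"}): 0.3, frozenset(frame): 0.1}
m2 = {frozenset({"a"}): 0.5, frozenset({"b"}): 0.2, frozenset(frame): 0.3}
print(decide_by_distance(dempster(m1, m2)))   # -> frozenset({'a'})
```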
We report experimental results on binary categorization of (i) gray color, (ii) speech sounds, and (iii) number discrimination. Data analysis is based on constructing psychometric functions and focusing on their asymptotics. We discuss the transitions between two types of subjects' responses to stimuli presented for two-category classification, e.g., a visualized shade of gray classified as light-gray or dark-gray. The response types are (i) the conscious choice of the non-dominant category, described by the deep tails of the psychometric function, and (ii) subjects' physical errors in recording decisions in cases where the category choice is obvious. The explanation of the results is based on the concept of dual-system decision making. When the choice is obvious, System 1 (fast and automatic) determines subjects' actions, with a higher probability of physical errors than when subjects' decision-making is based on slow, deliberate analysis (System 2). The results provide possible evidence for the hotly debated dual-system theories of cognitive phenomena.
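Constructing a psychometric function whose asymptotes are allowed to differ from 0 and 1, so that the deep tails can absorb lapses and recording errors, can be sketched as follows; the logistic form and the guess/lapse parameters gamma and lam are illustrative assumptions, fitted here to simulated data rather than the paper's experiments:

```python
import numpy as np
from scipy.optimize import curve_fit

def psychometric(x, mu, s, gamma, lam):
    """Logistic psychometric function with lower (gamma) and upper (lam)
    asymptote parameters; non-zero asymptotes capture responses in the
    deep tails, e.g. errors when the category choice is obvious."""
    return gamma + (1.0 - gamma - lam) / (1.0 + np.exp(-(x - mu) / s))

# Simulated binary categorization data: stimulus level vs. proportion of
# "dark-gray" responses (stand-in for real experimental data).
x = np.linspace(0, 1, 11)
rng = np.random.default_rng(2)
p_true = psychometric(x, mu=0.5, s=0.08, gamma=0.03, lam=0.02)
y = rng.binomial(50, p_true) / 50.0

params, _ = curve_fit(psychometric, x, y,
                      p0=[0.5, 0.1, 0.02, 0.02],
                      bounds=([0, 1e-3, 0, 0], [1, 1, 0.2, 0.2]))
print(dict(zip(["mu", "s", "gamma", "lam"], np.round(params, 3))))
```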
