We consider the following model of decision-making by cognitive systems. We present an algorithm -- the quantum-like representation algorithm (QLRA) -- which makes it possible to represent probabilistic data of any origin by complex probability amplitudes. Our conjecture is that cognitive systems have developed the ability to use QLRA: they operate with complex probability amplitudes, i.e., mental wave functions. Since the mathematical formalism of QM also describes (under some generalization) the processing of such quantum-like (QL) mental states, the conventional quantum decision-making scheme can be used by the brain. We consider a modification of this scheme to describe decision-making in the presence of two ``incompatible'' mental variables. Such QL decision-making can be applied to situations such as the Prisoner's Dilemma (PD), as well as to others corresponding to the so-called disjunction effect in psychology and cognitive science.
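To make the core step concrete, here is a minimal Python sketch of a QLRA-style reconstruction for two dichotomous variables, based on the standard interference-of-probabilities formula; the function name, the two-valued setting, and the example numbers are illustrative assumptions, not the paper's code:

```python
import numpy as np

def qlra(p_a, p_b_given_a, p_b):
    """Quantum-like representation (sketch, two-valued case).

    p_a[i]         : probability that variable A takes value alpha_i
    p_b_given_a[i] : conditional probability of B = beta given A = alpha_i
    p_b            : observed marginal probability of B = beta

    Returns a complex amplitude psi with |psi|**2 == p_b, provided the
    interference term yields |cos(theta)| <= 1 (the trigonometric case).
    """
    joint = [p_a[i] * p_b_given_a[i] for i in range(2)]   # classical branches
    interference = p_b - sum(joint)                        # deviation from total probability
    cos_theta = interference / (2.0 * np.sqrt(joint[0] * joint[1]))
    if abs(cos_theta) > 1.0:
        raise ValueError("hyperbolic case: no complex-amplitude representation")
    theta = np.arccos(cos_theta)
    return np.sqrt(joint[0]) + np.exp(1j * theta) * np.sqrt(joint[1])

# Example: data that violates the classical formula of total probability.
psi = qlra(p_a=[0.5, 0.5], p_b_given_a=[0.4, 0.6], p_b=0.55)
print(abs(psi) ** 2)  # ~0.55, recovered via the Born rule
```

The deviation of the observed marginal from the classical formula of total probability fixes the phase $\theta$; when $|\cos\theta| > 1$, no complex (trigonometric) representation exists.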
A rigorous general definition of quantum probability is given that is valid for elementary as well as composite events, for operationally testable as well as inconclusive measurements, and for noncommuting as well as commuting observables. The proposed definition of quantum probability makes it possible to describe quantum measurements and quantum decision making on the same mathematical footing. Conditions are formulated under which quantum decision theory reduces to its classical counterpart, and under which the use of quantum decision theory is necessary.
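For orientation, the elementary special case of any such definition is the standard Born rule, $p = \mathrm{Tr}(\rho P)$, for a density operator $\rho$ and event projector $P$. The sketch below illustrates only this textbook case, not the paper's full definition covering composite events and inconclusive measurements:

```python
import numpy as np

def quantum_probability(rho, projector):
    """Probability of an event represented by a projector P,
    in a state given by a density operator rho: p = Tr(rho P)."""
    return np.real(np.trace(rho @ projector))

# Example: a qubit in the superposition |+> = (|0> + |1>)/sqrt(2)
plus = np.array([1.0, 1.0]) / np.sqrt(2)
rho = np.outer(plus, plus.conj())    # pure-state density matrix
P0 = np.diag([1.0, 0.0])             # projector onto |0>
print(quantum_probability(rho, P0))  # 0.5
```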
Actionable Cognitive Twins are the next generation of Digital Twins, enhanced with cognitive capabilities through a knowledge graph and artificial intelligence models that provide insights and decision-making options to users. The knowledge graph describes the domain-specific knowledge regarding entities and interrelationships in a manufacturing setting. It also contains information on possible decision-making options that can assist decision-makers, such as planners or logisticians. In this paper, we propose a knowledge graph modeling approach to construct actionable cognitive twins that capture knowledge related to demand forecasting and production planning in a manufacturing plant. The knowledge graph provides semantic descriptions and contextualization of the production lines and processes, including the identification of the data, simulations, and artificial intelligence algorithms and forecasts used to support them. These semantics provide the ground for inference, relating different knowledge types: creative, deductive, definitional, and inductive. To develop knowledge graph models that describe the use case completely, a systems thinking approach is proposed to design and verify the ontology, develop the knowledge graph, and build an actionable cognitive twin. Finally, we evaluate our approach in two use cases developed for a European original equipment manufacturer in the automotive industry, as part of the European Horizon 2020 project FACTLOG.
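As a rough illustration of what such a knowledge graph might look like in code, here is a small RDF sketch using the Python rdflib library; the namespace, class names (ProductionLine, AIModel, DecisionOption), and properties are hypothetical placeholders, not the actual FACTLOG ontology terms:

```python
from rdflib import Graph, Namespace, Literal, RDF

# Hypothetical namespace and vocabulary for illustration only.
EX = Namespace("http://example.org/cognitive-twin#")

g = Graph()
g.bind("ex", EX)

# A production line, the forecasting model attached to it, and a
# decision-making option derived from the forecast.
g.add((EX.LineA, RDF.type, EX.ProductionLine))
g.add((EX.LineA, EX.hasCapacity, Literal(1200)))
g.add((EX.DemandForecaster, RDF.type, EX.AIModel))
g.add((EX.DemandForecaster, EX.supports, EX.LineA))
g.add((EX.ReschedulePlan, RDF.type, EX.DecisionOption))
g.add((EX.ReschedulePlan, EX.derivedFrom, EX.DemandForecaster))

# SPARQL inference: which decision options relate to LineA?
q = """
SELECT ?option WHERE {
    ?option a ex:DecisionOption ;
            ex:derivedFrom ?model .
    ?model ex:supports ex:LineA .
}
"""
for row in g.query(q, initNs={"ex": EX}):
    print(row.option)
```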
Thompson sampling and other Bayesian sequential decision-making algorithms are among the most popular approaches to tackle explore/exploit trade-offs in (contextual) bandits. The choice of prior in these algorithms offers flexibility to encode domain knowledge but can also lead to poor performance when misspecified. In this paper, we demonstrate that performance degrades gracefully with misspecification. We prove that the expected reward accrued by Thompson sampling (TS) with a misspecified prior differs by at most $\tilde{\mathcal{O}}(H^2 \epsilon)$ from TS with a well-specified prior, where $\epsilon$ is the total-variation distance between priors and $H$ is the learning horizon. Our bound does not require the prior to have any parametric form. For priors with bounded support, our bound is independent of the cardinality or structure of the action space, and we show that it is tight up to universal constants in the worst case. Building on our sensitivity analysis, we establish generic PAC guarantees for algorithms in the recently studied Bayesian meta-learning setting and derive corollaries for various families of priors. Our results generalize along two axes: (1) they apply to a broader family of Bayesian decision-making algorithms, including a Monte-Carlo implementation of the knowledge gradient algorithm (KG), and (2) they apply to Bayesian POMDPs, the most general Bayesian decision-making setting, encompassing contextual bandits as a special case. Through numerical simulations, we illustrate how prior misspecification and the deployment of one-step look-ahead (as in KG) can impact the convergence of meta-learning in multi-armed and contextual bandits with structured and correlated priors.
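A quick numerical probe of this sensitivity can be run with Beta-Bernoulli Thompson sampling under a deliberately misspecified prior; the arm means, prior parameters, and horizon below are illustrative choices, not the paper's experimental setup:

```python
import numpy as np

rng = np.random.default_rng(0)

def thompson_sampling(true_means, prior_a, prior_b, horizon):
    """Beta-Bernoulli Thompson sampling; prior_a/prior_b may be misspecified."""
    a, b = np.array(prior_a, float), np.array(prior_b, float)
    total_reward = 0.0
    for _ in range(horizon):
        arm = int(np.argmax(rng.beta(a, b)))   # sample from posterior, act greedily
        reward = rng.binomial(1, true_means[arm])
        a[arm] += reward                        # conjugate posterior update
        b[arm] += 1 - reward
        total_reward += reward
    return total_reward

true_means = [0.3, 0.6]
H = 2000
well_specified = thompson_sampling(true_means, [1, 1], [1, 1], H)  # uniform prior
misspecified = thompson_sampling(true_means, [9, 1], [1, 9], H)    # prior favors the wrong arm
print(well_specified, misspecified)
```

Even with a prior that strongly favors the wrong arm, the posterior updates recover, so the reward gap stays modest over the horizon, in line with the graceful-degradation message of the bound.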
We present an experimental illustration of the quantum sensitivity of decision-making machinery. In the decision-making process, we consider the role of available information -- a hint -- and whether it influences the optimal choices. To this end, we consider a machine method of probabilistic decision-making. Our main result shows that, in the decision-making process, our quantum machine is far more sensitive than its classical counterpart to hints, which we categorize as good or poor. This quantum feature originates from the quantum superposition involved in the decision-making process. We also show that the quantum sensitivity persists until the quantum superposition is completely destroyed.
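A toy model of this effect -- not the authors' experimental machinery -- is a two-option decision encoded in a qubit: the hint sets a relative phase, interference converts the phase into a choice bias, and a decoherence parameter gradually destroys the sensitivity. The parameter names in the following Python sketch are assumptions:

```python
import numpy as np

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)   # Hadamard gate (interference step)

def choice_probability(phi, gamma):
    """Probability of choosing option 0 when a 'hint' is encoded as a
    relative phase phi in a superposed state, with residual coherence
    gamma (gamma=1: full superposition, gamma=0: classical mixture)."""
    rho = 0.5 * np.array([[1.0, gamma * np.exp(-1j * phi)],
                          [gamma * np.exp(1j * phi), 1.0]])
    rho_out = H @ rho @ H.conj().T   # interfere the two branches
    return np.real(rho_out[0, 0])    # equals (1 + gamma*cos(phi))/2

for phi in (0.0, np.pi / 2, np.pi):  # good / neutral / poor hint
    print(phi, choice_probability(phi, gamma=1.0), choice_probability(phi, gamma=0.0))
```

With full coherence the choice probability swings from 1 to 0 as the hint phase varies, while the fully decohered (classical) machine is flat at 1/2, mirroring the claim that the sensitivity stems from superposition and vanishes once it is destroyed.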
A quantum vacuum, represented by a viscous fluid, is added to the Einstein vacuum surrounding a spherical distribution of mass. The resulting solution, in spherical coordinates, is a Schwarzschild-like metric. Plots of the $g_{00}$ and $g_{11}$ components of the metric as functions of the radial coordinate display the same qualitative behavior as those of the Schwarzschild metric. However, the temperature of the event horizon is equal to twice the Hawking temperature, while the entropy is equal to half of the Bekenstein entropy.
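For reference, the standard Hawking temperature and Bekenstein entropy of a Schwarzschild horizon, together with the modified values reported here, are
\[
T_{\mathrm{H}} = \frac{\hbar c^{3}}{8\pi G M k_{B}}, \qquad T = 2\,T_{\mathrm{H}}, \qquad
S_{\mathrm{BH}} = \frac{k_{B} c^{3} A}{4 G \hbar}, \qquad S = \tfrac{1}{2}\,S_{\mathrm{BH}},
\]
where $A$ is the horizon area; note that the factor of two and the factor of one half cancel in the product $TS$.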