Nature is in constant flux, so animals must account for changes in their environment when making decisions. How animals learn the timescale of such changes and adapt their decision strategies accordingly is not well understood. Recent psychophysical experiments have shown that humans and other animals can achieve near-optimal performance in two-alternative forced choice (2AFC) tasks in dynamically changing environments. Characterizing this performance requires the derivation and analysis of computational models of optimal decision-making policies for such tasks. We review recent theoretical work in this area and discuss how the models compare with subjects' behavior in tasks where the correct choice or evidence quality changes in dynamic, but predictable, ways.
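As a concrete illustration of the kind of normative model reviewed here, the sketch below implements the standard discrete-time log-likelihood-ratio update for a 2AFC task whose correct state switches with a known hazard rate (in the spirit of models such as Glaze et al., 2015); the Gaussian observation model and all parameter values are illustrative assumptions.

```python
# Hedged sketch: optimal evidence accumulation in a changing 2AFC environment.
# The correct state flips with hazard rate h; the prior belief is discounted
# nonlinearly before each new sample is added.
import numpy as np

def discount(L, h):
    """Nonlinear discounting of the prior log-likelihood ratio given hazard rate h."""
    return L + np.log((1 - h) / h + np.exp(-L)) - np.log((1 - h) / h + np.exp(L))

def simulate(n_trials=500, h=0.05, mu=0.5, sigma=1.0, seed=0):
    rng = np.random.default_rng(seed)
    state, L, correct = 1, 0.0, 0
    for _ in range(n_trials):
        if rng.random() < h:                 # environment switches with probability h
            state = -state
        x = rng.normal(state * mu, sigma)    # noisy observation
        llr = 2 * mu * x / sigma**2          # evidence carried by this sample
        L = llr + discount(L, h)             # belief update with leaky prior
        correct += int(np.sign(L) == state)
    return correct / n_trials

print(f"fraction correct: {simulate():.3f}")
```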
What happens in the brain when human beings play games against computers? Here, a simple zero-sum game was conducted to investigate how people make decisions even when they know that their opponent is a computer. The human subject has two choices (a low or a high number) and the computer has two strategies (red or green). When the number selected by the subject meets red, the subject loses that number of points; conversely, the subject gains that number of points if the computer chooses green for the selected number. Both the subject and the computer make their choices at the same time, and subjects were told that the computer chooses red or green at random. During the experiments, electroencephalograph (EEG) signals were recorded from the subjects' brains. From the analysis of the EEG, we find that people mind losses more than gains, and that this effect becomes more pronounced as the gap between loss and gain grows. In addition, the EEG signals are clearly distinguishable before different decisions are made. Significant negative waves are observed across the entire brain region when the participant has a greater expectation for the outcome, and these negative waves are mainly concentrated in the forebrain.
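The payoff rule of this game can be summarized in a few lines of code. The sketch below is a minimal illustration; the specific low and high number values and the red/green probability are assumptions, not the values used in the experiment.

```python
# Hedged sketch of the zero-sum game's payoff rule: the subject picks a low or
# high number, the computer independently picks red or green; red means the
# subject loses that many points, green means a gain of that many points.
import random

def play_round(choice, low=1, high=9, p_red=0.5):
    number = low if choice == "low" else high
    color = "red" if random.random() < p_red else "green"   # computer picks at random
    payoff = -number if color == "red" else number
    return color, payoff

total = 0
for _ in range(10):
    _, payoff = play_round(random.choice(["low", "high"]))
    total += payoff
print("cumulative score:", total)
```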
Decision making for dynamic systems is challenging due to the scale and dynamic nature of such systems, and it comprises decisions at the strategic, tactical, and operational levels. One of the most important aspects of decision making is incorporating real-time information that reflects the immediate status of the system. This type of decision making, which may apply to any dynamic system, must comply with the system's current capabilities and calls for a dynamic data-driven planning framework. The performance of dynamic data-driven planning frameworks relies on the decision-making process, which in turn depends on the quality of the available data. This means that the planning framework should be able to set the level of decision making based on the current status of the system, which is learned through continuous readings of sensory data. In this work, a Markov chain Monte Carlo sampling method is proposed to determine the optimal fidelity of decision making in a dynamic data-driven framework. To evaluate the performance of the proposed method, an experiment is conducted in which the impact of workers' performance on the production capacity and the fidelity level of decision making is studied.
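As a rough illustration of the sampling step described above, the sketch below runs a generic Metropolis-Hastings sampler over a scalar "fidelity" parameter given simulated sensory readings; the Gaussian likelihood, uniform prior, and all numerical settings are assumptions for illustration, not the paper's actual model.

```python
# Hedged sketch: Metropolis-Hastings sampling of a fidelity parameter in [0, 1]
# from noisy sensory readings, standing in for the MCMC step the abstract describes.
import numpy as np

def log_posterior(fidelity, readings, noise=0.5):
    if not 0.0 <= fidelity <= 1.0:                  # uniform prior on [0, 1]
        return -np.inf
    return -0.5 * np.sum((readings - fidelity) ** 2) / noise**2

def metropolis_hastings(readings, n_samples=5000, step=0.05, seed=0):
    rng = np.random.default_rng(seed)
    x, lp = 0.5, log_posterior(0.5, readings)
    samples = []
    for _ in range(n_samples):
        prop = x + rng.normal(0, step)              # symmetric random-walk proposal
        lp_prop = log_posterior(prop, readings)
        if np.log(rng.random()) < lp_prop - lp:     # accept/reject
            x, lp = prop, lp_prop
        samples.append(x)
    return np.array(samples)

readings = np.random.default_rng(1).normal(0.7, 0.5, size=50)   # simulated sensor data
print("posterior mean fidelity:", round(metropolis_hastings(readings).mean(), 3))
```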
Training individuals to make accurate decisions from medical images is a critical component of education in diagnostic pathology. We describe a joint experimental and computational modeling approach to examine the similarities and differences in the cognitive processes of novice participants and experienced participants (pathology residents and pathology faculty) in cancer cell image identification. For this study we collected a bank of hundreds of digital images that were identified by cell type and classified by difficulty by a panel of expert hematopathologists. The key manipulations in our study included examining the speed-accuracy tradeoff as well as the impact of prior expectations on decisions. In addition, our study examined individual differences in decision-making by comparing task performance to domain-general visual ability (as measured using the Novel Object Memory Test (NOMT); Richler et al., 2017). Using Signal Detection Theory (SDT) and the Diffusion Decision Model (DDM), we found many similarities between experts and novices in our task. While experts tended to have better discriminability, the two groups responded similarly to time pressure (i.e., reduced caution under speed instructions in the DDM) and to the introduction of a probabilistic cue (i.e., increased response bias in the DDM). These results have important implications for training in this area as well as for using novice participants in research on medical image perception and decision-making.
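For readers unfamiliar with the Signal Detection Theory quantities mentioned above, the sketch below computes discriminability (d') and the response criterion from hit and false-alarm counts; the example counts are made up for illustration.

```python
# Hedged sketch: standard SDT measures from a yes/no classification task.
from scipy.stats import norm

def sdt_measures(hits, misses, false_alarms, correct_rejections):
    hit_rate = hits / (hits + misses)
    fa_rate = false_alarms / (false_alarms + correct_rejections)
    d_prime = norm.ppf(hit_rate) - norm.ppf(fa_rate)          # discriminability
    criterion = -0.5 * (norm.ppf(hit_rate) + norm.ppf(fa_rate))  # response bias
    return d_prime, criterion

# e.g., an observer who correctly labels 80/100 cancer images but also
# mislabels 30/100 normal images as cancer:
d, c = sdt_measures(80, 20, 30, 70)
print(f"d' = {d:.2f}, criterion = {c:.2f}")
```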
Similar to the intelligent multicellular neural networks that control human brains, even single cells are, surprisingly, able to make intelligent decisions to classify several external stimuli or to associate them. This is because gene regulatory networks can act as perceptrons, simple intelligent schemes known from studies of Artificial Intelligence. We study the role of genetic noise in intelligent decision making at the genetic level and show that noise can play a constructive role, helping cells to make a proper decision. We demonstrate this using the example of a simple genetic classifier able to classify two external stimuli.
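A minimal sketch of the perceptron-like computation referred to above is given below: a two-input classifier with additive noise on the integrated signal. The weights, threshold, and noise level are illustrative assumptions rather than parameters of the genetic circuit studied.

```python
# Hedged sketch: a two-input perceptron of the kind a gene regulatory network
# can implement, with additive "genetic" noise on the weighted input.
import numpy as np

def perceptron(stimuli, weights=(1.0, -1.0), threshold=0.2, noise=0.3, seed=0):
    rng = np.random.default_rng(seed)
    activation = np.dot(weights, stimuli) + rng.normal(0, noise)  # noisy integration
    return int(activation > threshold)          # 1 = "respond", 0 = "ignore"

# Classify two external stimuli: a high first input with a low second input
# should trigger a response, and the reverse pattern should not.
print(perceptron((1.0, 0.1)))   # likely 1
print(perceptron((0.1, 1.0)))   # likely 0
```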
Aerial cinematography is revolutionizing industries that require live and dynamic camera viewpoints such as entertainment, sports, and security. However, safely piloting a drone while filming a moving target in the presence of obstacles is immensely taxing, often requiring multiple expert human operators. Hence, there is demand for an autonomous cinematographer that can reason about both geometry and scene context in real-time. Existing approaches do not address all aspects of this problem; they either require high-precision motion-capture systems or GPS tags to localize targets, rely on prior maps of the environment, plan for short time horizons, or only follow artistic guidelines specified before flight. In this work, we address the problem in its entirety and propose a complete system for real-time aerial cinematography that for the first time combines: (1) vision-based target estimation; (2) 3D signed-distance mapping for occlusion estimation; (3) efficient trajectory optimization for long time-horizon camera motion; and (4) learning-based artistic shot selection. We extensively evaluate our system both in simulation and in field experiments by filming dynamic targets moving through unstructured environments. Our results indicate that our system can operate reliably in the real world without restrictive assumptions. We also provide in-depth analysis and discussions for each module, with the hope that our design tradeoffs can generalize to other related applications. Videos of the complete system can be found at: https://youtu.be/ookhHnqmlaU.
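As a rough illustration of how a signed-distance map can be used for occlusion estimation, the sketch below accumulates penetration depth along the camera-to-actor ray through a voxelized SDF; the grid, resolution, and cost definition are assumptions for illustration, not the system's actual implementation.

```python
# Hedged sketch: occlusion cost between camera and target by sampling a
# signed-distance field (SDF) along the viewing ray.
import numpy as np

def occlusion_cost(sdf, origin, target, n_samples=50, voxel_size=0.5):
    """Average penetration depth (negative SDF values) along the ray."""
    cost = 0.0
    for t in np.linspace(0.0, 1.0, n_samples):
        p = origin + t * (target - origin)
        idx = tuple(np.clip((p / voxel_size).astype(int), 0, np.array(sdf.shape) - 1))
        cost += max(0.0, -sdf[idx])             # penalize points inside obstacles
    return cost / n_samples

# Toy SDF: free space everywhere except a block of obstacle cells.
sdf = np.ones((20, 20, 20))
sdf[8:12, 8:12, :] = -1.0
print(occlusion_cost(sdf, np.array([1.0, 1.0, 1.0]), np.array([9.0, 9.0, 1.0])))
```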