
Your Eyes Say You're Lying: An Eye Movement Pattern Analysis for Face Familiarity and Deceptive Cognition

Added by Zhenyue Qin
Publication date: 2018
Language: English





Eye movement patterns reflect latent internal cognitive activities. We aim to discover eye movement patterns during face recognition under different cognitive conditions of information concealment: telling the truth when observing familiar faces, telling the truth when observing unfamiliar faces, and concealing recognition when observing familiar faces (deception). We apply hidden Markov models with Gaussian emissions to characterize the regions and trajectories of eye fixation points under these three conditions. Our results show that both eye movement patterns and gaze regions differ significantly during deception compared with truth-telling. We demonstrate the feasibility of detecting deception, and of broader cognitive-activity classification, from eye movement patterns.
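For readers who want to experiment with this kind of analysis, the sketch below shows what such a pipeline might look like: one Gaussian-emission hidden Markov model is fit per condition, and a new scan path is assigned to the condition whose model scores it highest. This is a minimal illustration assuming the hmmlearn library, not the authors' implementation; the three-state choice and the data layout are assumptions.

```python
# A minimal sketch (not the authors' code) of fitting Gaussian-emission HMMs
# to eye-fixation sequences, one model per condition, then classifying a new
# scan path by log-likelihood. Assumes hmmlearn; shapes are illustrative.
import numpy as np
from hmmlearn import hmm

def fit_condition_hmm(fixation_sequences, n_states=3, seed=0):
    """Fit one Gaussian HMM to a list of (n_fixations, 2) x/y arrays."""
    X = np.concatenate(fixation_sequences)          # stack all sequences
    lengths = [len(s) for s in fixation_sequences]  # per-sequence lengths
    model = hmm.GaussianHMM(n_components=n_states,
                            covariance_type="full",
                            n_iter=100,
                            random_state=seed)
    model.fit(X, lengths)  # states ~ gaze regions, transitions ~ trajectories
    return model

def classify(scanpath, models):
    """Assign a scan path to the condition whose HMM scores it highest."""
    return max(models, key=lambda cond: models[cond].score(scanpath))

# Hypothetical usage with the paper's three conditions:
# models = {c: fit_condition_hmm(data[c])
#           for c in ("familiar_truth", "unfamiliar_truth", "familiar_deceit")}
# label = classify(new_scanpath, models)
```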



Related research

Ge Ren, Jun Wu, Gaolei Li (2021)
Smartphones and laptops can be unlocked by face or fingerprint recognition, yet neural networks that handle numerous requests every day have little capability to distinguish untrustworthy users from credible ones, which makes a model risky to trade as a commodity. Existing research either focuses on intellectual property rights over the commercialized model or traces the source of a leak after pirated models appear. However, actively verifying a user's legitimacy before producing predictions has not yet been considered. In this paper, we propose Model-Lock (M-LOCK) to realize an end-to-end neural network with local dynamic access control, similar to the automatic locking function of a smartphone, which actively denies useful performance to malicious attackers while the owner is away. Three model training strategies are essential to achieving the dramatic performance divergence between certified and suspect inputs within one neural network. Extensive experiments on the MNIST, FashionMNIST, CIFAR10, CIFAR100, SVHN, and GTSRB datasets demonstrate the feasibility and effectiveness of the proposed scheme.
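The paper's exact training strategies are not reproduced here, but the following minimal sketch illustrates the general locking idea under stated assumptions: inputs stamped with a secret key pattern ("certified") are trained toward their true labels, while unstamped ("suspect") inputs are driven to a useless constant class. The stamping scheme and loss weighting are hypothetical, not taken from the paper.

```python
# A hedged sketch of the general Model-Lock idea: one network behaves normally
# only on inputs carrying a secret key patch, and degrades otherwise.
import torch
import torch.nn.functional as F

def stamp(x, key_patch):
    """Overlay a secret key patch onto the image corner (hypothetical scheme)."""
    x = x.clone()
    x[..., : key_patch.shape[-2], : key_patch.shape[-1]] = key_patch
    return x

def model_lock_step(model, optimizer, x, y, key_patch, lock_class=0):
    certified = stamp(x, key_patch)                  # legitimate input
    loss = F.cross_entropy(model(certified), y)      # behave normally
    suspect_y = torch.full_like(y, lock_class)       # degrade on raw input
    loss = loss + F.cross_entropy(model(x), suspect_y)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```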
Deep neural networks for video-based eye tracking have demonstrated resilience to noisy environments, stray reflections, and low resolution. However, training these networks requires a large number of manually annotated images. To alleviate the cumbersome process of manual labeling, computer graphics rendering is employed to automatically generate a large corpus of annotated eye images under various conditions. In this work, we introduce a synthetic eye image generation platform that improves upon previous work by adding features such as an active deformable iris, an aspherical cornea, retinal retro-reflection, gaze-coordinated eyelid deformations, and blinks. To demonstrate the utility of our platform, we render images reflecting the gaze distributions inherent in two publicly available datasets, NVGaze and OpenEDS. We also report on the performance of two semantic segmentation architectures (SegNet and RITnet) trained on rendered images and tested on the original datasets.
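A natural way to quantify the utility of such rendered data, sketched below under assumptions about the model and data loaders, is to train a segmentation network on synthetic images and report mean intersection-over-union on a real annotated set such as OpenEDS. The four-class layout (background/sclera/iris/pupil) is an assumption for illustration.

```python
# A minimal sketch, not tied to the platform above: evaluate a segmentation
# model (trained on synthetic images) against real, hand-labeled data.
import torch

@torch.no_grad()
def mean_iou(model, real_loader, n_classes=4, device="cpu"):
    inter = torch.zeros(n_classes)
    union = torch.zeros(n_classes)
    for images, masks in real_loader:                 # real annotated frames
        preds = model(images.to(device)).argmax(1).cpu()
        for c in range(n_classes):
            p, t = preds == c, masks == c
            inter[c] += (p & t).sum()                 # per-class intersection
            union[c] += (p | t).sum()                 # per-class union
    return (inter / union.clamp(min=1)).mean().item()
```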
Myopia is an eye condition that makes it difficult for people to focus on faraway objects. It has become one of the most serious eye conditions worldwide and negatively impacts the quality of life of those who suffer from it. Although myopia is prevalent, many non-myopic people have misconceptions about it and find it difficult to empathize with myopic situations and those who experience them. In this research, we developed two virtual reality (VR) games, (1) Myopic Bike and (2) Say Hi, to provide a means for the non-myopic population to experience the frustration and difficulties of myopic people. Our two games simulate two inconvenient daily life scenarios (riding a bicycle and greeting someone on the street) that myopic people encounter when not wearing glasses. We evaluated four participants' game experiences through questionnaires and semi-structured interviews. Overall, our two VR games can create an engaging and non-judgmental experience for the non-myopic population to better understand and empathize with those who suffer from myopia.
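Although the games themselves run in VR, the core visual effect can be approximated outside VR. The snippet below is a hypothetical illustration, not taken from the paper: it blurs an image with a Gaussian filter whose radius grows with simulated defocus, the radius scaling being an assumption.

```python
# A hedged sketch of simulating uncorrected myopic vision on a still image.
from PIL import Image, ImageFilter

def simulate_myopia(path, diopters=2.0, scale=3.0):
    """Blur grows with simulated defocus; the scale factor is illustrative."""
    img = Image.open(path)
    return img.filter(ImageFilter.GaussianBlur(radius=diopters * scale))
```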
Participants in an eye-movement experiment performed a modified version of the Landolt-C paradigm (Williams & Pollatsek, 2007) in which they searched for target squares embedded in linear arrays of spatially contiguous words (i.e., short sequences of squares having missing segments of variable size and orientation). Although the distributions of single- and first-of-multiple fixation locations replicated previous patterns suggesting saccade targeting (e.g., Yan, Kliegl, Richter, Nuthmann, & Shu, 2010), the distribution of all forward fixation locations was uniform, suggesting the absence of specific saccade targets. Furthermore, properties of the words (e.g., gap size) also influenced fixation durations and forward saccade length, suggesting that on-going processing affects decisions about when and where (i.e., how far) to move the eyes. The theoretical implications of these results for existing and future accounts of eye-movement control are discussed.
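The landing-position analysis described above can be made concrete with a short sketch: normalize each fixation's horizontal position by the extent of the fixated "word" and histogram the results, where a flat histogram is consistent with the absence of specific saccade targets. The data structures here are hypothetical.

```python
# A small sketch of a landing-position distribution analysis; a uniform
# histogram would suggest no specific saccade target within the word.
import numpy as np

def landing_position_hist(fixations, words, bins=10):
    """fixations: x positions; words: list of (start_x, end_x) spans."""
    rel = []
    for x in fixations:
        for start, end in words:
            if start <= x < end:
                rel.append((x - start) / (end - start))  # 0 = word onset
                break
    counts, _ = np.histogram(rel, bins=bins, range=(0.0, 1.0))
    return counts / counts.sum()
```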
Vivian Lai, Han Liu, Chenhao Tan (2020)
To support human decision making with machine learning models, we often need to elucidate patterns embedded in the models that are unsalient, unknown, or counterintuitive to humans. While existing approaches focus on explaining machine predictions with real-time assistance, we explore model-driven tutorials that help humans understand these patterns during a training phase. We consider both tutorials with guidelines from scientific papers, analogous to current practices of science communication, and automatically selected examples from training data with explanations. We use deceptive review detection as a testbed and conduct large-scale, randomized human-subject experiments to examine the effectiveness of such tutorials. We find that tutorials indeed improve human performance, both with and without real-time assistance. In particular, although deep learning provides better predictive performance than simple models, tutorials and explanations derived from simple models are more useful to humans. Our work suggests future directions for human-centered tutorials and explanations toward a synergy between humans and AI.
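As a concrete illustration of explanations drawn from a simple model (a sketch under assumptions, not the authors' pipeline), a bag-of-words logistic regression can be trained on labeled reviews and its most heavily weighted terms surfaced as tutorial guidelines:

```python
# A sketch of "explanations from simple models" for deceptive-review
# detection. Assumes scikit-learn and labels where 1 = deceptive; dataset
# variables are placeholders.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
import numpy as np

def tutorial_words(reviews, labels, k=10):
    vec = CountVectorizer(stop_words="english")
    X = vec.fit_transform(reviews)
    clf = LogisticRegression(max_iter=1000).fit(X, labels)
    vocab = np.array(vec.get_feature_names_out())
    order = np.argsort(clf.coef_[0])          # ascending by weight
    return {"signals_truthful": vocab[order[:k]].tolist(),
            "signals_deceptive": vocab[order[-k:]].tolist()}
```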
