
Eye-Movement Control During the Reading of Chinese: An Analysis Using the Landolt-C Paradigm

Added by: Yanping Liu
Publication date: 2015
Field: Biology
Language: English





Participants in an eye-movement experiment performed a modified version of the Landolt-C paradigm (Williams & Pollatsek, 2007) in which they searched for target squares embedded in linear arrays of spatially contiguous "words" (i.e., short sequences of squares having missing segments of variable size and orientation). Although the distributions of single- and first-of-multiple fixation locations replicated previous patterns suggesting saccade targeting (e.g., Yan, Kliegl, Richter, Nuthmann, & Shu, 2010), the distribution of all forward fixation locations was uniform, suggesting the absence of specific saccade targets. Furthermore, properties of the "words" (e.g., gap size) also influenced fixation durations and forward saccade length, suggesting that ongoing processing affects decisions about when and where (i.e., how far) to move the eyes. The theoretical implications of these results for existing and future accounts of eye-movement control are discussed.
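To make the distributional claim above concrete, the following minimal sketch (not the authors' analysis code) generates hypothetical normalized fixation locations and tests each sample against a uniform distribution. The clustered versus flat samples, the Kolmogorov-Smirnov test, and all parameter values are assumptions for illustration only.

```python
# Illustrative sketch: do fixation locations show a preferred landing position,
# or are they spread uniformly across the "word"? Locations are normalized so
# that 0 = word beginning and 1 = word end. All data here are synthetic.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Hypothetical data: single fixations clustered near the word center vs.
# all forward fixations spread evenly across the word.
single_fixations = np.clip(rng.normal(loc=0.5, scale=0.15, size=500), 0, 1)
forward_fixations = rng.uniform(0, 1, size=500)

for label, locs in [("single", single_fixations), ("forward", forward_fixations)]:
    d, p = stats.kstest(locs, "uniform")  # deviation from a flat distribution
    print(f"{label:8s} fixations: KS D = {d:.3f}, p = {p:.3g}")
```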




Read More

Motor imagery (MI) is a mental representation of motor behavior that has been widely used as a control method for brain-computer interfaces (BCIs), allowing communication for the physically impaired. The performance of MI-based BCIs mainly depends on the subject's ability to self-modulate EEG signals. Proper training can help naive subjects learn to modulate brain activity proficiently; however, training typically involves abstract motor tasks and is time-consuming. To improve the performance of naive subjects during motor imagery, a novel paradigm was presented to guide them in modulating brain activity effectively. In this new paradigm, pictures of the left or right hand were used as cues for subjects to complete the motor imagery task. Fourteen healthy subjects (11 male, aged 22-25 years, mean 23.6 +/- 1.16) participated in this study. The task was to imagine writing a Chinese character; specifically, subjects imagined hand movements following the stroke sequence of the character. The paradigm was designed around an action that is effective and familiar for most Chinese people, providing them with a specific, extensively practiced task to help them modulate brain activity. Results showed that the writing-task paradigm yielded significantly better performance than the traditional arrow paradigm (p < 0.001). Questionnaire replies indicated that most subjects found the new paradigm easier and more comfortable. The proposed motor imagery paradigm could thus guide subjects to modulate brain activity effectively, with significant improvements in both classification accuracy and usability.
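As a rough illustration of how MI performance is typically quantified as classification accuracy, here is a minimal sketch of one common pipeline (mu/beta band-power features plus LDA with cross-validation). The study above does not specify its pipeline, so the frequency band, classifier choice, and synthetic data below are assumptions.

```python
# Minimal MI-classification sketch: band-power features in the 8-30 Hz range,
# linear discriminant analysis, and 5-fold cross-validated accuracy.
import numpy as np
from scipy.signal import butter, filtfilt
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

def bandpower_features(epochs, fs=250.0, lo=8.0, hi=30.0):
    """Log band power per channel in the mu/beta range (assumed 8-30 Hz)."""
    b, a = butter(4, [lo / (fs / 2), hi / (fs / 2)], btype="band")
    filtered = filtfilt(b, a, epochs, axis=-1)
    return np.log(np.var(filtered, axis=-1))  # shape: (n_epochs, n_channels)

# Hypothetical data: 100 epochs x 16 channels x 1000 samples, binary labels
# (left- vs. right-hand imagery).
rng = np.random.default_rng(1)
epochs = rng.standard_normal((100, 16, 1000))
labels = rng.integers(0, 2, size=100)

X = bandpower_features(epochs)
acc = cross_val_score(LinearDiscriminantAnalysis(), X, labels, cv=5).mean()
print(f"mean cross-validated accuracy: {acc:.2f}")
```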
206 - Yanping Liu, Huan Wei 2015
The word-based account of saccade targeting, in which saccades are drawn toward the preferred viewing location (PVL) within a word, is supported by two pillars of evidence. The first is the finding that the distribution of initial fixation locations on a word resembles a normal distribution (Rayner, 1979). The other is the finding of a moderate slope coefficient relating launch site to landing site (b = 0.49; see McConkie, Kerr, Reddix, & Zola, 1988). Four simulations of different saccade-targeting strategies and one eye-movement experiment on Chinese reading were conducted to evaluate these two findings. We demonstrated that the current understanding of the word-based account is not conclusive by presenting an alternative word-based targeting strategy and by identifying a problem with the calculation of the slope coefficient. Although almost all computational models of eye-movement control during reading have been built on these two findings, future efforts should be directed at understanding the precise contribution of different saccade-targeting strategies and how their weighting might vary across disparate writing systems.
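The launch-site/landing-site slope mentioned above can be illustrated with a toy simulation (not the authors' simulations): generate landing sites under an assumed word-center targeting strategy with Gaussian oculomotor error and a saccadic-range effect, then regress landing site on launch site. All parameter values are illustrative assumptions, chosen only to show how a nonzero slope arises.

```python
# Toy launch-site/landing-site regression, in the spirit of the b = 0.49
# coefficient reported by McConkie et al. (1988). Everything here is assumed.
import numpy as np

rng = np.random.default_rng(2)
n = 10_000
word_len = 3                                   # characters (Chinese-like word)
launch = -rng.uniform(1, 7, size=n)            # launch site: 1-7 chars left of the word
target = word_len / 2                          # aim at the word center
range_effect = 0.2 * (launch - launch.mean())  # assumed saccadic-range effect
landing = target + range_effect + rng.normal(0, 0.8, size=n)

slope, intercept = np.polyfit(launch, landing, deg=1)
print(f"landing = {slope:.2f} * launch + {intercept:.2f}")
```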
While theories postulating a dual cognitive system take hold, quantitative confirmation is still needed to understand and identify interactions or conflict events between the two systems. Eye movements are among the most direct markers of an individual's attentive load and may serve as an important proxy for this information. In this work we propose a computational method, within a modified visual version of the well-known Stroop test, for the identification of different tasks and potential conflict events between the two systems through the collection and processing of eye-movement data. A statistical analysis shows that the selected variables can characterize the variation of attentive load across different scenarios. Moreover, we show that machine-learning techniques make it possible to distinguish between different tasks with good classification accuracy and to investigate gaze dynamics in more depth.
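Below is a hedged sketch of the kind of classification analysis described above: per-trial eye-movement features are fed to a classifier to separate task or conflict conditions. The paper does not name a specific feature set or classifier, so the features, the random-forest choice, and the synthetic data are assumptions.

```python
# Classify congruent vs. conflict trials from per-trial eye-movement features.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(3)

# Hypothetical per-trial features: mean fixation duration, fixation count,
# mean saccade amplitude, pupil-diameter change.
n_trials = 200
X = rng.standard_normal((n_trials, 4))
y = rng.integers(0, 2, size=n_trials)  # 0 = congruent trial, 1 = conflict trial

clf = RandomForestClassifier(n_estimators=200, random_state=0)
scores = cross_val_score(clf, X, y, cv=5)
print(f"classification accuracy: {scores.mean():.2f} +/- {scores.std():.2f}")
```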
Electrocorticogram (ECoG)-based brain-computer interfaces (BCIs) can potentially control upper extremity prostheses to restore independent function to paralyzed individuals. However, current research is mostly restricted to the offline decoding of finger or 2D arm movement trajectories, and these results are modest. This study seeks to improve the fundamental understanding of the ECoG signal features underlying upper extremity movements to guide better BCI design. Subjects undergoing ECoG electrode implantation performed a series of elementary upper extremity movements in an intermittent flexion and extension manner. It was found that movement velocity, $\dot{\theta}$, had a high positive (negative) correlation with the instantaneous power of the ECoG high-$\gamma$ band (80-160 Hz) during flexion (extension). Also, the correlation was low during idling epochs. Visual inspection of the ECoG high-$\gamma$ band revealed power bursts during flexion/extension events with a waveform that strongly resembles the corresponding flexion/extension event as seen in $\dot{\theta}$. These high-$\gamma$ bursts were present in all elementary movements and were spatially distributed in a somatotopic fashion. Thus, it can be concluded that the high-$\gamma$ power of ECoG strongly encodes movement trajectories and can be used as an input feature in future BCIs.
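The core measurement described above (correlating instantaneous high-gamma power with movement velocity) can be sketched as follows. Only the 80-160 Hz band is taken from the abstract; the sampling rate, filter design, and synthetic signals are assumptions.

```python
# Band-pass an (here synthetic) ECoG trace to 80-160 Hz, take the Hilbert
# envelope as instantaneous power, and correlate it with movement velocity.
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

fs = 1000.0                                     # Hz (assumed sampling rate)
t = np.arange(0, 10, 1 / fs)
rng = np.random.default_rng(4)

# Hypothetical data: velocity bursts during flexion, and an ECoG signal whose
# high-gamma amplitude is modulated by those bursts, plus broadband noise.
velocity = np.abs(np.sin(2 * np.pi * 0.5 * t)) ** 3
ecog = (1 + velocity) * np.sin(2 * np.pi * 120 * t) + rng.standard_normal(t.size)

b, a = butter(4, [80 / (fs / 2), 160 / (fs / 2)], btype="band")
high_gamma = filtfilt(b, a, ecog)
power = np.abs(hilbert(high_gamma)) ** 2        # instantaneous high-gamma power

r = np.corrcoef(power, velocity)[0, 1]
print(f"correlation between high-gamma power and velocity: r = {r:.2f}")
```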
134 - Lucile Rapin 2014
Purpose: Auditory verbal hallucinations (AVHs) are speech perceptions in the absence of external stimulation. An influential theoretical account of AVHs in schizophrenia claims that a deficit in inner-speech monitoring would cause the verbal thoughts of the patient to be perceived as external voices. The account is based on a predictive control model in which verbal self-monitoring is implemented. The aim of this study was to examine lip muscle activity during AVHs in schizophrenia patients in order to check whether inner speech occurred. Methods: Lip muscle activity was recorded during covert AVHs (without articulation) and at rest. Surface electromyography (EMG) was used on eleven schizophrenia patients. Results: Our results show an increase in EMG activity in the orbicularis oris inferior muscle during covert AVHs relative to rest. This increase is not due to general muscular tension, since there was no increase in muscular activity in the forearm muscle. Conclusion: This evidence that AVHs might be self-generated inner speech is discussed in the framework of a predictive control model. Further work is needed to better describe how the inner-speech monitoring dysfunction occurs and how inner speech is controlled and monitored. This will help us better understand how AVHs occur.
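A minimal sketch of the within-subject comparison reported above, assuming a paired nonparametric test: orbicularis oris inferior EMG amplitude during covert AVHs versus rest across eleven patients. The abstract does not state which statistical test was used, and the per-patient values below are hypothetical.

```python
# Paired comparison of lip-muscle EMG amplitude: covert AVHs vs. rest (n = 11).
import numpy as np
from scipy.stats import wilcoxon

rng = np.random.default_rng(5)

# Hypothetical per-patient mean EMG amplitudes (arbitrary units).
rest = rng.normal(1.0, 0.2, size=11)
avh = rest + rng.normal(0.3, 0.2, size=11)  # assumed increase during AVHs

stat, p = wilcoxon(avh, rest)
print(f"Wilcoxon signed-rank: W = {stat:.1f}, p = {p:.3f}")
```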
