
HemCNN: Deep Learning enables decoding of fNIRS cortical signals in hand grip motor tasks

Published by: Pablo Ortega
Publication date: 2021
Research field: Informatics Engineering
Paper language: English

We solve the fNIRS left/right hand force decoding problem with a data-driven approach, using a convolutional neural network architecture, the HemCNN. We test HemCNN's ability to decode, in a streaming fashion, which hand, left or right, is used from fNIRS data. HemCNN learned to detect which hand executed a grasp at a naturalistic hand action speed of $\sim 1\,$Hz, outperforming standard methods. Since HemCNN does not require baseline correction and the convolution operation is invariant to time translations, our method can help unlock fNIRS for a variety of real-time tasks. Mobile brain imaging and mobile brain-machine interfacing can benefit from this to develop real-world neuroscience and practical human neural interfacing based on BOLD-like signals for the evaluation, assistance and rehabilitation of force generation, for example through fusion of fNIRS with EEG signals.
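The abstract does not spell out HemCNN's layers, so the following is only a minimal PyTorch sketch of the stated idea: a 1-D temporal CNN over fNIRS channels whose convolutions are translation-invariant and need no baseline correction. The channel count, window length and layer sizes are illustrative assumptions, not the paper's values.

```python
# Minimal sketch (assumptions: 16 fNIRS channels, 20-sample windows;
# the actual HemCNN layout is not given in the abstract).
import torch
import torch.nn as nn

class HemCNNSketch(nn.Module):
    def __init__(self, n_channels: int = 16, n_classes: int = 2):
        super().__init__()
        # Temporal convolutions are invariant to time translations, so no
        # baseline correction of the haemodynamic signal is required.
        self.features = nn.Sequential(
            nn.Conv1d(n_channels, 32, kernel_size=5, padding=2),
            nn.ReLU(),
            nn.Conv1d(32, 64, kernel_size=5, padding=2),
            nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),  # pool over time -> streaming-friendly
        )
        self.classifier = nn.Linear(64, n_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, n_channels, n_samples)
        z = self.features(x).squeeze(-1)
        return self.classifier(z)  # logits for left vs. right hand

model = HemCNNSketch()
logits = model(torch.randn(8, 16, 20))  # 8 dummy fNIRS windows
print(logits.shape)  # torch.Size([8, 2])
```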


Read also

Non-invasive cortical neural interfaces have only achieved modest performance in cortical decoding of limb movements and their forces, compared to invasive brain-computer interfaces (BCIs). While non-invasive methodologies are safer, cheaper and vastly more accessible technologies, their signals suffer from poor resolution in either the spatial domain (EEG) or the temporal domain (BOLD signal of functional near-infrared spectroscopy, fNIRS). Non-invasive BCI decoding of bimanual force generation and of the continuous force signal has not been realised before, so we introduce an isometric grip force tracking task to evaluate the decoding. We find that combining EEG and fNIRS using deep neural networks works better than linear models to decode continuous grip force modulations produced by the left and the right hand. Our multi-modal deep learning decoder achieves 55.2% FVAF (fraction of variance accounted for) in force reconstruction and improves decoding performance by at least 15% over each individual modality. Our results show that continuous hand force decoding using cortical signals obtained with non-invasive mobile brain imaging is achievable, with immediate impact for rehabilitation, restoration and consumer applications.
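The abstract describes fusing EEG and fNIRS in a deep network to regress continuous bimanual grip force, scored by FVAF. Below is a hedged two-branch sketch of such a fusion regressor plus the FVAF metric (1 - SSE/SST); all branch widths, window lengths and channel counts are assumptions.

```python
# Two-branch fusion sketch: separate encoders for EEG and fNIRS windows,
# concatenated and regressed onto continuous left/right grip force.
# All dimensions are illustrative, not the paper's architecture.
import torch
import torch.nn as nn

class FusionForceDecoder(nn.Module):
    def __init__(self, eeg_ch=32, fnirs_ch=16, n_outputs=2):  # force per hand
        super().__init__()
        self.eeg = nn.Sequential(
            nn.Conv1d(eeg_ch, 32, kernel_size=7, padding=3), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1))
        self.fnirs = nn.Sequential(
            nn.Conv1d(fnirs_ch, 16, kernel_size=5, padding=2), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1))
        self.head = nn.Linear(32 + 16, n_outputs)

    def forward(self, eeg, fnirs):
        z = torch.cat([self.eeg(eeg).squeeze(-1),
                       self.fnirs(fnirs).squeeze(-1)], dim=1)
        return self.head(z)  # predicted grip force per hand

def fvaf(pred: torch.Tensor, target: torch.Tensor) -> torch.Tensor:
    """Fraction of variance accounted for: 1 - SSE/SST."""
    sse = ((pred - target) ** 2).sum()
    sst = ((target - target.mean()) ** 2).sum()
    return 1.0 - sse / sst

dec = FusionForceDecoder()
force = dec(torch.randn(4, 32, 250), torch.randn(4, 16, 50))
print(force.shape)  # torch.Size([4, 2])
```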
Convolutional neural networks (CNNs) have become a powerful technique to decode EEG and are now the benchmark for motor imagery EEG brain-computer interface (BCI) decoding. However, it is still challenging to train CNNs on multiple subjects' EEG without decreasing individual performance. This is known as the negative transfer problem: learning from dissimilar distributions causes CNNs to misrepresent each of them instead of learning a richer representation. As a result, CNNs cannot directly use multiple subjects' EEG to enhance model performance. To address this problem, we extend deep transfer learning techniques to the EEG multi-subject training case. We propose a multi-branch deep transfer network, the Separate-Common-Separate Network (SCSN), based on splitting the network's feature extractors for individual subjects. We also explore the possibility of applying maximum mean discrepancy (MMD) to the SCSN (SCSN-MMD) to better align distributions of features from individual feature extractors. The proposed network is evaluated on the BCI Competition IV 2a dataset (BCICIV2a) and our online recorded dataset. Results show that the proposed SCSN (81.8%, 53.2%) and SCSN-MMD (81.8%, 54.8%) outperformed the benchmark CNN (73.4%, 48.8%) on both datasets when using multiple subjects. Our proposed networks show the potential to utilise larger multi-subject datasets to train an EEG decoder without being influenced by negative transfer.
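SCSN-MMD uses maximum mean discrepancy to align per-subject feature distributions. MMD itself has a standard closed form; here is a sketch of a Gaussian-kernel MMD² penalty between feature batches from two subjects' branches (the kernel bandwidth is an assumption, and the paper may use a different estimator).

```python
# Gaussian-kernel (biased) estimate of squared MMD between feature batches
# from two subjects' feature extractors, usable as an alignment penalty.
import torch

def mmd2(x: torch.Tensor, y: torch.Tensor, sigma: float = 1.0) -> torch.Tensor:
    """x: (n, d) features from subject A; y: (m, d) features from subject B."""
    def rbf(a, b):
        d2 = torch.cdist(a, b).pow(2)          # pairwise squared distances
        return torch.exp(-d2 / (2 * sigma ** 2))

    return rbf(x, x).mean() + rbf(y, y).mean() - 2 * rbf(x, y).mean()

feats_a = torch.randn(64, 128)
feats_b = torch.randn(64, 128) + 0.5  # shifted distribution
print(float(mmd2(feats_a, feats_b)))  # > 0 when the distributions differ
```

Added to the classification loss with a small weight, this term pulls the two subjects' feature distributions together without forcing them through a single shared extractor.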
Brain-computer interfaces (BCIs) have shown promising results in restoring motor function to individuals with spinal cord injury. These systems have traditionally focused on the restoration of upper extremity function; however, the lower extremities have received relatively little attention. Early feasibility studies used noninvasive electroencephalogram (EEG)-based BCIs to restore walking function to people with paraplegia. However, the limited spatiotemporal resolution of EEG signals restricted the application of these BCIs to elementary gait tasks, such as the initiation and termination of walking. To restore more complex gait functions, BCIs must accurately decode additional degrees of freedom from brain signals. In this study, we used subdurally recorded electrocorticogram (ECoG) signals from able-bodied subjects to design a decoder capable of predicting the walking state and step rate information. We recorded ECoG signals from the motor cortices of two individuals as they walked on a treadmill at different speeds. Our offline analysis demonstrated that the state information could be decoded from >16 minutes of ECoG data with an unprecedented accuracy of 99.8%. Additionally, using a Bayesian filter approach, we achieved an average correlation coefficient between the decoded and true step rates of 0.934. When combined, these decoders may yield decoding accuracies sufficient to safely operate present-day walking prostheses.
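The abstract names a Bayesian filter for step-rate tracking without specifying it; one standard instance is a scalar Kalman filter smoothing noisy per-window step-rate estimates under a random-walk model. The noise parameters below are illustrative assumptions, not the study's values.

```python
# Scalar Kalman filter sketch: smooth noisy per-window step-rate estimates.
# q (process noise) and r (observation noise) are illustrative assumptions.
import numpy as np

def kalman_smooth_step_rate(z: np.ndarray, q: float = 0.01, r: float = 0.25):
    """z: noisy step-rate observations (steps/s); returns filtered estimates."""
    x, p = z[0], 1.0           # state estimate and its variance
    out = np.empty_like(z)
    for t, zt in enumerate(z):
        p += q                 # predict: random-walk step-rate model
        k = p / (p + r)        # Kalman gain
        x += k * (zt - x)      # update with the new observation
        p *= (1 - k)
        out[t] = x
    return out

rng = np.random.default_rng(0)
true_rate = 1.5 + 0.3 * np.sin(np.linspace(0, 4 * np.pi, 200))
noisy = true_rate + rng.normal(0, 0.5, size=200)
est = kalman_smooth_step_rate(noisy)
print(np.corrcoef(est, true_rate)[0, 1])  # filtered estimate tracks the truth
```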
Functional magnetic resonance imaging produces high-dimensional data, with a less than ideal number of labelled samples for brain decoding tasks (predicting brain states). In this study, we propose a new deep temporal convolutional neural network architecture with spatial pooling for brain decoding, which aims to reduce the dimensionality of the feature space along with improved classification performance. Temporal representations (filters) for each layer of the convolutional model are learned by leveraging unlabelled fMRI data in an unsupervised fashion with regularized autoencoders. Learned temporal representations at multiple levels capture the regularities in the temporal domain and are observed to be a rich bank of activation patterns which also exhibit similarities to the actual hemodynamic responses. Further, spatial pooling layers in the convolutional architecture reduce the dimensionality without losing excessive information. By employing the proposed temporal convolutional architecture with spatial pooling, raw input fMRI data is mapped to a non-linear, highly expressive and low-dimensional feature space where the final classification is conducted. In addition, we propose a simple heuristic approach for hyper-parameter tuning when no validation data is available. The proposed method is tested on a ten-class recognition memory experiment with nine subjects. The results support the efficiency and potential of the proposed model, compared to baseline multi-voxel pattern analysis techniques.
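The key unsupervised step here is learning temporal filters from unlabelled fMRI with regularized autoencoders. A minimal sketch of that idea follows: an L1-sparsity-regularized autoencoder on short time windows whose first-layer weights serve as learned temporal filters. Window length, hidden size and the sparsity weight are assumptions, and the paper's exact regularizer may differ.

```python
# Sketch: unsupervised temporal-filter learning with an L1-regularized
# autoencoder on unlabelled fMRI time windows (all sizes are assumptions).
import torch
import torch.nn as nn

window = 8                      # time points per training window
ae = nn.Sequential(nn.Linear(window, 16), nn.ReLU(), nn.Linear(16, window))

opt = torch.optim.Adam(ae.parameters(), lr=1e-3)
x = torch.randn(1024, window)   # stand-in for unlabelled fMRI windows

for _ in range(100):
    opt.zero_grad()
    h = ae[1](ae[0](x))         # hidden activations
    recon = ae[2](h)
    loss = ((recon - x) ** 2).mean() + 1e-3 * h.abs().mean()  # MSE + sparsity
    loss.backward()
    opt.step()

# Rows of the first layer's weight matrix act as learned temporal filters
# that can initialize the convolutional layers of the decoding network.
filters = ae[0].weight.detach()
print(filters.shape)  # torch.Size([16, 8])
```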
Convolutional neural networks (CNNs) outperform traditional classification methods in many domains. Recently these methods have gained attention in neuroscience, and particularly in the brain-computer interface (BCI) community. Here, we introduce a CNN optimized for classification of brain states from magnetoencephalographic (MEG) measurements. Our CNN design is based on a generative model of the electromagnetic (EEG and MEG) brain signals and is readily interpretable in neurophysiological terms. We show that the proposed network is able to decode event-related responses as well as modulations of oscillatory brain activity, and that it outperforms more complex neural networks and traditional classifiers used in the field. Importantly, the model is robust to inter-individual differences and can successfully generalize to new subjects in offline and online classification.
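The abstract's "generative model" framing suggests sensors observe a linear mixing of latent source time courses; a decoder can then invert this with a linear spatial demixing layer followed by temporal filtering of the recovered sources. The sketch below illustrates only that generic idea; the layer sizes and exact layout are assumptions, not the paper's architecture.

```python
# Hedged sketch of the stated design idea: linear spatial "demixing" of MEG
# sensors into latent sources, then per-source temporal convolution.
# Sizes are illustrative assumptions.
import torch
import torch.nn as nn

class GenerativeMEGDecoder(nn.Module):
    def __init__(self, n_sensors=204, n_sources=32, n_classes=5, k=7):
        super().__init__()
        self.demix = nn.Linear(n_sensors, n_sources, bias=False)  # spatial filters
        self.temporal = nn.Conv1d(n_sources, n_sources, kernel_size=k,
                                  padding=k // 2, groups=n_sources)
        self.head = nn.Linear(n_sources, n_classes)

    def forward(self, x):                                   # x: (batch, sensors, time)
        s = self.demix(x.transpose(1, 2)).transpose(1, 2)   # latent source courses
        s = torch.relu(self.temporal(s)).mean(dim=-1)       # pooled per-source power
        return self.head(s)

print(GenerativeMEGDecoder()(torch.randn(2, 204, 100)).shape)  # (2, 5)
```

Both the spatial and temporal weights of such a model can be inspected directly, which is what makes this class of design interpretable in neurophysiological terms.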
