
Single Shot Reversible GAN for BCG artifact removal in simultaneous EEG-fMRI

Published by: Guang Lin
Publication date: 2020
Research field: Informatics Engineering
Paper language: English





Simultaneous EEG-fMRI acquisition and analysis technology has been widely used in various fields of brain science. However, removing ballistocardiogram (BCG) artifacts in this setting remains a major challenge. Because clean and BCG-contaminated EEG signals cannot be obtained at the same time, BCG artifact removal is a typical unpaired signal-to-signal problem. To solve this problem, this paper proposes a new GAN training model, the Single Shot Reversible GAN (SSRGAN). The model accepts bidirectional input to better combine the characteristics of the two types of signals, instead of using two independent models for bidirectional conversion as in previous work. Furthermore, the model is decomposed into multiple independent convolutional blocks with specific functions. Additional training of these blocks improves the model's local representation ability and thereby the overall performance. Experimental results show that, compared with existing methods, the proposed method removes BCG artifacts more effectively while retaining useful EEG information.
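The abstract does not give implementation details, but the core idea of a single generator shared by both conversion directions can be illustrated with a short PyTorch sketch. Everything below (channel counts, the direction flag, the residual convolutional blocks, and the cycle-style loss) is a hypothetical reconstruction for illustration, not the authors' SSRGAN code.

```python
# Illustrative sketch of the bidirectional-generator idea described in the
# abstract. Block sizes, the direction flag, and the loss below are assumptions,
# not the authors' implementation.
import torch
import torch.nn as nn

class ConvBlock(nn.Module):
    """One independent 1-D convolutional block; blocks can be trained separately."""
    def __init__(self, channels):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(channels, channels, kernel_size=5, padding=2),
            nn.BatchNorm1d(channels),
            nn.ReLU(),
        )

    def forward(self, x):
        return x + self.net(x)  # residual connection

class BidirectionalGenerator(nn.Module):
    """A single generator used for both directions:
    contaminated EEG -> clean EEG and clean EEG -> contaminated EEG.
    The conversion direction is signalled by an extra input channel."""
    def __init__(self, eeg_channels=32, hidden=64, n_blocks=4):
        super().__init__()
        self.inp = nn.Conv1d(eeg_channels + 1, hidden, kernel_size=5, padding=2)
        self.blocks = nn.ModuleList([ConvBlock(hidden) for _ in range(n_blocks)])
        self.out = nn.Conv1d(hidden, eeg_channels, kernel_size=5, padding=2)

    def forward(self, x, direction):
        # direction: +1 (remove BCG) or -1 (add BCG), broadcast as a channel
        flag = torch.full_like(x[:, :1, :], float(direction))
        h = self.inp(torch.cat([x, flag], dim=1))
        for blk in self.blocks:
            h = blk(h)
        return self.out(h)

# Usage: a cycle-consistency style update with one shared generator.
gen = BidirectionalGenerator()
contaminated = torch.randn(8, 32, 1000)        # batch of BCG-contaminated EEG
clean_est = gen(contaminated, direction=+1)    # remove artifacts
reconstructed = gen(clean_est, direction=-1)   # re-add artifacts
cycle_loss = nn.functional.l1_loss(reconstructed, contaminated)
cycle_loss.backward()
```

The point of the single shared generator is that both conversion directions use the same weights, in contrast to CycleGAN-style pairs of independent generators.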




Read also

71 - Xueqing Liu, Linbi Hong, 2020
Simultaneous EEG-fMRI is a multi-modal neuroimaging technique that provides complementary spatial and temporal resolution for inferring a latent source space of neural activity. In this paper we address this inference problem within the framework of transcoding: mapping from a specific encoding (modality) to a decoding (the latent source space) and then encoding the latent source space to the other modality. Specifically, we develop a symmetric method consisting of a cyclic convolutional transcoder that transcodes EEG to fMRI and vice versa. Without any prior knowledge of either the hemodynamic response function or lead field matrix, the method exploits the temporal and spatial relationships between the modalities and latent source spaces to learn these mappings. We show, for real EEG-fMRI data, how well the modalities can be transcoded from one to another as well as the source spaces that are recovered, all on unseen data. In addition to enabling a new way to symmetrically infer a latent source space, the method can also be seen as a form of low-cost computational neuroimaging, i.e. generating an expensive fMRI BOLD image from low-cost EEG data.
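As a rough illustration of the cyclic transcoding idea, the sketch below pairs two 1-D convolutional networks with cycle-consistency losses. The channel and voxel counts, the shared time axis, and the omission of an explicit latent source space are simplifying assumptions, not the paper's architecture.

```python
# Minimal sketch of cyclic transcoding: one network maps EEG to fMRI, a second
# maps fMRI back to EEG, and round-trip losses tie them together. Dimensions
# and layer sizes are illustrative assumptions.
import torch
import torch.nn as nn

def conv_transcoder(in_ch, out_ch, hidden=64):
    return nn.Sequential(
        nn.Conv1d(in_ch, hidden, kernel_size=9, padding=4),
        nn.ReLU(),
        nn.Conv1d(hidden, hidden, kernel_size=9, padding=4),
        nn.ReLU(),
        nn.Conv1d(hidden, out_ch, kernel_size=9, padding=4),
    )

eeg_ch, fmri_vox = 64, 256                # hypothetical channel / voxel counts
eeg2fmri = conv_transcoder(eeg_ch, fmri_vox)
fmri2eeg = conv_transcoder(fmri_vox, eeg_ch)

eeg = torch.randn(4, eeg_ch, 500)         # (batch, channels, time)
fmri = torch.randn(4, fmri_vox, 500)      # fMRI assumed resampled to the same time axis

# Cycle-consistency: each modality should survive a round trip.
loss = (nn.functional.mse_loss(fmri2eeg(eeg2fmri(eeg)), eeg)
        + nn.functional.mse_loss(eeg2fmri(fmri2eeg(fmri)), fmri))
loss.backward()
```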
Functional Magnetic Resonance Imaging (fMRI) is a non-invasive technique for studying brain activity. During an fMRI session, the subject executes a set of tasks (task-related fMRI study) or no tasks (resting-state fMRI), and a sequence of 3-D brain images is obtained for further analysis. In the course of fMRI, some sources of activation are caused by noise and artifacts. The removal of these sources is essential before the analysis of the brain activations. Deep Neural Network (DNN) architectures can be used for denoising and artifact removal. The main advantage of DNN models is the automatic learning of abstract and meaningful features, given the raw data. This work presents advanced DNN architectures for noise and artifact classification, using both spatial and temporal information in resting-state fMRI sessions. The highest performance is achieved by a voting schema using information from all the domains, with an average accuracy of over 98% and a very good balance between the metrics of sensitivity and specificity (98.5% and 97.5% respectively).
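The voting step can be illustrated independently of the specific classifiers. In the sketch below, three stand-in classifiers (spatial, temporal, spectral) are combined by soft voting; the actual architectures, domains, and input shapes in the paper differ.

```python
# Illustrative soft-voting schema: each per-domain classifier outputs
# signal-vs-noise probabilities, and the averaged probabilities decide the
# final label. The three toy classifiers are stand-ins, not the paper's DNNs.
import torch
import torch.nn as nn

spatial_clf = nn.Sequential(nn.Flatten(), nn.Linear(32 * 32, 2))  # toy spatial-map classifier
temporal_clf = nn.Sequential(nn.Linear(200, 2))                    # toy time-course classifier
spectral_clf = nn.Sequential(nn.Linear(100, 2))                    # toy spectrum classifier

def vote(spatial_map, time_course, spectrum):
    """Soft voting: average the class probabilities of all domain classifiers."""
    probs = torch.stack([
        spatial_clf(spatial_map).softmax(dim=-1),
        temporal_clf(time_course).softmax(dim=-1),
        spectral_clf(spectrum).softmax(dim=-1),
    ]).mean(dim=0)
    return probs.argmax(dim=-1)            # 0 = signal, 1 = noise/artifact

label = vote(torch.randn(1, 1, 32, 32), torch.randn(1, 200), torch.randn(1, 100))
```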
Simultaneously recorded electroencephalography (EEG) and functional magnetic resonance imaging (fMRI) can be used to non-invasively measure the spatiotemporal dynamics of the human brain. One challenge is dealing with the artifacts that each modality introduces into the other when the two are recorded concurrently, for example the ballistocardiogram (BCG). We conducted a preliminary comparison of three different MR-compatible EEG recording systems and assessed their performance in terms of single-trial classification of the EEG collected simultaneously with fMRI. We found tradeoffs across all three systems, for example varied ease of setup, and improved classification accuracy with reference electrodes (REF) but not with pulse artifact subtraction (PAS) or reference layer adaptive filtering (RLAF).
171 - Anique Akhtar, Wen Gao, Li Li, 2021
Photo-realistic point cloud capture and transmission are the fundamental enablers for immersive visual communication. The coding process of dynamic point clouds, especially video-based point cloud compression (V-PCC) developed by the MPEG standardization group, is now delivering state-of-the-art performance in compression efficiency. V-PCC is based on projecting point cloud patches onto 2D planes and encoding the sequence as 2D texture and geometry patch sequences. However, the resulting quantization errors from coding can introduce compression artifacts, which can noticeably degrade the quality of experience (QoE). In this work, we developed a novel out-of-the-loop point cloud geometry artifact removal solution that can significantly improve reconstruction quality without additional bandwidth cost. Our novel framework consists of a point cloud sampling scheme, an artifact removal network, and an aggregation scheme. The point cloud sampling scheme employs a cube-based neighborhood patch extraction to divide the point cloud into patches. The geometry artifact removal network then processes these patches to obtain artifact-removed patches. The artifact-removed patches are then merged together using an aggregation scheme to obtain the final artifact-removed point cloud. We employ 3D deep convolutional feature learning for geometry artifact removal that jointly recovers both the quantization direction and the quantization noise level by exploiting projection and quantization priors. The simulation results demonstrate that the proposed method is highly effective and can considerably improve the quality of the reconstructed point cloud.
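A minimal sketch of the patch-based pipeline (cube extraction, per-patch processing, aggregation) is shown below. The 3D convolutional artifact-removal network itself is replaced by a placeholder, and the cube size is an assumed parameter.

```python
# Sketch of cube-based patch extraction and aggregation. The artifact-removal
# step is a placeholder (identity), not the paper's 3-D convolutional network.
import numpy as np

def split_into_cubes(points, cube_size=16.0):
    """Group points (N, 3) into cube-shaped neighbourhood patches."""
    patches = {}
    keys = np.floor(points / cube_size).astype(int)
    for key, pt in zip(map(tuple, keys), points):
        patches.setdefault(key, []).append(pt)
    return {k: np.array(v) for k, v in patches.items()}

def remove_artifacts(patch):
    # Placeholder for the geometry artifact-removal network.
    return patch

def process_point_cloud(points, cube_size=16.0):
    patches = split_into_cubes(points, cube_size)
    cleaned = [remove_artifacts(p) for p in patches.values()]
    return np.concatenate(cleaned, axis=0)      # aggregation: merge patches back

decoded = np.random.rand(10000, 3) * 128        # toy V-PCC-decoded geometry
restored = process_point_cloud(decoded)
```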
We present a general technique that performs both artifact removal and image compression. For artifact removal, we input a JPEG image and try to remove its compression artifacts. For compression, we input an image and process its 8 by 8 blocks in a sequence. For each block, we first try to predict its intensities based on previous blocks; then, we store a residual with respect to the input image. Our technique reuses JPEG's legacy compression and decompression routines. Both our artifact removal and our image compression techniques use the same deep network, but with different training weights. Our technique is simple and fast, and it significantly improves the performance of artifact removal and image compression.
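To make the block-prediction-plus-residual idea concrete, the sketch below walks over 8x8 blocks, predicts each one from previously decoded pixels, and keeps only the residual. The simple mean-of-border predictor stands in for the paper's deep network, and the JPEG entropy-coding of the residual is omitted.

```python
# Minimal sketch of block prediction with residual storage. The DC-style
# predictor below is a stand-in for the paper's deep network.
import numpy as np

def encode_blocks(image, block=8):
    h, w = image.shape
    decoded = np.zeros_like(image, dtype=np.float32)
    residuals = []
    for y in range(0, h, block):
        for x in range(0, w, block):
            # Predict the block from already-decoded border pixels above/left.
            context = []
            if y > 0:
                context.append(decoded[y - 1, x:x + block])
            if x > 0:
                context.append(decoded[y:y + block, x - 1])
            pred = np.mean(np.concatenate(context)) if context else 128.0
            res = image[y:y + block, x:x + block].astype(np.float32) - pred
            residuals.append(res)                       # the residual is what gets coded
            decoded[y:y + block, x:x + block] = pred + res
    return residuals

residuals = encode_blocks(np.random.randint(0, 256, (64, 64)).astype(np.float32))
```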
