Development of accurate human head models for personalized electromagnetic dosimetry using deep learning

Published by Essam Rashed
Publication date: 2020
Paper language: English





The development of personalized human head models from medical images has become an important topic in the electromagnetic dosimetry field, including the optimization of electrostimulation, safety assessments, etc. Human head models are commonly generated via the segmentation of magnetic resonance images into different anatomical tissues. This process is time-consuming and requires specialized expertise to segment a relatively large number of tissues. Thus, it is challenging to accurately compute the electric field in different specific brain regions. Recently, deep learning has been applied to the segmentation of the human brain. However, most studies have focused on the segmentation of brain tissue only, and little attention has been paid to other tissues, which are considerably important for electromagnetic dosimetry. In this study, we propose a new convolutional neural network architecture, named ForkNet, to perform the segmentation of whole human head structures, which is essential for evaluating the electric field distribution in the brain. The proposed network can be used to generate personalized head models and applied to the evaluation of the electric field in the brain during transcranial magnetic stimulation. Our computational results indicate that the head models generated using the proposed network closely match those created via manual segmentation in an intra-scanner segmentation task.
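The abstract describes a network whose shared encoder forks into one decoder branch per tissue. The sketch below is a rough illustration of that idea in PyTorch, not the authors' published architecture: the layer sizes, depth, and tissue count are all assumptions.

```python
# Minimal sketch of a ForkNet-style network: a shared convolutional
# encoder followed by one decoder "fork" per tissue class, each emitting
# a single-channel segmentation map (logits). All sizes are illustrative.
import torch
import torch.nn as nn

def conv_block(in_ch, out_ch):
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, 3, padding=1), nn.BatchNorm2d(out_ch), nn.ReLU(),
        nn.Conv2d(out_ch, out_ch, 3, padding=1), nn.BatchNorm2d(out_ch), nn.ReLU(),
    )

class ForkNetSketch(nn.Module):
    def __init__(self, n_tissues=13):  # assumed tissue count (skin, skull, CSF, GM, WM, ...)
        super().__init__()
        self.enc1 = conv_block(1, 32)
        self.enc2 = conv_block(32, 64)
        self.pool = nn.MaxPool2d(2)
        # One decoder branch ("fork") per tissue.
        self.forks = nn.ModuleList([
            nn.Sequential(
                nn.ConvTranspose2d(64, 32, 2, stride=2),
                conv_block(32, 32),
                nn.Conv2d(32, 1, 1),
            )
            for _ in range(n_tissues)
        ])

    def forward(self, x):            # x: (B, 1, H, W) MR slice
        f = self.pool(self.enc1(x))  # shared encoder features
        f = self.enc2(f)
        return torch.cat([fork(f) for fork in self.forks], dim=1)

model = ForkNetSketch()
maps = model(torch.randn(1, 1, 256, 256))  # -> (1, 13, 256, 256)
```

One appeal of this layout is that each tissue gets its own branch, so adding or removing a tissue class only adds or removes a fork rather than changing a shared output head.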


Read also

In transmission X-ray microscopy (TXM) systems, the rotation of a scanned sample might be restricted to a limited angular range to avoid collision with other system parts or high attenuation at certain tilting angles. Image reconstruction from such limited-angle data suffers from artifacts due to missing data. In this work, deep learning is applied to limited-angle reconstruction in TXMs for the first time. Given the challenge of obtaining sufficient real data for training, training a deep neural network from synthetic data is investigated. In particular, the U-Net, the state-of-the-art neural network in biomedical imaging, is trained on synthetic ellipsoid data and multi-category data to reduce artifacts in filtered back-projection (FBP) reconstruction images. The proposed method is evaluated on synthetic data and real scanned chlorella data in $100^\circ$ limited-angle tomography. For synthetic test data, the U-Net significantly reduces the root-mean-square error (RMSE) from $2.55 \times 10^{-3}$ $\mu$m$^{-1}$ in the FBP reconstruction to $1.21 \times 10^{-3}$ $\mu$m$^{-1}$ in the U-Net reconstruction, and also improves the structural similarity (SSIM) index from 0.625 to 0.920. With penalized weighted least-squares denoising of the measured projections, the RMSE and SSIM are further improved to $1.16 \times 10^{-3}$ $\mu$m$^{-1}$ and 0.932, respectively. For real test data, the proposed method remarkably improves the 3-D visualization of the subcellular structures in the chlorella cell, which indicates its important value for nano-scale imaging in biology, nanoscience, and materials science.
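For reference, the RMSE and SSIM figures quoted above are standard image-quality metrics. The snippet below is a generic metric sketch with placeholder arrays, not the authors' evaluation code:

```python
# RMSE and SSIM between a reconstruction and its ground truth.
# The arrays here are random stand-ins for a phantom and an FBP result.
import numpy as np
from skimage.metrics import structural_similarity as ssim

def rmse(recon, truth):
    return np.sqrt(np.mean((recon - truth) ** 2))

truth = np.random.rand(256, 256).astype(np.float32)                  # placeholder phantom
fbp = truth + 0.05 * np.random.randn(256, 256).astype(np.float32)    # noisy FBP stand-in

print("RMSE:", rmse(fbp, truth))
print("SSIM:", ssim(truth, fbp, data_range=truth.max() - truth.min()))
```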
Recent work in machine learning shows that deep neural networks can be used to solve a wide variety of inverse problems arising in computational imaging. We explore the central prevailing themes of this emerging area and present a taxonomy that can be used to categorize different problems and reconstruction methods. Our taxonomy is organized along two central axes: (1) whether or not a forward model is known and to what extent it is used in training and testing, and (2) whether or not the learning is supervised or unsupervised, i.e., whether or not the training relies on access to matched ground truth image and measurement pairs. We also discuss the trade-offs associated with these different reconstruction approaches, caveats and common failure modes, plus open problems and avenues for future work.
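The two taxonomy axes can be made concrete with a small data structure. The category names below merely paraphrase the abstract and are purely illustrative, not an API from the survey:

```python
# A reconstruction method placed on the survey's two taxonomy axes:
# forward-model knowledge and supervision of the training data.
from dataclasses import dataclass
from enum import Enum

class ForwardModel(Enum):
    KNOWN_AND_USED = "forward model known and used in training/testing"
    UNKNOWN_OR_UNUSED = "forward model unknown or not used"

class Supervision(Enum):
    SUPERVISED = "matched ground-truth image/measurement pairs"
    UNSUPERVISED = "no matched ground-truth pairs"

@dataclass
class ReconstructionMethod:
    name: str
    forward_model: ForwardModel
    supervision: Supervision

# Hypothetical example of placing a method on the grid:
m = ReconstructionMethod("unrolled-gradient-net",
                         ForwardModel.KNOWN_AND_USED,
                         Supervision.SUPERVISED)
print(m)
```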
In this paper, the problem of head movement prediction for virtual reality videos is studied. In the considered model, a deep learning network is introduced that leverages position data as well as video frame content to predict future head movement. To optimize the data input to this neural network, the data sampling rate, data reduction, and long-period prediction length are also explored for this model. Simulation results show that the proposed approach yields a 16.1% improvement in prediction accuracy compared to a baseline approach that relies only on position data.
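A minimal sketch of the kind of fusion network described, combining a head-position history with frame-content features, is shown below. This is an assumed architecture in PyTorch, not the paper's exact model; all dimensions and layer choices are illustrative.

```python
# Fuse a recurrent encoding of past head orientations with CNN features
# of the current frame, then regress the next head position.
import torch
import torch.nn as nn

class HeadMovementPredictor(nn.Module):
    def __init__(self, pos_dim=3, hidden=64):
        super().__init__()
        # Encode the recent head-position trajectory (e.g. yaw/pitch/roll).
        self.pos_rnn = nn.LSTM(pos_dim, hidden, batch_first=True)
        # Encode the current video frame content.
        self.frame_cnn = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.head = nn.Linear(hidden + 32, pos_dim)  # predicted next position

    def forward(self, positions, frame):
        # positions: (B, T, 3) past orientations; frame: (B, 3, H, W)
        _, (h, _) = self.pos_rnn(positions)
        fused = torch.cat([h[-1], self.frame_cnn(frame)], dim=1)
        return self.head(fused)

model = HeadMovementPredictor()
pred = model(torch.randn(2, 10, 3), torch.randn(2, 3, 64, 64))  # -> (2, 3)
```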
As AI-based medical devices are becoming more common in imaging fields like radiology and histology, interpretability of the underlying predictive models is crucial to expand their use in clinical practice. Existing heatmap-based interpretability methods such as GradCAM only highlight the location of predictive features but do not explain how they contribute to the prediction. In this paper, we propose a new interpretability method that can be used to understand the predictions of any black-box model on images, by showing how the input image would be modified in order to produce different predictions. A StyleGAN is trained on medical images to provide a mapping between latent vectors and images. Our method identifies the optimal direction in the latent space to create a change in the model prediction. By shifting the latent representation of an input image along this direction, we can produce a series of new synthetic images with changed predictions. We validate our approach on histology and radiology images, and demonstrate its ability to provide meaningful explanations that are more informative than GradCAM heatmaps. Our method reveals the patterns learned by the model, which allows clinicians to build trust in the model's predictions, discover new biomarkers, and eventually reveal potential biases.
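The core mechanism, walking an image's latent code along a direction that changes the classifier output, can be sketched as follows. Here `generator` and `classifier` are placeholders for the trained StyleGAN decoder and the black-box model, and the single gradient step stands in for whatever direction search the authors actually use:

```python
# Counterfactual sketch: find a latent direction that increases the
# prediction, then decode latent codes shifted along that direction.
import torch

def counterfactual_series(generator, classifier, z, steps=5, alpha=0.5):
    # Direction in latent space that most increases the prediction
    # (a single gradient step; illustrative, not the paper's method).
    z = z.clone().requires_grad_(True)
    score = classifier(generator(z)).sum()
    direction, = torch.autograd.grad(score, z)
    direction = direction / direction.norm()
    # Decode a series of shifted codes; k = 0 is the original image.
    with torch.no_grad():
        return [generator(z + alpha * k * direction)
                for k in range(-steps, steps + 1)]
```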
Since the introduction of optical coherence tomography (OCT), it has been possible to study the complex 3D morphological changes of the optic nerve head (ONH) tissues that occur along with the progression of glaucoma. Although several deep learning (DL) techniques have been recently proposed for the automated extraction (segmentation) and quantification of these morphological changes, their device-specific nature and the difficulty in preparing manual segmentations (training data) limit their clinical adoption. With several new manufacturers and next-generation OCT devices entering the market, the complexity in deploying DL algorithms clinically is only increasing. To address this, we propose a DL-based 3D segmentation framework that is easily translatable across OCT devices in a label-free manner (i.e. without the need to manually re-segment data for each device). Specifically, we developed two sets of DL networks. The first (referred to as the enhancer) was able to enhance OCT image quality from three OCT devices and harmonized image characteristics across these devices. The second performed 3D segmentation of six important ONH tissue layers. We found that the use of the enhancer was critical for our segmentation network to achieve device independence. In other words, our 3D segmentation network trained on any of the three devices successfully segmented ONH tissue layers from the other two devices with high performance (Dice coefficients > 0.92). With such an approach, we could automatically segment images from new OCT devices without ever needing manual segmentation data from such devices.
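Since the abstract reports segmentation agreement as Dice coefficients above 0.92, a short sketch of that metric may help. This is a generic implementation for binary masks with illustrative inputs, not the authors' evaluation code:

```python
# Dice coefficient: 2|A ∩ B| / (|A| + |B|) for two binary masks.
import numpy as np

def dice(pred, truth, eps=1e-8):
    pred, truth = pred.astype(bool), truth.astype(bool)
    inter = np.logical_and(pred, truth).sum()
    return 2.0 * inter / (pred.sum() + truth.sum() + eps)

# Two overlapping square masks as a toy example.
a = np.zeros((64, 64), dtype=bool); a[16:48, 16:48] = True
b = np.zeros((64, 64), dtype=bool); b[20:52, 20:52] = True
print(f"Dice = {dice(a, b):.3f}")
```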
