The choroid is the vascular layer of the eye and is directly related to the incidence and severity of many ocular diseases. Optical Coherence Tomography (OCT) can image cross-sectional views of both the retina and the choroid, but segmentation of the choroid region is challenging because of the fuzzy choroid-sclera interface (CSI). In this paper, we propose a biomarker-infused global-to-local network (BioNet) for choroid segmentation, which segments the choroid with higher credibility and robustness. First, our method trains a biomarker prediction network to learn the features of the biomarker. Then a global multi-layer segmentation module is applied to segment the OCT image into 12 layers. Finally, the global multi-layer result and the original OCT image are fed into a local choroid segmentation module that segments the choroid region with the biomarker infused as a regularizer. We conducted comparison experiments against state-of-the-art methods on a dataset named AROD. The experimental results demonstrate the superiority of our method, which achieves a Dice index of 90.77% and an average unsigned surface detection error of 6.23 pixels, among other metrics.
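Since the abstract only outlines the architecture, the following is a minimal sketch, assuming placeholder backbones and a thickness-style biomarker regularizer; the module roles follow the abstract, but the backbone definitions and the exact form of the regularization term are illustrative assumptions, not the authors' implementation.

# Minimal sketch of the global-to-local pipeline with a hypothetical regularizer.
import torch
import torch.nn as nn
import torch.nn.functional as F

def tiny_seg_net(in_ch, out_ch):
    # Stand-in for any encoder-decoder segmentation backbone.
    return nn.Sequential(
        nn.Conv2d(in_ch, 32, 3, padding=1), nn.ReLU(inplace=True),
        nn.Conv2d(32, out_ch, 1),
    )

global_net = tiny_seg_net(1, 12)       # OCT B-scan -> 12 retinal layer maps
local_net = tiny_seg_net(1 + 12, 2)    # image + layer maps -> background / choroid
biomarker_net = tiny_seg_net(1, 1)     # pretrained (frozen) biomarker prediction network

def training_step(oct_image, choroid_gt, lam=0.1):
    layers = global_net(oct_image)                              # global multi-layer result
    logits = local_net(torch.cat([oct_image, layers], dim=1))   # local choroid segmentation
    seg_loss = F.cross_entropy(logits, choroid_gt)
    # Hypothetical regularizer: the per-column choroid thickness implied by the
    # predicted mask should agree with the frozen biomarker network's prediction.
    thickness_pred = logits.softmax(1)[:, 1:2].sum(dim=2)
    with torch.no_grad():
        thickness_bio = biomarker_net(oct_image).sum(dim=2)
    return seg_loss + lam * F.l1_loss(thickness_pred, thickness_bio)

loss = training_step(torch.randn(2, 1, 256, 256), torch.randint(0, 2, (2, 256, 256)))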
Automated vascular segmentation on optical coherence tomography angiography (OCTA) is important for quantitative analyses of the retinal microvasculature in neuroretinal and systemic diseases. Despite recent improvements, artifacts continue to pose challenges for segmentation. Our study focused on removing the speckle noise artifact from OCTA images when performing segmentation. Speckle noise is common in OCTA, is particularly prominent over large non-perfusion areas, and may interfere with proper assessment of the retinal vasculature. In this study, we proposed a novel Supervision Vessel Segmentation network (SVS-net) to detect vessels of different sizes. The SVS-net includes a new attention-based module that describes vessel positions and facilitates understanding of the network's learning process. The model is efficient and explainable and could be used to reduce the need for manual labeling. Our SVS-net achieved better accuracy, recall, F1 score, and Kappa score than other well-recognized models.
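The abstract does not specify the internal design of the attention-based module, so the following is a minimal sketch of one common way such a module can highlight vessel positions: an additive spatial attention gate in the spirit of Attention U-Net. The class name and channel sizes are assumptions for illustration, not SVS-net's actual module.

# Sketch of a spatial attention gate that weights skip features by vessel position.
import torch
import torch.nn as nn

class SpatialAttentionGate(nn.Module):
    def __init__(self, feat_ch, gate_ch, inter_ch):
        super().__init__()
        self.theta = nn.Conv2d(feat_ch, inter_ch, 1)
        self.phi = nn.Conv2d(gate_ch, inter_ch, 1)
        self.psi = nn.Conv2d(inter_ch, 1, 1)

    def forward(self, x, g):
        # x: skip-connection features, g: gating features (same spatial size here).
        a = torch.sigmoid(self.psi(torch.relu(self.theta(x) + self.phi(g))))
        return x * a, a   # the attention map `a` can be visualized for explainability

x = torch.randn(1, 64, 96, 96)
g = torch.randn(1, 128, 96, 96)
gated, attn = SpatialAttentionGate(64, 128, 32)(x, g)

Because the gate returns its attention map alongside the gated features, the map can be inspected to see which positions the network emphasizes, which is one way an attention module can support the explainability claim made above.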
Fetal cortical plate segmentation is essential in quantitative analysis of fetal brain maturation and cortical folding. Manual segmentation of the cortical plate, or manual refinement of automatic segmentations is tedious and time-consuming. Automatic segmentation of the cortical plate, on the other hand, is challenged by the relatively low resolution of the reconstructed fetal brain MRI scans compared to the thin structure of the cortical plate, partial voluming, and the wide range of variations in the morphology of the cortical plate as the brain matures during gestation. To reduce the burden of manual refinement of segmentations, we have developed a new and powerful deep learning segmentation method. Our method exploits new deep attentive modules with mixed kernel convolutions within a fully convolutional neural network architecture that utilizes deep supervision and residual connections. We evaluated our method quantitatively based on several performance measures and expert evaluations. Results show that our method outperforms several state-of-the-art deep models for segmentation, as well as a state-of-the-art multi-atlas segmentation technique. We achieved average Dice similarity coefficient of 0.87, average Hausdorff distance of 0.96 mm, and average symmetric surface difference of 0.28 mm on reconstructed fetal brain MRI scans of fetuses scanned in the gestational age range of 16 to 39 weeks. With a computation time of less than 1 minute per fetal brain, our method can facilitate and accelerate large-scale studies on normal and altered fetal brain cortical maturation and folding.
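As a rough illustration of the "mixed kernel convolutions" and residual connections mentioned above, here is a minimal sketch of a 3D residual block with two parallel kernel sizes. The authors' actual attentive module (channel splits, dilation rates, attention weighting, deep supervision heads) is not specified in the abstract, so the block below is an assumption, not their design.

# Sketch of a residual block mixing two kernel sizes on 3D MRI patches.
import torch
import torch.nn as nn

class MixedKernelResBlock(nn.Module):
    def __init__(self, channels):
        super().__init__()
        assert channels % 2 == 0
        half = channels // 2
        self.branch3 = nn.Conv3d(channels, half, kernel_size=3, padding=1)
        self.branch5 = nn.Conv3d(channels, half, kernel_size=5, padding=2)
        self.norm = nn.InstanceNorm3d(channels)
        self.act = nn.ReLU(inplace=True)

    def forward(self, x):
        # Two parallel receptive fields, concatenated and fused with a residual path.
        y = torch.cat([self.branch3(x), self.branch5(x)], dim=1)
        return self.act(self.norm(y) + x)

block = MixedKernelResBlock(32)
out = block(torch.randn(1, 32, 24, 24, 24))   # a 3D patch from a reconstructed fetal brain MRI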
Semantic segmentation in very high resolution (VHR) aerial images is one of the most challenging tasks in remote sensing image understanding. Most current approaches are based on deep convolutional neural networks (DCNNs). However, standard convolution with local receptive fields fails to model global dependencies. Prior research has indicated that attention-based methods can capture long-range dependencies and further reconstruct the feature maps for better representation. Nevertheless, limited by the narrow perspective of spatial and channel attention and the high computational complexity of the self-attention mechanism, such methods struggle to model effective semantic interdependencies between every pixel pair in remote sensing data with complex spectra. In this work, we propose a novel attention-based framework named Hybrid Multiple Attention Network (HMANet) to adaptively capture global correlations from the perspectives of space, channel, and category in a more effective and efficient manner. Concretely, a class augmented attention (CAA) module embedded with a class channel attention (CCA) module is used to compute category-based correlations and recalibrate class-level information. Additionally, we introduce a simple yet effective region shuffle attention (RSA) module that reduces feature redundancy and improves the efficiency of the self-attention mechanism via region-wise representations. Extensive experimental results on the ISPRS Vaihingen and Potsdam benchmarks demonstrate the effectiveness and efficiency of our HMANet over other state-of-the-art methods.
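The region shuffle attention (RSA) module is described only at a high level; the sketch below shows the general idea of running self-attention over region-wise representations instead of all pixel pairs, which removes the quadratic cost in the number of pixels. The pooling-based region construction and the broadcast step are assumptions for illustration, not HMANet's exact RSA.

# Sketch of self-attention computed over pooled region descriptors.
import torch
import torch.nn as nn
import torch.nn.functional as F

class RegionSelfAttention(nn.Module):
    def __init__(self, channels, regions=16):
        super().__init__()
        self.regions = regions
        self.q = nn.Conv2d(channels, channels, 1)
        self.k = nn.Conv2d(channels, channels, 1)
        self.v = nn.Conv2d(channels, channels, 1)

    def forward(self, x):
        n, c, h, w = x.shape
        r = self.regions
        # Pool the feature map into an r x r grid of region descriptors.
        regions = F.adaptive_avg_pool2d(x, r)                      # (n, c, r, r)
        q = self.q(regions).flatten(2).transpose(1, 2)             # (n, r*r, c)
        k = self.k(regions).flatten(2)                             # (n, c, r*r)
        v = self.v(regions).flatten(2).transpose(1, 2)             # (n, r*r, c)
        attn = torch.softmax(q @ k / c ** 0.5, dim=-1)             # (n, r*r, r*r)
        ctx = (attn @ v).transpose(1, 2).reshape(n, c, r, r)
        # Broadcast the region-level context back to full resolution.
        return x + F.interpolate(ctx, size=(h, w), mode='bilinear', align_corners=False)

out = RegionSelfAttention(64)(torch.randn(1, 64, 128, 128))

Attending over r*r region descriptors instead of h*w pixels shrinks the attention matrix from (h*w)^2 to (r*r)^2 entries, which is the kind of efficiency gain the RSA module is aiming for.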
Deep learning models have had great success in disease classification using large data pools of skin cancer images or lung X-rays. However, data scarcity has been a roadblock to applying deep learning models directly to prostate multiparametric MRI (mpMRI). Although model interpretation has been studied heavily for natural images over the past few years, there has been a lack of interpretation of deep learning models trained on medical images. This work designs a customized workflow for the small and imbalanced prostate mpMRI data set, in which features are extracted from a deep learning model and then analyzed by a traditional machine learning classifier. In addition, this work contributes to revealing how deep learning models interpret mpMRI for prostate cancer patient stratification.
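A minimal sketch of this hybrid workflow, assuming an ImageNet-pretrained ResNet-18 as the deep feature extractor and a class-weighted SVM as the traditional classifier; the backbone, classifier settings, and synthetic data below are illustrative choices, not the exact setup of this work.

# Deep features from a pretrained CNN, classified by a traditional ML model.
import numpy as np
import torch
import torchvision.models as models
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
backbone.fc = torch.nn.Identity()          # expose the 512-d penultimate features
backbone.eval()

def extract_features(batch):
    # batch: (N, 3, 224, 224) tensor, e.g. mpMRI slices replicated to 3 channels.
    with torch.no_grad():
        return backbone(batch).numpy()

# Hypothetical data: 60 patients, binary stratification label, imbalanced classes.
X = extract_features(torch.randn(60, 3, 224, 224))
y = np.array([0] * 45 + [1] * 15)

clf = SVC(kernel='rbf', class_weight='balanced')   # class weighting for the imbalance
print(cross_val_score(clf, X, y, cv=5).mean())

Keeping the classifier separate from the frozen feature extractor is what makes this practical on a small data set: only the low-capacity SVM is fitted to the scarce labels, while the representation comes from a model trained elsewhere.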
The speckle statistics of optical coherence tomography images of biological tissue have been studied using several historical probability density functions. A recent hypothesis implies that underlying power-law distributions in the medium structure, such as the fractal branching vasculature, will contribute to power-law probability distributions of speckle statistics. Specifically, these are the Burr type XII distribution for speckle amplitude, the Lomax distribution for intensity, and the generalized logistic distribution for log amplitude. In this study, these three distributions are fitted to histogram data from nine optical coherence tomography scans of various biological tissues and samples. The distributions are also compared with conventional distributions such as the Rayleigh, K, and gamma distributions. The results indicate that these newer distributions based on power laws are, in general, more appropriate models and support the plausibility of their use for characterizing biological tissue. Potentially, the governing power-law parameter of these distributions could be used as a biomarker for tissue disease or pathology.
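A minimal sketch of this fitting-and-comparison workflow using SciPy's maximum-likelihood fits; the synthetic amplitude sample stands in for real OCT data, and the K distribution is omitted because SciPy has no built-in implementation, so only the procedure (not the numbers) mirrors the study.

# Fit candidate speckle distributions and compare them by AIC.
import numpy as np
from scipy import stats

amplitude = stats.burr12.rvs(c=2.0, d=1.5, scale=1.0, size=20000, random_state=0)

candidates = {
    'Burr XII (amplitude)': stats.burr12,
    'Rayleigh': stats.rayleigh,
    'Gamma': stats.gamma,
}
for name, dist in candidates.items():
    params = dist.fit(amplitude, floc=0)      # maximum-likelihood fit, location fixed at 0
    loglik = np.sum(dist.logpdf(amplitude, *params))
    k = len(params) - 1                       # free parameters (loc was fixed)
    print(f'{name:22s} AIC = {2 * k - 2 * loglik:.1f}')   # lower AIC = better fit

# The same samples give the other two quantities named in the hypothesis:
intensity = amplitude ** 2       # fit with stats.lomax
log_amp = np.log(amplitude)      # fit with stats.genlogistic

The fitted shape parameter of the power-law family (the Burr XII exponent, and correspondingly the Lomax and generalized logistic parameters) is the quantity the abstract suggests could serve as a tissue biomarker.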