Background: Changes in choroidal thickness are associated with various ocular diseases, and the choroid can be imaged using spectral-domain optical coherence tomography (SDOCT) and enhanced depth imaging OCT (EDI-OCT). New Method: Eighty macular SDOCT volumes from 80 patients were obtained using the Zeiss Cirrus machine. Eleven additional control subjects had two Cirrus scans done in one visit along with EDI-OCT using the Heidelberg Spectralis machine. To automatically segment choroidal layers from the OCT volumes, our graph-theoretic approach was utilized. The segmentation results were compared with reference standards from two graders, and the accuracy of automated segmentation was assessed using unsigned and signed border positioning and thickness errors and the Dice similarity coefficient (DSC). The repeatability and reproducibility of our choroidal thickness measurements were determined by the intraclass correlation coefficient (ICC), coefficient of variation (CV), and repeatability coefficient (RC). Results: The mean unsigned and signed border positioning errors for the choroidal inner surface are 3.39 ± 1.26 µm (mean ± SD) and −1.52 ± 1.63 µm, and for the choroidal outer surface 16.09 ± 6.21 µm and 4.73 ± 9.53 µm, respectively. The mean unsigned and signed choroidal thickness errors are 16.54 ± 6.47 µm and 6.25 ± 9.91 µm, and the mean DSC is 0.949 ± 0.025. The ICC (95% CI), CV, and RC values are 0.991 (0.977 to 0.997), 2.48%, and 3.15 µm for the repeatability study and 0.991 (0.977 to 0.997), 2.49%, and 0.53 µm for the reproducibility study, respectively. Comparison with Existing Method(s): The proposed method outperformed our previous method, which used choroidal vessel segmentation, and inter-grader variability. Conclusions: This automated segmentation method can reliably measure choroidal thickness using different OCT platforms.
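As a minimal sketch of the evaluation metrics named above (DSC and signed/unsigned border positioning errors), the following NumPy code shows how such quantities can be computed from a segmented and a reference choroid; the array shapes, the axial resolution value, and all variable names are illustrative assumptions, not the authors' implementation.

```python
# Illustrative sketch only: DSC and signed/unsigned border errors
# between an automated and a reference choroid segmentation.
import numpy as np

def dice_coefficient(auto_mask, ref_mask):
    """DSC between two binary choroid masks (True = choroid voxel)."""
    intersection = np.logical_and(auto_mask, ref_mask).sum()
    return 2.0 * intersection / (auto_mask.sum() + ref_mask.sum())

def border_errors(auto_surface, ref_surface, axial_res_um=1.95):
    """Signed and unsigned errors (in microns) between two surfaces given
    as depth indices per A-scan (rows = B-scans, cols = A-scans).
    The axial resolution here is an assumed value."""
    diff_um = (auto_surface - ref_surface) * axial_res_um
    return diff_um.mean(), np.abs(diff_um).mean()  # signed, unsigned

# Synthetic example data standing in for real segmentations
auto = np.zeros((10, 64, 64), bool); auto[:, 20:40, :] = True
ref = np.zeros((10, 64, 64), bool); ref[:, 21:40, :] = True
print("DSC:", dice_coefficient(auto, ref))
signed, unsigned = border_errors(np.full((10, 64), 20.0), np.full((10, 64), 21.0))
print("signed/unsigned border error (um):", signed, unsigned)
```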
This study demonstrates deep learning for automated artery-vein (AV) classification in optical coherence tomography angiography (OCTA). The AV-Net, a fully convolutional network (FCN) based on a modified U-shaped CNN architecture, incorporates en face OCT and OCTA to differentiate arteries and veins. In the multi-modal training process, the en face OCT acts as a near-infrared fundus image to provide vessel intensity profiles, while the OCTA contributes blood flow strength and vessel geometry features. A transfer learning process is also integrated to compensate for the limited dataset size available for OCTA, which is a relatively new imaging modality. With an average accuracy of 86.75%, the AV-Net promises a fully automated platform to foster clinical deployment of differential AV analysis in OCTA.
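The following PyTorch code is a minimal sketch of the kind of two-channel U-shaped FCN described above, taking stacked en face OCT and OCTA maps and predicting background/artery/vein per pixel; the depth, channel counts, and class layout are illustrative assumptions and this is not the published AV-Net.

```python
# Illustrative sketch only: a small U-shaped FCN with a 2-channel input
# (en face OCT + OCTA) and 3-class output (background, artery, vein).
import torch
import torch.nn as nn

def conv_block(in_ch, out_ch):
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, 3, padding=1), nn.BatchNorm2d(out_ch), nn.ReLU(inplace=True),
        nn.Conv2d(out_ch, out_ch, 3, padding=1), nn.BatchNorm2d(out_ch), nn.ReLU(inplace=True),
    )

class TinyAVNet(nn.Module):
    def __init__(self, in_ch=2, n_classes=3):
        super().__init__()
        self.enc1 = conv_block(in_ch, 32)
        self.enc2 = conv_block(32, 64)
        self.pool = nn.MaxPool2d(2)
        self.bottleneck = conv_block(64, 128)
        self.up2 = nn.ConvTranspose2d(128, 64, 2, stride=2)
        self.dec2 = conv_block(128, 64)
        self.up1 = nn.ConvTranspose2d(64, 32, 2, stride=2)
        self.dec1 = conv_block(64, 32)
        self.head = nn.Conv2d(32, n_classes, 1)

    def forward(self, x):
        e1 = self.enc1(x)
        e2 = self.enc2(self.pool(e1))
        b = self.bottleneck(self.pool(e2))
        d2 = self.dec2(torch.cat([self.up2(b), e2], dim=1))  # skip connection
        d1 = self.dec1(torch.cat([self.up1(d2), e1], dim=1))  # skip connection
        return self.head(d1)  # per-pixel class logits

# Example forward pass: batch of 1, en face OCT and OCTA stacked as channels
logits = TinyAVNet()(torch.randn(1, 2, 256, 256))
print(logits.shape)  # torch.Size([1, 3, 256, 256])
```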
Normal Pressure Hydrocephalus (NPH) is one of the few reversible forms of dementia. Due to their low cost and versatility, Computed Tomography (CT) scans have long been used as an aid in diagnosing intracerebral anomalies such as NPH. However, no well-defined and effective protocol currently exists for the analysis of CT-based ventricular, cerebral mass, and subarachnoid space volumes in the setting of NPH. The Evans ratio, an approximation of the ratio of ventricle to brain volume using only one 2D slice of the scan, has been proposed but is not robust. Instead of manually measuring a 2-dimensional proxy for the ratio of ventricle volume to brain volume, this study proposes an automated method of calculating the brain volumes for better recognition of NPH from a radiological standpoint. The method first aligns the subject CT volume to a common space through an affine transformation, then uses a random forest classifier to mask the relevant tissue types. A 3D morphological segmentation method is used to partition the brain volume, which in turn is used to train machine learning methods to classify subjects as non-NPH vs. NPH based on volumetric information. The proposed algorithm shows increased sensitivity compared with the Evans ratio thresholding method.
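As a rough sketch of the final classification step described above, the code below derives volumetric features from an already segmented, registered CT label volume and feeds them to a classifier; the label scheme (0 = background, 1 = parenchyma, 2 = ventricular CSF, 3 = subarachnoid CSF), the feature choices, and the use of a random forest classifier at this stage are all assumptions for illustration, not the study's pipeline.

```python
# Illustrative sketch only: volumetric features from a segmented CT label
# volume feeding a simple NPH vs. non-NPH classifier (replacing the 2D Evans
# ratio proxy). Labels, features, and data are placeholder assumptions.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def volume_features(label_vol, voxel_mm3=1.0):
    """Ventricle, parenchyma, and subarachnoid volumes plus simple ratios."""
    brain = (label_vol == 1).sum() * voxel_mm3
    vent = (label_vol == 2).sum() * voxel_mm3
    sas = (label_vol == 3).sum() * voxel_mm3
    total = brain + vent + sas + 1e-9
    return [vent / total, sas / total, vent / (brain + 1e-9)]

# Synthetic example: random label volumes standing in for segmented CTs
rng = np.random.default_rng(0)
X = np.array([volume_features(rng.integers(0, 4, size=(32, 32, 32))) for _ in range(40)])
y = rng.integers(0, 2, size=40)  # 0 = non-NPH, 1 = NPH (placeholder labels)
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
print(clf.predict(X[:5]))
```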
Automated drusen segmentation in retinal optical coherence tomography (OCT) scans is relevant for understanding age-related macular degeneration (AMD) risk and progression. This task is usually performed by segmenting the top and bottom anatomical interfaces that define drusen: the outer boundary of the retinal pigment epithelium (OBRPE) and Bruch's membrane (BM), respectively. In this paper we propose a novel multi-decoder architecture that tackles drusen segmentation as a multitask problem. Instead of training a multiclass model for OBRPE/BM segmentation, we use one decoder per target class and an extra one aiming at the area between the layers. We also introduce connections between each class-specific branch and this additional decoder to increase the regularization effect of the surrogate task. We validated our approach on a private data set of 166 early/intermediate AMD Spectralis volumes and a public data set of 200 AMD and control Bioptigen OCT volumes. Our method consistently outperformed several baselines in both layer and drusen segmentation evaluations.
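The following PyTorch code is one possible reading of the multi-decoder idea sketched above: a shared encoder, one decoder per target surface (OBRPE, BM), and an extra decoder for the area between them that also receives features from the two surface branches; layer sizes, the way the branches are connected, and all names are illustrative assumptions rather than the paper's implementation.

```python
# Illustrative sketch only: shared encoder, per-surface decoders, and an
# extra "between-layers area" decoder connected to both surface branches.
import torch
import torch.nn as nn

def block(in_ch, out_ch):
    return nn.Sequential(nn.Conv2d(in_ch, out_ch, 3, padding=1), nn.ReLU(inplace=True))

class MultiDecoderNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(block(1, 32), block(32, 64))
        self.dec_obrpe = block(64, 32)            # OBRPE surface branch
        self.dec_bm = block(64, 32)               # BM surface branch
        self.dec_area = block(64 + 32 + 32, 32)   # area branch sees both surface branches
        self.head_obrpe = nn.Conv2d(32, 1, 1)
        self.head_bm = nn.Conv2d(32, 1, 1)
        self.head_area = nn.Conv2d(32, 1, 1)

    def forward(self, x):
        f = self.encoder(x)
        f_obrpe, f_bm = self.dec_obrpe(f), self.dec_bm(f)
        f_area = self.dec_area(torch.cat([f, f_obrpe, f_bm], dim=1))
        return (self.head_obrpe(f_obrpe),   # OBRPE logits
                self.head_bm(f_bm),         # BM logits
                self.head_area(f_area))     # between-layers area logits

# Example: a single-channel B-scan; each output is a per-pixel logit map
outs = MultiDecoderNet()(torch.randn(1, 1, 128, 256))
print([o.shape for o in outs])
```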
With the FDA approval of Artificial Intelligence (AI) for point-of-care clinical diagnoses, model generalizability is of the utmost importance, as clinical decision-making must be domain-agnostic. One way to tackle the problem is to enlarge the dataset to include images from a multitude of domains; while this technique is ideal, the security requirements of medical data are a major limitation. Additionally, researchers with developed tools benefit from the addition of open-sourced data but are limited by differences in domains. Here, we investigated the implementation of a Cycle-Consistent Generative Adversarial Network (CycleGAN) for the domain adaptation of Optical Coherence Tomography (OCT) volumes. This study was done in collaboration with the Biomedical Optics Research Group and the Functional & Anatomical Imaging & Shape Analysis Lab at Simon Fraser University. We investigated a learning-based approach to adapting the domain of a publicly available dataset, the UK Biobank dataset (UKB). To evaluate the performance of domain adaptation, we utilized pre-existing retinal layer segmentation tools developed on a different set of OCT data (RETOUCH). This study provides insight into state-of-the-art tools for domain adaptation compared with traditional processing techniques, as well as a pipeline for adapting publicly available retinal data to the domains previously used by our collaborators.
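As a minimal sketch of the core CycleGAN mechanism used for domain adaptation here, the PyTorch code below wires up two tiny generators mapping between a source domain (e.g., UKB-style B-scans) and a target domain and trains them with a cycle-consistency loss; the generator architecture, tensor sizes, and data are placeholder assumptions, and the discriminators and adversarial losses of a full CycleGAN are omitted for brevity.

```python
# Illustrative sketch only: two generators and the cycle-consistency loss
# at the heart of CycleGAN-style OCT domain adaptation.
import torch
import torch.nn as nn

def make_generator():
    """A tiny image-to-image generator (illustrative stand-in)."""
    return nn.Sequential(
        nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(inplace=True),
        nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(inplace=True),
        nn.Conv2d(32, 1, 3, padding=1), nn.Tanh(),
    )

G_ab = make_generator()  # domain A (e.g., UKB) -> domain B (target domain)
G_ba = make_generator()  # domain B -> domain A
l1 = nn.L1Loss()
opt = torch.optim.Adam(list(G_ab.parameters()) + list(G_ba.parameters()), lr=2e-4)

real_a = torch.rand(4, 1, 128, 128) * 2 - 1  # placeholder B-scans in [-1, 1]
real_b = torch.rand(4, 1, 128, 128) * 2 - 1

fake_b = G_ab(real_a)  # translate A -> B
fake_a = G_ba(real_b)  # translate B -> A
cycle_loss = l1(G_ba(fake_b), real_a) + l1(G_ab(fake_a), real_b)

opt.zero_grad()
cycle_loss.backward()  # a full model would combine this with adversarial losses
opt.step()
print("cycle-consistency loss:", float(cycle_loss))
```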