Finding New Diagnostic Information for Detecting Glaucoma using Neural Networks


Abstract (in English)

We describe a new approach to automated glaucoma detection in 3D Spectral Domain Optical Coherence Tomography (OCT) optic nerve scans. First, we gathered a unique and diverse multi-ethnic dataset of OCT scans, consisting of glaucomatous and non-glaucomatous cases obtained from four tertiary care eye hospitals in four different countries. Using this longitudinal data, we achieved state-of-the-art results for automatically detecting glaucoma from a single raw OCT scan using a 3D deep learning system. These results approach the performance of human doctors across a variety of settings, heterogeneous datasets, and scanning environments. To verify the correctness and interpretability of the automated categorization, we used saliency maps to find the areas of focus for the model. Matching human doctor behavior, the model predictions indeed correlated with the conventional diagnostic parameters in the OCT printouts, such as the retinal nerve fiber layer (RNFL). We further used our model to find new areas in the 3D data that are not presently used by human doctors as diagnostic parameters for glaucoma. Specifically, we found that the Lamina Cribrosa (LC) region can be a valuable source of diagnostic information previously unavailable to doctors during routine clinical care because it lacks a quantitative printout; our model provides such a volumetric quantification of this region. We found that even when a majority of the RNFL is removed, the LC region can still distinguish glaucoma. This is clinically relevant for high myopes, in whom the RNFL is already thinned, and thus the LC region may help differentiate glaucoma in this confounding situation. We further generalize this approach into a new algorithm called DiagFind that provides a recipe for finding new diagnostic information in medical imagery that doctors may previously have been unable to use.
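As an illustration of the general approach described above (not the authors' actual architecture or training setup, which are not specified in this abstract), the following PyTorch sketch shows how a small 3D convolutional network could classify a raw OCT volume and how a gradient-based saliency map could highlight the voxels, such as those in the RNFL or LC region, that drive a prediction. The class name Simple3DCNN, the input dimensions, and all hyperparameters are hypothetical.

    # Illustrative sketch only: a minimal 3D CNN classifier for an OCT volume
    # plus a gradient saliency map. Architecture, input shape, and class names
    # are assumptions, not the paper's actual system.
    import torch
    import torch.nn as nn

    class Simple3DCNN(nn.Module):
        def __init__(self):
            super().__init__()
            self.features = nn.Sequential(
                nn.Conv3d(1, 8, kernel_size=3, padding=1),
                nn.ReLU(),
                nn.MaxPool3d(2),
                nn.Conv3d(8, 16, kernel_size=3, padding=1),
                nn.ReLU(),
                nn.AdaptiveAvgPool3d(1),
            )
            self.classifier = nn.Linear(16, 2)  # glaucoma vs. non-glaucoma

        def forward(self, x):
            x = self.features(x)
            return self.classifier(x.flatten(1))

    model = Simple3DCNN().eval()

    # A synthetic OCT volume (batch, channel, depth, height, width); real scan
    # dimensions and preprocessing would differ.
    volume = torch.randn(1, 1, 64, 64, 64, requires_grad=True)

    logits = model(volume)
    logits[0, 1].backward()  # gradient of the "glaucoma" logit w.r.t. the input

    # Voxel-wise saliency: large magnitudes mark regions that most influence
    # the prediction, which is how areas like the RNFL or LC can be surfaced.
    saliency = volume.grad.abs().squeeze()
    print(saliency.shape)

A trained model would replace the randomly initialized network above, and the resulting saliency volume could be overlaid on the scan to inspect which anatomical regions the classifier relies on.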
