We apply three optical coherence tomography (OCT) image analysis techniques to extract morphometric information from OCT images obtained on peripheral nerves of the rat. The accuracy of each technique is evaluated against histological measurements accurate to ±1 µm. The three OCT techniques are: 1) average depth-resolved profile (ADRP); 2) autoregressive spectral estimation (AR-SE); and 3) correlation of the derivative spectral estimation (CoD-SE). We introduce a scanning window to the ADRP technique that provides transverse resolution and improves epineurium thickness estimates: the number of analysed images showing agreement with histology increased from 2/10 to 5/10 (Kruskal-Wallis test, α = 0.05). A new method of estimating epineurium thickness, using the AR-SE technique, showed agreement with histology in 6/10 analysed images (Kruskal-Wallis test, α = 0.05). Using a tissue sample in which histology identified two fascicles with an estimated difference in mean fibre diameter of 4 µm, the AR-SE and CoD-SE techniques both correctly identified the fascicle with the larger fibre diameter distribution but underestimated the magnitude of this difference as 0.5 µm. The ability of OCT signal analysis techniques to extract accurate morphometric detail from peripheral nerve is promising but is restricted in depth by scattering in adipose and neural tissues.
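For context, the AR-SE step above amounts to fitting an autoregressive model to short depth-resolved signal segments and reading tissue structure from the resulting spectrum. Below is a minimal sketch of such a fit, assuming a Yule-Walker estimator in Python; the model order, segment length, and synthetic A-scan are illustrative assumptions, not the paper's parameters.

```python
# Minimal sketch of autoregressive (AR) spectral estimation of an OCT
# A-scan segment via the Yule-Walker equations. Order and window length
# are illustrative assumptions.
import numpy as np
from scipy.linalg import solve_toeplitz

def ar_psd(x, order=8, nfft=256):
    """Estimate the power spectral density of x with a Yule-Walker AR model."""
    x = np.asarray(x, float) - np.mean(x)
    n = len(x)
    # Biased autocorrelation estimates r[0..order]
    r = np.array([np.dot(x[:n - k], x[k:]) / n for k in range(order + 1)])
    # Solve the Yule-Walker system R a = r for the AR coefficients
    a = solve_toeplitz((r[:order], r[:order]), r[1:order + 1])
    sigma2 = r[0] - np.dot(a, r[1:order + 1])   # driving-noise variance
    # PSD = sigma2 / |A(e^{jw})|^2 with A(z) = 1 - sum_k a_k z^{-k}
    freqs = np.linspace(0, 0.5, nfft)           # cycles per sample
    phases = np.exp(-2j * np.pi * np.outer(freqs, np.arange(1, order + 1)))
    return freqs, sigma2 / np.abs(1 - phases @ a) ** 2

# Example: a synthetic A-scan segment with one dominant spatial frequency
depth = np.arange(512)
ascan = np.cos(2 * np.pi * 0.1 * depth) + 0.5 * np.random.randn(512)
f, psd = ar_psd(ascan)
print("peak spatial frequency:", f[np.argmax(psd)])
```

A peak in the AR spectrum corresponds to a dominant spatial period in the segment, which is the kind of feature spectral-estimation techniques of this family relate to fibre diameter.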
Since the introduction of optical coherence tomography (OCT), it has been possible to study the complex 3D morphological changes of the optic nerve head (ONH) tissues that occur with the progression of glaucoma. Although several deep learning (DL) techniques have recently been proposed for the automated extraction (segmentation) and quantification of these morphological changes, their device-specific nature and the difficulty of preparing manual segmentations (training data) limit their clinical adoption. With several new manufacturers and next-generation OCT devices entering the market, the complexity of deploying DL algorithms clinically is only increasing. To address this, we propose a DL-based 3D segmentation framework that is easily translatable across OCT devices in a label-free manner (i.e. without the need to manually re-segment data for each device). Specifically, we developed two sets of DL networks. The first (referred to as the enhancer) enhanced OCT image quality from three OCT devices and harmonized image characteristics across these devices. The second performed 3D segmentation of six important ONH tissue layers. We found that the use of the enhancer was critical for our segmentation network to achieve device independence. In other words, our 3D segmentation network, trained on any one of the three devices, successfully segmented ONH tissue layers from the other two devices with high performance (Dice coefficients > 0.92). With such an approach, we could automatically segment images from new OCT devices without ever needing manual segmentation data from such devices.
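The Dice coefficients quoted above are the standard per-tissue overlap score between predicted and manual label volumes. A minimal sketch of that computation follows; the label values and volume shapes are illustrative assumptions, not the authors' code.

```python
# Minimal sketch of the per-tissue Dice coefficient used to score
# segmentation overlap between two 3D label volumes.
import numpy as np

def dice(pred, truth, label):
    """Dice overlap for one tissue label between two label volumes."""
    p = (pred == label)
    t = (truth == label)
    inter = np.logical_and(p, t).sum()
    return 2.0 * inter / (p.sum() + t.sum())

# Example: two toy 3D label volumes with 6 tissue classes (0 = background)
rng = np.random.default_rng(0)
truth = rng.integers(0, 7, size=(8, 64, 64))
pred = truth.copy()
pred[0] = rng.integers(0, 7, size=(64, 64))   # corrupt one slice
print([round(dice(pred, truth, k), 3) for k in range(1, 7)])
```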
A sensation of fullness in the bladder is an everyday experience, yet the mechanisms that generate this sensation remain poorly understood. This is an important issue because of the clinical problems that can result when this system does not function properly. The aim of the study group activity was to develop mathematical models that describe the mechanics of bladder filling and how stretch modulates the firing rate of afferent nerves. Several models were developed, and these were qualitatively consistent with experimental data obtained from a mouse model.
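As one illustration of the kind of model in question, consider a deliberately minimal sketch (not one of the study group's actual models): a thin-walled elastic sphere filling at a constant rate, with afferent firing rate a saturating function of Laplace wall tension. All constants below are assumptions chosen for illustration, not fitted mouse data.

```python
# Minimal sketch of one candidate bladder-filling model: a thin-walled
# elastic sphere filling at constant rate, with afferent firing rate a
# saturating function of wall tension (Laplace's law). All constants
# are illustrative assumptions.
import numpy as np

Q = 1.0                  # filling rate (uL/min), assumed
E = 0.5                  # effective wall stiffness, assumed
r0 = 1.0                 # unstressed radius (mm), assumed
fmax, T50 = 40.0, 0.8    # max firing rate (Hz) and half-activation tension

t = np.linspace(0, 60, 601)              # minutes
V = (4 / 3) * np.pi * r0**3 + Q * t      # volume grows linearly with filling
r = (3 * V / (4 * np.pi)) ** (1 / 3)     # radius of the spherical bladder
P = E * (r - r0) / r0                    # linear-elastic pressure with stretch
T = P * r / 2                            # Laplace wall tension for a sphere
f = fmax * T / (T50 + T)                 # saturating afferent firing rate (Hz)
print(f"firing rate at t=0: {f[0]:.1f} Hz, at t=60 min: {f[-1]:.1f} Hz")
```

The saturating tension-to-rate map captures, in the simplest possible form, the qualitative observation that afferent firing rises with stretch but plateaus at high distension.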
Purpose: To develop a deep learning approach to denoise optical coherence tomography (OCT) B-scans of the optic nerve head (ONH). Methods: Volume scans consisting of 97 horizontal B-scans were acquired through the center of the ONH using a commercial OCT device (Spectralis) for both eyes of 20 subjects. For each eye, single-frame (without signal averaging) and multi-frame (75x signal averaging) volume scans were obtained. A custom deep learning network was then designed and trained with 2,328 clean B-scans (multi-frame B-scans) and their corresponding noisy B-scans (clean B-scans + Gaussian noise) to denoise the single-frame B-scans. The performance of the denoising algorithm was assessed qualitatively, and quantitatively on 1,552 B-scans using the signal-to-noise ratio (SNR), contrast-to-noise ratio (CNR), and mean structural similarity index (MSSIM). Results: The proposed algorithm successfully denoised unseen single-frame OCT B-scans. The denoised B-scans were qualitatively similar to their corresponding multi-frame B-scans, with enhanced visibility of the ONH tissues. The mean SNR increased from $4.02 \pm 0.68$ dB (single-frame) to $8.14 \pm 1.03$ dB (denoised). For all the ONH tissues, the mean CNR increased from $3.50 \pm 0.56$ (single-frame) to $7.63 \pm 1.81$ (denoised). The MSSIM increased from $0.13 \pm 0.02$ (single-frame) to $0.65 \pm 0.03$ (denoised) when compared with the corresponding multi-frame B-scans. Conclusions: Our deep learning algorithm can denoise a single-frame OCT B-scan of the ONH in under 20 ms, thus offering a framework to obtain superior-quality OCT B-scans with reduced scanning times and minimal patient discomfort.
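The training pairs described in the Methods are straightforward to construct: each clean multi-frame B-scan is corrupted with additive Gaussian noise to form its noisy counterpart. A minimal sketch of the pair construction and of one common SNR definition follows; the noise level, image shape, and region masks are illustrative assumptions, and the paper's exact metric definitions may differ.

```python
# Minimal sketch of building (noisy, clean) training pairs by adding
# Gaussian noise to multi-frame B-scans, plus one common SNR definition.
import numpy as np

def make_pair(clean, sigma=0.1, seed=0):
    """Return (noisy, clean) where noisy = clean + Gaussian noise."""
    rng = np.random.default_rng(seed)
    return clean + rng.normal(0.0, sigma, clean.shape), clean

def snr_db(img, signal_mask, background_mask):
    """SNR in dB: mean signal intensity over background standard deviation."""
    return 20 * np.log10(img[signal_mask].mean() / img[background_mask].std())

# Example with a toy B-scan: a bright tissue band over dark background
bscan = np.zeros((496, 384))
bscan[150:300, :] = 1.0
noisy, clean = make_pair(bscan)
sig = bscan > 0.5
print(f"SNR of noisy input: {snr_db(noisy, sig, ~sig):.1f} dB")
```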
We present a finite-difference time-domain (FDTD) model for the computation of A-line scans in time-domain optical coherence tomography (OCT). By simulating only the ends of the two arms of the interferometer and computing the interference signal in post-processing, it is possible to reduce the computational time required by the simulations and thus to simulate much larger environments. Moreover, it is possible to simulate successive A-lines, thereby obtaining a cross-section of the sample. In this paper we apply the model to two different samples: a glass rod filled with water-sucrose solution at different concentrations, and a peripheral nerve. This work demonstrates the feasibility of using OCT for non-invasive, direct optical monitoring of peripheral nerve activity, a long-sought goal of neuroscience.
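For a time-domain system, the post-processing step described above reduces to cross-correlating the field recorded at the end of the sample arm with the reference-arm field over the reference delay. A minimal sketch with synthetic stand-in fields follows; the sampling rate, pulse parameters, and single reflector are illustrative assumptions, not FDTD output.

```python
# Minimal sketch of the post-processing interference computation: the
# two simulated arm fields are cross-correlated over the reference
# delay, and the envelope of the result is the A-line profile.
import numpy as np
from scipy.signal import correlate, hilbert

fs = 1e15                       # sampling rate of the time series (assumed)
t = np.arange(4096) / fs
# Gaussian pulse on a ~1.3 um carrier, standing in for the source field
pulse = np.exp(-((t - 1e-12) / 5e-14) ** 2) * np.cos(2 * np.pi * 2.3e14 * t)

E_ref = pulse                          # field from the reference arm
E_sam = 0.3 * np.roll(pulse, 400)      # one reflector: delayed and attenuated

# Interference term vs. reference delay = cross-correlation of the fields
interf = correlate(E_sam, E_ref, mode="full")
aline = np.abs(hilbert(interf))        # envelope = A-line reflectivity profile
print("reflector found at lag sample:", np.argmax(aline) - (len(E_ref) - 1))
```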
Purpose: To develop a deep learning approach to digitally stain optical coherence tomography (OCT) images of the optic nerve head (ONH). Methods: A horizontal B-scan was acquired through the center of the ONH using OCT (Spectralis) for one eye of each of 100 subjects (40 normal and 60 glaucoma). All images were enhanced using adaptive compensation. A custom deep learning network was then designed and trained with the compensated images to digitally stain (i.e. highlight) 6 tissue layers of the ONH. The accuracy of our algorithm was assessed (against manual segmentations) using the Dice coefficient, sensitivity, and specificity. We further studied how compensation and the number of training images affected the performance of our algorithm. Results: For images it had not yet assessed, our algorithm was able to digitally stain the retinal nerve fiber layer + prelamina, the retinal pigment epithelium (RPE), all other retinal layers, the choroid, and the peripapillary sclera and lamina cribrosa. Across all tissues, the mean Dice coefficient was $0.84 \pm 0.03$, the mean sensitivity $0.92 \pm 0.03$, and the mean specificity $0.99 \pm 0.00$. Our algorithm performed significantly better when compensated images were used for training. Increasing the number of training images (from 10 to 40) did not significantly improve performance, except for the RPE. Conclusions: Our deep learning algorithm can simultaneously stain neural and connective tissues in ONH images. Our approach offers a framework to automatically measure multiple key structural parameters of the ONH that may be critical to improve glaucoma management.
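The sensitivity and specificity reported above are per-tissue classification scores of the digital staining against the manual segmentations. A minimal sketch of that evaluation follows; the label values, image shape, and toy prediction are illustrative assumptions, not the authors' pipeline.

```python
# Minimal sketch of per-tissue sensitivity and specificity between a
# predicted and a manual label image.
import numpy as np

def sens_spec(pred, truth, label):
    """Sensitivity and specificity for one tissue class in a label image."""
    p, t = (pred == label), (truth == label)
    tp = np.logical_and(p, t).sum()
    tn = np.logical_and(~p, ~t).sum()
    sens = tp / t.sum()          # true positives over all real positives
    spec = tn / (~t).sum()       # true negatives over all real negatives
    return sens, spec

# Example: a toy manual segmentation with 6 tissues (0 = background) and
# a prediction that agrees on ~90% of pixels
rng = np.random.default_rng(1)
truth = rng.integers(0, 7, size=(496, 768))
pred = np.where(rng.random(truth.shape) < 0.9, truth,
                rng.integers(0, 7, size=truth.shape))
for k in range(1, 7):
    s, sp = sens_spec(pred, truth, k)
    print(f"tissue {k}: sensitivity {s:.2f}, specificity {sp:.2f}")
```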