
AV-Net: Deep learning for fully automated artery-vein classification in optical coherence tomography angiography

 Added by Minhaj Nur Alam
Publication date: 2020
Research language: English





This study demonstrates deep learning for automated artery-vein (AV) classification in optical coherence tomography angiography (OCTA). The AV-Net, a fully convolutional network (FCN) based on a modified U-shaped CNN architecture, incorporates en face OCT and OCTA to differentiate arteries and veins. In the multi-modal training process, the en face OCT works as a near-infrared fundus image that provides vessel intensity profiles, while the OCTA contributes blood flow strength and vessel geometry features. A transfer learning process is also integrated to compensate for the limited dataset size available for OCTA, which is a relatively new imaging modality. With an average accuracy of 86.75%, the AV-Net promises a fully automated platform to foster clinical deployment of differential AV analysis in OCTA.
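To illustrate the multi-modal idea described above, the sketch below fuses an en face OCT image and an OCTA image as a two-channel input to a small U-shaped encoder-decoder with a three-class (background/artery/vein) output. This is a minimal PyTorch sketch, not the authors' AV-Net: the class name AVNetSketch, the layer sizes, and the 304x304 input size are illustrative assumptions.

```python
import torch
import torch.nn as nn

def conv_block(in_ch, out_ch):
    # Two 3x3 convolutions with ReLU, as in a standard U-Net stage
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, 3, padding=1), nn.ReLU(inplace=True),
        nn.Conv2d(out_ch, out_ch, 3, padding=1), nn.ReLU(inplace=True),
    )

class AVNetSketch(nn.Module):
    """Hypothetical two-channel (en face OCT + OCTA) encoder-decoder for A/V classification."""
    def __init__(self, n_classes=3):               # background / artery / vein
        super().__init__()
        self.enc1 = conv_block(2, 32)               # 2 input channels: OCT + OCTA
        self.enc2 = conv_block(32, 64)
        self.pool = nn.MaxPool2d(2)
        self.up = nn.ConvTranspose2d(64, 32, 2, stride=2)
        self.dec1 = conv_block(64, 32)
        self.head = nn.Conv2d(32, n_classes, 1)

    def forward(self, oct_img, octa_img):
        x = torch.cat([oct_img, octa_img], dim=1)   # fuse the two modalities channel-wise
        e1 = self.enc1(x)
        e2 = self.enc2(self.pool(e1))
        d1 = self.dec1(torch.cat([self.up(e2), e1], dim=1))  # skip connection
        return self.head(d1)                        # per-pixel A/V logits

# Example: one hypothetical 304x304 en face OCT/OCTA pair, batch size 1
model = AVNetSketch()
logits = model(torch.rand(1, 1, 304, 304), torch.rand(1, 1, 304, 304))
print(logits.shape)   # torch.Size([1, 3, 304, 304])
```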


Read More

Automated vascular segmentation on optical coherence tomography angiography (OCTA) is important for the quantitative analyses of retinal microvasculature in neuroretinal and systemic diseases. Despite recent improvements, artifacts continue to pose challenges in segmentation. Our study focused on removing the speckle noise artifact from OCTA images when performing segmentation. Speckle noise is common in OCTA and is particularly prominent over large non-perfusion areas. It may interfere with the proper assessment of retinal vasculature. In this study, we proposed a novel Supervision Vessel Segmentation network (SVS-net) to detect vessels of different sizes. The SVS-net includes a new attention-based module to describe vessel positions and facilitate the understanding of the network learning process. The model is efficient and explainable and could be utilized to reduce the need for manual labeling. Our SVS-net had better performance in accuracy, recall, F1 score, and Kappa score when compared to other well recognized models.
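The abstract above does not detail the SVS-net attention module, so the following is only a generic spatial-attention sketch in PyTorch, assuming a learned per-pixel mask that re-weights feature maps; the class name SpatialAttentionSketch and all sizes are hypothetical.

```python
import torch
import torch.nn as nn

class SpatialAttentionSketch(nn.Module):
    """Generic spatial attention: weight feature maps by a learned per-pixel mask."""
    def __init__(self, channels):
        super().__init__()
        self.mask = nn.Sequential(
            nn.Conv2d(channels, 1, kernel_size=1),  # collapse channels to one attention map
            nn.Sigmoid(),                           # values in (0, 1)
        )

    def forward(self, features):
        attention = self.mask(features)             # (N, 1, H, W) map of "vessel-likeness"
        return features * attention, attention      # re-weighted features + map for inspection

att = SpatialAttentionSketch(64)
weighted, attn_map = att(torch.rand(2, 64, 128, 128))
print(weighted.shape, attn_map.shape)   # [2, 64, 128, 128] and [2, 1, 128, 128]
```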
Background: Changes in choroidal thickness are associated with various ocular diseases, and the choroid can be imaged using spectral-domain optical coherence tomography (SDOCT) and enhanced depth imaging OCT (EDIOCT). New Method: Eighty macular SDOCT volumes from 80 patients were obtained using the Zeiss Cirrus machine. Eleven additional control subjects had two Cirrus scans done in one visit along with EDIOCT using the Heidelberg Spectralis machine. To automatically segment choroidal layers from the OCT volumes, our graph-theoretic approach was utilized. The segmentation results were compared with reference standards from two graders, and the accuracy of automated segmentation was calculated using unsigned to signed border positioning and thickness errors and the Dice similarity coefficient (DSC). The repeatability and reproducibility of our choroidal thickness measurements were determined by the intraclass correlation coefficient (ICC), coefficient of variation (CV), and repeatability coefficient (RC). Results: The mean unsigned to signed border positioning errors for the choroidal inner and outer surfaces are 3.39 ± 1.26 µm (mean ± SD) to −1.52 ± 1.63 µm and 16.09 ± 6.21 µm to 4.73 ± 9.53 µm, respectively. The mean unsigned to signed choroidal thickness errors are 16.54 ± 6.47 µm to 6.25 ± 9.91 µm, and the mean DSC is 0.949 ± 0.025. The ICC (95% CI), CV, and RC values are 0.991 (0.977 to 0.997), 2.48%, and 3.15 µm for the repeatability study and 0.991 (0.977 to 0.997), 2.49%, and 0.53 µm for the reproducibility study, respectively. Comparison with Existing Method(s): The proposed method outperformed our previous method, which used choroidal vessel segmentation, as well as inter-grader variability. Conclusions: This automated segmentation method can reliably measure choroidal thickness using different OCT platforms.
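Two of the evaluation metrics named above, the Dice similarity coefficient and the coefficient of variation, can be computed as in the short NumPy sketch below; the toy masks and thickness values are invented for illustration only.

```python
import numpy as np

def dice_coefficient(mask_a, mask_b):
    """Dice similarity coefficient between two binary segmentation masks."""
    a, b = mask_a.astype(bool), mask_b.astype(bool)
    intersection = np.logical_and(a, b).sum()
    return 2.0 * intersection / (a.sum() + b.sum())

def coefficient_of_variation(repeated_measurements):
    """CV (%) across repeated thickness measurements of the same subject."""
    values = np.asarray(repeated_measurements, dtype=float)
    return 100.0 * values.std(ddof=1) / values.mean()

# Toy example: two nearly identical choroid masks and two repeat thickness values (µm)
m1 = np.zeros((64, 64), dtype=int); m1[20:40, 10:50] = 1
m2 = np.zeros((64, 64), dtype=int); m2[21:40, 10:50] = 1
print(round(dice_coefficient(m1, m2), 3))
print(round(coefficient_of_variation([250.0, 256.0]), 2))
```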
Vessel stenosis is a major risk factor in cardiovascular diseases (CVD). To analyze the degree of vessel stenosis and support treatment management, extraction of the coronary artery area from Computed Tomographic Angiography (CTA) is regarded as a key procedure. However, manual segmentation by cardiologists can be time-consuming and subject to significant inter-observer variation. Although various computer-aided approaches have been developed to support segmentation of coronary arteries in CTA, the results remain unreliable due to the complex attenuation appearance of plaques, which are the cause of the stenosis. To overcome the difficulties caused by attenuation ambiguity, this paper proposes a 3D multi-channel U-Net architecture for fully automatic 3D coronary artery reconstruction from CTA. Beyond using the original CTA image, the main idea of the proposed approach is to incorporate a vesselness map into the input of the U-Net, which serves as reinforcing information to highlight the tubular structure of the coronary arteries. The experimental results show that the proposed approach achieves a Dice Similarity Coefficient (DSC) of 0.8, compared with around 0.6 attained by previous CNN approaches.
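The core idea of feeding a vesselness map alongside the original CTA can be sketched as follows, using scikit-image's Frangi filter as one possible vesselness measure (the paper's own vesselness computation may differ) and a random array standing in for a real CTA volume.

```python
import numpy as np
from skimage.filters import frangi   # one common choice of vesselness filter

# Hypothetical CTA volume (z, y, x); in practice this would come from a DICOM series
cta = np.random.rand(32, 64, 64).astype(np.float32)

# Vesselness map highlighting tubular structures (the Frangi filter accepts 3D arrays)
vesselness = frangi(cta, sigmas=(1, 2, 3))

# Two-channel input for a 3D U-Net: (channels, z, y, x) = original CTA + vesselness
multi_channel_input = np.stack([cta, vesselness], axis=0)
print(multi_channel_input.shape)   # (2, 32, 64, 64)
```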
Retinal artery/vein (A/V) classification lays the foundation for the quantitative analysis of retinal vessels, which is associated with potential risks of various cardiovascular and cerebral diseases. The topological connection relationship, which has proved effective in improving A/V classification performance for conventional graph-based methods, has not been exploited by deep learning based methods. In this paper, we propose a Topology Ranking Generative Adversarial Network (TR-GAN) to improve the topological connectivity of the segmented arteries and veins, and further boost A/V classification performance. A topology ranking discriminator based on ordinal regression is proposed to rank the topological connectivity levels of the ground truth, the generated A/V mask, and an intentionally shuffled mask. The ranking loss is back-propagated to the generator so that it generates better-connected A/V masks. In addition, a topology-preserving module with a triplet loss is proposed to extract high-level topological features and further narrow the feature distance between the predicted A/V mask and the ground truth. The proposed framework effectively increases the topological connectivity of the predicted A/V masks and achieves state-of-the-art A/V classification performance on the publicly available AV-DRIVE dataset.
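The topology-preserving triplet loss mentioned above can be illustrated with PyTorch's built-in TripletMarginLoss, treating ground-truth, generated, and shuffled A/V masks as anchor, positive, and negative examples; the topology feature extractor is omitted here and random feature vectors stand in for its outputs.

```python
import torch
import torch.nn as nn

# Triplet loss on high-level topology features: pull the generated A/V mask's features
# toward the ground truth and away from an intentionally shuffled mask.
triplet = nn.TripletMarginLoss(margin=1.0)

feat_dim = 128
anchor   = torch.rand(4, feat_dim)   # features of ground-truth masks (hypothetical extractor)
positive = torch.rand(4, feat_dim)   # features of generated A/V masks
negative = torch.rand(4, feat_dim)   # features of intentionally shuffled masks

loss = triplet(anchor, positive, negative)
print(loss.item())
```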
Optical coherence tomography angiography (OCTA) performs non-invasive visualization and characterization of microvasculature in research and clinical applications mainly in ophthalmology and dermatology. A wide variety of instruments, imaging protocols, processing methods and metrics have been used to describe the microvasculature, such that comparing different study outcomes is currently not feasible. With the goal of contributing to standardization of OCTA data analysis, we report a user-friendly, open-source toolbox, OCTAVA (OCTA Vascular Analyzer), to automate the pre-processing, segmentation, and quantitative analysis of en face OCTA maximum intensity projection images in a standardized workflow. We present each analysis step, including optimization of filtering and choice of segmentation algorithm, and definition of metrics. We perform quantitative analysis of OCTA images from different commercial and non-commercial instruments and samples and show OCTAVA can accurately and reproducibly determine metrics for characterization of microvasculature. Wide adoption could enable studies and aggregation of data on a scale sufficient to develop reliable microvascular biomarkers for early detection, and to guide treatment, of microvascular disease.
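As a rough illustration of the kind of quantification such a toolbox performs (not OCTAVA's actual code), the sketch below computes two common microvascular metrics, vessel density and skeleton length, from a hypothetical binary vessel mask of an en face OCTA image.

```python
import numpy as np
from skimage.morphology import skeletonize

# Hypothetical binary vessel mask from an en face OCTA segmentation step
vessel_mask = np.zeros((128, 128), dtype=bool)
vessel_mask[60:68, :] = True                   # a single horizontal "vessel"

# Two common microvascular metrics
vessel_density = vessel_mask.mean()            # fraction of perfused pixels
skeleton = skeletonize(vessel_mask)            # 1-pixel-wide centerlines
vessel_length_px = skeleton.sum()              # total centerline length in pixels

print(round(vessel_density, 4), int(vessel_length_px))
```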
