Retinal lesions play a vital role in the accurate classification of retinal abnormalities. Many researchers have proposed deep lesion-aware screening systems that analyze and grade the progression of retinopathy. However, to the best of our knowledge, no prior work has examined how well these systems generalize across multiple scanner specifications and multi-modal imagery. Towards this end, this paper presents a detailed evaluation of semantic segmentation, scene parsing, and hybrid deep learning systems for extracting retinal lesions such as intra-retinal fluid, sub-retinal fluid, hard exudates, drusen, and other chorioretinal anomalies from fused fundus and optical coherence tomography (OCT) imagery. Furthermore, we present a novel strategy that exploits the transferability of these models across multiple retinal scanner specifications. A total of 363 fundus and 173,915 OCT scans from seven publicly available datasets were used in this research, of which 297 fundus and 59,593 OCT scans were used for testing. Overall, a hybrid retinal analysis and grading network (RAGNet) with a ResNet-50 backbone performed best at extracting retinal lesions, achieving a mean Dice coefficient of 0.822. Moreover, the complete source code and its documentation are released at: http://biomisa.org/index.php/downloads/.
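For reference, the mean Dice coefficient reported above measures the overlap between a predicted lesion mask and its ground-truth annotation, averaged over the test scans. The following is a minimal NumPy sketch of that metric only; the function names and toy masks are illustrative and are not taken from the released source code at the link above.

```python
import numpy as np

def dice_coefficient(pred: np.ndarray, target: np.ndarray, eps: float = 1e-7) -> float:
    """Dice score between two binary masks: 2|A ∩ B| / (|A| + |B|)."""
    pred = pred.astype(bool)
    target = target.astype(bool)
    intersection = np.logical_and(pred, target).sum()
    return (2.0 * intersection + eps) / (pred.sum() + target.sum() + eps)

def mean_dice(preds, targets) -> float:
    """Average the per-scan Dice scores over a set of predicted/ground-truth mask pairs."""
    return float(np.mean([dice_coefficient(p, t) for p, t in zip(preds, targets)]))

# Illustrative usage with two toy 4x4 lesion masks (hypothetical data):
pred = np.array([[0, 1, 1, 0],
                 [0, 1, 1, 0],
                 [0, 0, 0, 0],
                 [0, 0, 0, 0]])
gt   = np.array([[0, 1, 1, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 0],
                 [0, 0, 0, 0]])
print(dice_coefficient(pred, gt))  # 2*3 / (4+3) ≈ 0.857
```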
The automatic diagnosis of various retinal diseases from fundus images is important for supporting clinical decision-making. However, developing such automatic solutions is challenging because it requires a large amount of human-annotated data. Rec
Medical image captioning automatically generates a medical description of the content of a given medical image. A traditional medical image captioning model creates a description based only on a single medical image input. Hence, an
Urban water is important for the urban ecosystem. Accurate and efficient detection of urban water from remote sensing data is of great significance for urban management and planning. In this paper, we propose a new method to combine Google Earth Eng
Kinship verification is a long-standing research challenge in computer vision. Visual differences in facial appearance have a significant effect on the recognition capability of kinship verification systems. We argue that aggregating multiple visual kn
Image annotation aims to assign a given image a variable number of class labels corresponding to diverse visual concepts. In this paper, we address two main issues in large-scale image annotation: 1) how to learn a rich feature representation