
A Deep Retinal Image Quality Assessment Network with Salient Structure Priors

Published by Ziwen Xu
Publication date: 2020
Research field: Informatics engineering
Paper language: English





Retinal image quality assessment is an essential prerequisite for the diagnosis of retinal diseases. Its goal is to identify retinal images in which the anatomic structures and lesions that most attract ophthalmologists' attention are exhibited clearly and definitely, while rejecting poor-quality fundus images. Motivated by this, we mimic the way ophthalmologists assess the quality of retinal images and propose a method termed SalStructIQA. First, we define two salient structure priors for automated retinal quality assessment. One is the large-size salient structures, including the optic disc region and large exudates. The other is the tiny-size salient structures, which mainly include vessels. Then we incorporate the two proposed salient structure priors into a deep convolutional neural network (CNN) to shift the focus of the CNN to salient structures. Accordingly, we develop two CNN architectures: Dual-branch SalStructIQA and Single-branch SalStructIQA. Dual-branch SalStructIQA contains two CNN branches, one guided by the large-size salient structures and the other by the tiny-size salient structures. Single-branch SalStructIQA contains one CNN branch, which is guided by the concatenation of the salient structures of both large and tiny size. Experimental results on the Eye-Quality dataset show that our proposed Dual-branch SalStructIQA outperforms state-of-the-art methods for retinal image quality assessment, and that Single-branch SalStructIQA is much more lightweight than state-of-the-art deep retinal image quality assessment methods while still achieving competitive performance.
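The abstract does not specify the backbone, but the dual-branch wiring can be sketched. Below is a minimal PyTorch sketch, assuming each prior arrives as a single-channel saliency map concatenated with the RGB input; the layer sizes, the 4-channel input convention, and all names are illustrative placeholders, not the authors' exact configuration.

```python
import torch
import torch.nn as nn

class DualBranchSalStructIQA(nn.Module):
    """Illustrative sketch: two CNN branches, each guided by one
    salient-structure prior (large-size vs. tiny-size), fused for
    a final quality decision. Layer sizes are placeholders."""

    def __init__(self, num_classes=2):
        super().__init__()
        # Each branch sees RGB plus one single-channel prior map (4 channels).
        self.large_branch = self._make_branch(in_channels=4)
        self.tiny_branch = self._make_branch(in_channels=4)
        self.classifier = nn.Linear(2 * 64, num_classes)

    @staticmethod
    def _make_branch(in_channels):
        return nn.Sequential(
            nn.Conv2d(in_channels, 32, 3, padding=1), nn.ReLU(inplace=True),
            nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(inplace=True),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )

    def forward(self, rgb, large_prior, tiny_prior):
        # Concatenating each prior with the RGB input steers the
        # branch toward its salient structures.
        f_large = self.large_branch(torch.cat([rgb, large_prior], dim=1))
        f_tiny = self.tiny_branch(torch.cat([rgb, tiny_prior], dim=1))
        return self.classifier(torch.cat([f_large, f_tiny], dim=1))

x = torch.randn(1, 3, 224, 224)        # retinal image
p_large = torch.rand(1, 1, 224, 224)   # optic disc / exudate prior
p_tiny = torch.rand(1, 1, 224, 224)    # vessel prior
logits = DualBranchSalStructIQA()(x, p_large, p_tiny)
```

The Single-branch variant would instead feed one branch a 5-channel concatenation of the RGB image and both prior maps.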




Read also

114 - Ziwen Xu, Beiji Zou, Qing Liu 2020
Retinal image quality assessment is an essential task in the diagnosis of retinal diseases. Recently, deep models for grading the quality of retinal images have emerged. Current state-of-the-art methods either directly transfer classification networks originally designed for natural images to the quality classification of retinal images, or introduce extra image quality priors via multiple CNN branches or independent CNNs. This paper proposes a dark and bright channel prior guided deep network for retinal image quality assessment, called GuidedNet. Specifically, the dark and bright channel priors are embedded into the start layer of the network to improve the discriminative ability of the deep features. In addition, we re-annotate a new retinal image quality dataset, called RIQA-RFMiD, for further validation. Experimental results on a public retinal image quality dataset, Eye-Quality, and our re-annotated dataset, RIQA-RFMiD, demonstrate the effectiveness of the proposed GuidedNet.
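The dark and bright channel priors themselves are well defined (patch-wise channel minima and maxima, in the spirit of the dark channel prior from image dehazing). A minimal PyTorch sketch follows; the patch size and the way the priors are stacked with the input are assumptions, not GuidedNet's published embedding.

```python
import torch
import torch.nn.functional as F

def dark_bright_channels(img, patch=7):
    """Compute dark and bright channel priors for a batch of RGB
    images (B, 3, H, W) in [0, 1]: the dark channel is the patch-wise
    minimum over all channels, the bright channel the patch-wise
    maximum. The patch size is an assumed hyperparameter."""
    pad = patch // 2
    min_c = img.min(dim=1, keepdim=True).values  # per-pixel channel min
    max_c = img.max(dim=1, keepdim=True).values  # per-pixel channel max
    # Min-pooling via negated max-pooling; max-pooling directly.
    dark = -F.max_pool2d(-min_c, patch, stride=1, padding=pad)
    bright = F.max_pool2d(max_c, patch, stride=1, padding=pad)
    return dark, bright

imgs = torch.rand(2, 3, 224, 224)
dark, bright = dark_bright_channels(imgs)
# GuidedNet-style guidance: stack the priors with the RGB input at the
# start layer of the network (illustrative, not the exact embedding).
guided_input = torch.cat([imgs, dark, bright], dim=1)  # (2, 5, 224, 224)
```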
106 - Baoyun Peng, Min Liu, Heng Yang 2021
Face recognition has made significant progress in recent years thanks to deep convolutional neural networks (CNNs). In many face recognition (FR) scenarios, face images are acquired from a sequence with huge intra-variations. These intra-variations, which are mainly caused by low-quality face images, make recognition performance unstable. Previous works have focused on ad-hoc methods for selecting frames from a video, or used face image quality assessment (FIQA) methods that consider only a particular distortion or a combination of several distortions. In this work, we present an efficient no-reference image quality assessment for FR that directly links image quality assessment (IQA) and FR. More specifically, we propose a new measurement to evaluate image quality without any reference. Based on the proposed quality measurement, we propose a deep Tiny Face Quality network (tinyFQnet) to learn a quality prediction function from data. We evaluate the proposed method with different powerful FR models on two classical video-based (or template-based) benchmarks: IJB-B and YTF. Extensive experiments show that, although tinyFQnet is much smaller than the alternatives, the proposed method outperforms state-of-the-art quality assessment methods in terms of effectiveness and efficiency.
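The abstract does not describe tinyFQnet's layout, so the following is only a rough PyTorch sketch of a compact no-reference quality regressor in that spirit; every layer size and the sigmoid-scored head are assumptions.

```python
import torch
import torch.nn as nn

class TinyFaceQualityNet(nn.Module):
    """Illustrative sketch of a small no-reference quality regressor:
    a compact CNN mapping a face crop to a scalar quality score.
    The actual tinyFQnet layout is not specified in the abstract."""

    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.head = nn.Linear(64, 1)  # scalar quality score

    def forward(self, x):
        return torch.sigmoid(self.head(self.features(x)))  # score in (0, 1)

score = TinyFaceQualityNet()(torch.rand(1, 3, 112, 112))
```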
Retinal image quality assessment (RIQA) is essential for controlling the quality of retinal imaging and guaranteeing the reliability of diagnoses by ophthalmologists or automated analysis systems. Existing RIQA methods focus on the RGB color-space and are developed on small datasets with binary quality labels (i.e., 'Accept' and 'Reject'). In this paper, we first re-annotate an Eye-Quality (EyeQ) dataset with 28,792 retinal images from the EyePACS dataset, based on a three-level quality grading system (i.e., 'Good', 'Usable' and 'Reject'), for evaluating RIQA methods. Our RIQA dataset is characterized by its large scale, multi-level grading, and multi-modality. Then, we analyze the influence of different color-spaces on RIQA, and propose a simple yet efficient deep network, named Multiple Color-space Fusion Network (MCF-Net), which integrates the different color-space representations at both the feature level and the prediction level to predict image quality grades. Experiments on our EyeQ dataset show that MCF-Net obtains state-of-the-art performance, outperforming other deep learning methods. Furthermore, we also evaluate diabetic retinopathy (DR) detection methods on images of different quality, and demonstrate that the performance of automated diagnostic systems depends heavily on image quality.
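The two fusion levels can be sketched in PyTorch as one small branch per color-space, per-branch prediction heads (prediction-level fusion), plus a head over the concatenated features (feature-level fusion). The branch sizes, the averaging scheme, and the choice of RGB/HSV/LAB inputs are assumptions for illustration, not the published configuration.

```python
import torch
import torch.nn as nn

class MultiColorSpaceFusion(nn.Module):
    """Illustrative sketch of the MCF-Net idea: one small CNN per
    color-space representation, fused at both the feature level and
    the prediction level. Sizes and weighting are placeholders."""

    def __init__(self, num_grades=3):  # 'Good', 'Usable', 'Reject'
        super().__init__()
        self.branches = nn.ModuleList(
            [self._make_branch() for _ in range(3)]  # RGB / HSV / LAB
        )
        self.heads = nn.ModuleList(
            [nn.Linear(64, num_grades) for _ in range(3)]
        )
        self.fusion_head = nn.Linear(3 * 64, num_grades)  # feature-level

    @staticmethod
    def _make_branch():
        return nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )

    def forward(self, rgb, hsv, lab):
        feats = [b(x) for b, x in zip(self.branches, (rgb, hsv, lab))]
        per_space = [h(f) for h, f in zip(self.heads, feats)]
        fused = self.fusion_head(torch.cat(feats, dim=1))
        # Prediction-level fusion: average the per-space logits with
        # the feature-level prediction (an assumed combination rule).
        return (fused + sum(per_space) / 3) / 2
```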
In this paper, we propose an image quality transformer (IQT) that successfully applies a transformer architecture to a perceptual full-reference image quality assessment (IQA) task. Perceptual representations are becoming more important in image quality assessment. In this context, we extract perceptual feature representations from each of the input images using a convolutional neural network (CNN) backbone. The extracted feature maps are fed into the transformer encoder and decoder in order to compare the reference and distorted images. Following the approach of transformer-based vision models, we use an extra learnable quality embedding and a position embedding. The output of the transformer is passed to a prediction head in order to predict the final quality score. The experimental results show that our proposed model achieves outstanding performance on the standard IQA datasets. For a large-scale IQA dataset containing output images of generative models, our model also shows promising results. The proposed IQT was ranked first among 13 participants in the NTIRE 2021 perceptual image quality assessment challenge. Our work will be an opportunity to further expand the approach for the perceptual IQA task.
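A minimal PyTorch sketch of that pipeline is given below, assuming a stand-in CNN backbone, a learnable quality token prepended to the decoder input, and learned position embeddings; the dimensions and the encoder/decoder role assignment are illustrative, not the published IQT configuration.

```python
import torch
import torch.nn as nn

class ImageQualityTransformerSketch(nn.Module):
    """Illustrative sketch of an IQT-style pipeline: a shared CNN
    backbone tokenizes reference and distorted images; the encoder
    reads one stream, the decoder the other, and an extra learnable
    quality token feeds the prediction head. Sizes are assumptions."""

    def __init__(self, d_model=64, n_tokens=49):
        super().__init__()
        self.backbone = nn.Sequential(  # stand-in for the CNN backbone
            nn.Conv2d(3, d_model, 7, stride=4, padding=3),
            nn.ReLU(inplace=True),
            nn.AdaptiveAvgPool2d(7),
        )
        self.pos = nn.Parameter(torch.zeros(1, n_tokens, d_model))
        self.quality_token = nn.Parameter(torch.zeros(1, 1, d_model))
        self.transformer = nn.Transformer(
            d_model=d_model, nhead=4, num_encoder_layers=2,
            num_decoder_layers=2, batch_first=True,
        )
        self.head = nn.Linear(d_model, 1)  # scalar quality score

    def _tokens(self, img):
        f = self.backbone(img)                          # (B, C, 7, 7)
        return f.flatten(2).transpose(1, 2) + self.pos  # (B, 49, C)

    def forward(self, ref, dist):
        b = ref.size(0)
        src = self._tokens(ref)  # encoder stream: reference features
        tgt = torch.cat(         # decoder stream: quality token + distorted
            [self.quality_token.expand(b, -1, -1), self._tokens(dist)], dim=1
        )
        out = self.transformer(src, tgt)  # (B, 1 + 49, C)
        return self.head(out[:, 0])       # read off the quality token

score = ImageQualityTransformerSketch()(
    torch.rand(1, 3, 224, 224), torch.rand(1, 3, 224, 224)
)
```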
Retinal image segmentation plays an important role in automatic disease diagnosis. This task is very challenging because complex structure and texture information are mixed in a retinal image, and distinguishing the two is difficult. Existing methods handle texture and structure jointly, which may bias models toward recognizing textures and thus result in inferior segmentation performance. To address this, we propose a segmentation strategy that seeks to separate the structure and texture components and significantly improves performance. To this end, we design a structure-texture demixing network (STD-Net) that can process structures and textures differently and better. Extensive experiments on two retinal image segmentation tasks (i.e., blood vessel segmentation, and optic disc and cup segmentation) demonstrate the effectiveness of the proposed method.
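The abstract leaves the demixing mechanism to the paper; as a stand-in, the classic fixed decomposition below (Gaussian-smoothed structure plus residual texture, in PyTorch) illustrates the kind of split STD-Net learns. The kernel size and sigma are arbitrary choices.

```python
import torch
import torch.nn.functional as F

def structure_texture_split(img, kernel_size=9, sigma=3.0):
    """Illustrative structure-texture demixing: a Gaussian-smoothed
    component serves as 'structure', the residual as 'texture'.
    STD-Net learns its split; this fixed filter is only a stand-in."""
    # Build a normalized 1-D Gaussian, then a depthwise 2-D kernel.
    coords = torch.arange(kernel_size, dtype=img.dtype) - kernel_size // 2
    g = torch.exp(-coords ** 2 / (2 * sigma ** 2))
    g = g / g.sum()
    k2d = torch.outer(g, g).repeat(img.size(1), 1, 1, 1)  # (C, 1, k, k)
    # Depthwise convolution: smooth each channel independently.
    structure = F.conv2d(img, k2d, padding=kernel_size // 2,
                         groups=img.size(1))
    texture = img - structure  # residual holds the fine texture
    return structure, texture

img = torch.rand(1, 3, 256, 256)
structure, texture = structure_texture_split(img)
# A demixing segmenter could then process the two components with
# separate sub-networks and merge their predictions.
```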