
Learning Structural Coherence Via Generative Adversarial Network for Single Image Super-Resolution

Published by Yuanzhuo Li
Publication date: 2021
Research language: English





Among the major remaining challenges for single image super-resolution (SISR) is the capacity to recover coherent images whose global shapes and local details conform to the human vision system. Recent generative adversarial network (GAN) based SISR methods have yielded overall realistic SR images; however, unpleasant textures accompanied by structural distortions often appear in local regions. To address these issues, we introduce a gradient branch into the generator that preserves structural information by restoring high-resolution gradient maps during the SR process. In addition, we employ a U-Net based discriminator that considers both the whole image and per-pixel authenticity, which encourages the generator to maintain the overall coherence of the reconstructed images. Moreover, we study the objective functions and add an LPIPS perceptual loss to generate more realistic and natural details. Experimental results show that our proposed method outperforms state-of-the-art perception-driven SR methods on the perception index (PI) and obtains more geometrically consistent and visually pleasing textures in natural image restoration.
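The gradient branch described above is trained to restore high-resolution gradient maps as a structural target. As a minimal, hypothetical sketch (not the authors' exact operator), such a target can be derived from an image with simple forward differences:

```python
import numpy as np

def gradient_map(img):
    """Gradient-magnitude map of a 2-D image via forward differences.

    Illustrative sketch only: the paper's gradient branch restores HR
    gradient maps; this shows one simple way such a structural target
    could be computed from an image."""
    gx = np.zeros_like(img, dtype=float)
    gy = np.zeros_like(img, dtype=float)
    gx[:, :-1] = img[:, 1:] - img[:, :-1]   # horizontal differences
    gy[:-1, :] = img[1:, :] - img[:-1, :]   # vertical differences
    return np.sqrt(gx ** 2 + gy ** 2)
```

A flat region produces a zero map, while edges light up, which is why matching such maps penalizes structural distortion.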




Read also

During the training phase, more connections (e.g. the channel concatenation in the last layer of DenseNet) mean more occupied GPU memory and lower GPU utilization, requiring more training time. The increase in training time is also not conducive to deploying SR algorithms, which is why we abandoned DenseNet as the basic network. Furthermore, we abandoned this paper due to its limitation of being applicable only to medical images. Please see our latest work on general images at arXiv:1911.03464.
Residual and dense neural networks have greatly promoted the development of image super-resolution (SR) and produced many impressive results. Based on our observation, although more layers and connections can always improve performance, the increase in model parameters is not conducive to deploying SR algorithms. Furthermore, algorithms supervised by L1/L2 loss can achieve considerable performance on traditional metrics such as PSNR and SSIM, yet they produce blurry and over-smoothed outputs without sufficient high-frequency details, i.e., a poor perceptual index (PI). To address these issues, this paper develops a perception-oriented single image SR algorithm via dual relativistic average generative adversarial networks. In the generator, a novel residual channel attention block is proposed to recalibrate the significance of specific channels, further increasing feature expression capability. The parameters of the convolutional layers within each block are shared to expand the receptive fields while keeping the number of tunable parameters unchanged. The feature maps are upsampled using sub-pixel convolution to obtain the reconstructed high-resolution images. The discriminator consists of two relativistic average discriminators that work in the pixel domain and the feature domain, respectively, fully exploiting the prior that half of the data in a mini-batch are fake. Different weighted combinations of perceptual loss and adversarial loss are used to supervise the generator and balance perceptual quality against objective results. Experimental results and ablation studies show that our proposed algorithm rivals state-of-the-art SR algorithms, both perceptually (PI minimization) and objectively (PSNR maximization), with fewer parameters.
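The channel recalibration this abstract describes is, at its core, a squeeze-and-excitation style gate: pool each channel to a scalar, pass the vector through a small bottleneck, and scale each channel by a sigmoid gate. A hypothetical NumPy sketch (the paper's exact residual channel attention block is not reproduced here; names and shapes are ours):

```python
import numpy as np

def channel_attention(x, w1, w2):
    """Squeeze-and-excitation style channel recalibration (sketch).

    x  : feature maps, shape (C, H, W)
    w1 : (C // r, C) bottleneck weights
    w2 : (C, C // r) expansion weights
    """
    s = x.mean(axis=(1, 2))                # squeeze: global average pool -> (C,)
    z = np.maximum(w1 @ s, 0.0)            # excitation: ReLU bottleneck
    g = 1.0 / (1.0 + np.exp(-(w2 @ z)))    # sigmoid gates in (0, 1), one per channel
    return x * g[:, None, None]            # rescale each channel by its gate
```

In the full block this gated output would be added back through a residual connection, which is what makes the recalibration cheap to train.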
High-resolution magnetic resonance (MR) images can provide fine-grained anatomical information, but acquiring such data requires a long scanning time. In this paper, a framework called Fused Attentive Generative Adversarial Networks (FA-GAN) is proposed to generate super-resolution MR images from low-resolution MR images, which effectively reduces the scanning time while preserving high-resolution detail. In the FA-GAN framework, a local fusion feature block, consisting of three different-pass networks using different convolution kernels, is proposed to extract image features at different scales. A global feature fusion module, including a channel attention module, a self-attention module, and a fusion operation, is designed to enhance the important features of the MR image. Moreover, spectral normalization is introduced to make the discriminator network stable. 40 sets of 3D MR images (each containing 256 slices) are used to train the network, and 10 sets are used to test the proposed method. The experimental results show that the PSNR and SSIM values of the super-resolution MR images generated by the proposed FA-GAN method are higher than those of state-of-the-art reconstruction methods.
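The spectral normalization mentioned above stabilizes the discriminator by dividing each weight matrix by its largest singular value, usually estimated with power iteration. A minimal NumPy sketch of that core operation (not the per-layer hook a deep-learning framework would install):

```python
import numpy as np

def spectral_normalize(W, n_iter=100):
    """Normalize W by its leading singular value via power iteration.

    Sketch of the core of spectral normalization; frameworks apply this
    per layer with a cached singular vector updated each step."""
    u = np.random.RandomState(0).randn(W.shape[0])
    v = None
    for _ in range(n_iter):
        v = W.T @ u
        v /= np.linalg.norm(v)
        u = W @ v
        u /= np.linalg.norm(u)
    sigma = u @ W @ v                      # estimate of the largest singular value
    return W / sigma
```

Constraining every layer's spectral norm to roughly 1 bounds the Lipschitz constant of the discriminator, which is what tames the GAN training dynamics.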
Convolutional neural networks are the most successful models in single image super-resolution. Deeper networks, residual connections, and attention mechanisms have further improved their performance. However, these strategies often improve the reconstruction performance at the expense of considerably increasing the computational cost. This paper introduces a new lightweight super-resolution model based on an efficient method for residual feature and attention aggregation. In order to make efficient use of the residual features, these are hierarchically aggregated into feature banks for posterior usage at the network output. In parallel, a lightweight hierarchical attention mechanism extracts the most relevant features from the network into attention banks, improving the final output and preventing information loss through the successive operations inside the network. Therefore, the processing is split into two independent paths of computation that can be carried out simultaneously, resulting in a highly efficient and effective model for reconstructing fine details on high-resolution images from their low-resolution counterparts. Our proposed architecture surpasses state-of-the-art performance on several datasets, while maintaining a relatively low computation and memory footprint.
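The feature-bank idea above amounts to collecting the outputs of successive blocks and fusing them once at the network output. A hypothetical NumPy sketch under that reading (channel concatenation followed by a 1x1 convolution written as a channel-mixing matrix; names and shapes are ours, not the paper's):

```python
import numpy as np

def fuse_feature_bank(features, w):
    """Aggregate per-block features into a bank and fuse them.

    features : list of arrays, each (C_i, H, W), one per block
    w        : (C_out, sum C_i) channel-mixing matrix (a 1x1 conv)
    """
    bank = np.concatenate(features, axis=0)        # (C_total, H, W) feature bank
    c, h, width = bank.shape
    fused = w @ bank.reshape(c, -1)                # mix channels at every pixel
    return fused.reshape(w.shape[0], h, width)
```

Fusing once at the output, instead of concatenating inside every block as DenseNet-style designs do, is what keeps the intermediate activations, and hence the memory footprint, small.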
Recently, deep convolutional neural networks (CNNs) have been widely explored in single image super-resolution (SISR) and have contributed remarkable progress. However, most existing CNN-based SISR methods do not adequately explore contextual information in the feature extraction stage and pay little attention to the final high-resolution (HR) image reconstruction step, hindering the desired SR performance. To address these two issues, in this paper, we propose a two-stage attentive network (TSAN) for accurate SISR in a coarse-to-fine manner. Specifically, we design a novel multi-context attentive block (MCAB) to make the network focus on more informative contextual features. Moreover, we present a refined attention block (RAB) that explores useful cues in HR space for reconstructing fine-detailed HR images. Extensive evaluations on four benchmark datasets demonstrate the efficacy of our proposed TSAN in terms of quantitative metrics and visual effects. Code is available at https://github.com/Jee-King/TSAN.
