
Single Image Super-resolution via Dense Blended Attention Generative Adversarial Network for Clinical Diagnosis

Submitted by Yuan Ma
Publication date: 2019
Language: English





During the training phase, more connections (e.g., the channel concatenation in the last layer of DenseNet) mean more occupied GPU memory and lower GPU utilization, which requires more training time. The increase in training time is also not conducive to the practical deployment of SR algorithms. This is why we abandoned DenseNet as the basic network. Furthermore, we withdrew this paper because it is limited to medical images only. Please see our latest work on general images at arXiv:1911.03464.
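As a minimal sketch of why this matters (not code from either paper), the PyTorch block below shows how channel concatenation in a dense block makes the feature width, and therefore the stored activations, grow with every layer; all channel counts and layer names are illustrative assumptions.

```python
import torch
import torch.nn as nn

class TinyDenseBlock(nn.Module):
    """Minimal dense block: each layer's input is the concatenation of all
    previous feature maps, so the channel count (and activation memory)
    grows linearly with depth. Sizes are illustrative only."""
    def __init__(self, in_channels=64, growth_rate=32, num_layers=4):
        super().__init__()
        self.layers = nn.ModuleList()
        channels = in_channels
        for _ in range(num_layers):
            self.layers.append(nn.Sequential(
                nn.Conv2d(channels, growth_rate, kernel_size=3, padding=1),
                nn.ReLU(inplace=True),
            ))
            channels += growth_rate  # concatenation widens the next layer's input

    def forward(self, x):
        features = [x]
        for layer in self.layers:
            out = layer(torch.cat(features, dim=1))  # all earlier maps stay alive
            features.append(out)
        return torch.cat(features, dim=1)  # 64 + 4*32 = 192 channels here

block = TinyDenseBlock()
print(block(torch.randn(1, 64, 48, 48)).shape)  # torch.Size([1, 192, 48, 48])
```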


Read also

Among the major remaining challenges for single image super-resolution (SISR) is the capacity to recover coherent images with global shapes and local details conforming to the human vision system. Recent generative adversarial network (GAN) based SISR methods have yielded overall realistic SR images; however, unpleasant textures accompanied by structural distortions still appear in local regions. To address these issues, we introduce a gradient branch into the generator to preserve structural information by restoring high-resolution gradient maps during the SR process. In addition, we utilize a U-net based discriminator to consider both the whole image and the detailed per-pixel authenticity, which encourages the generator to maintain the overall coherence of the reconstructed images. Moreover, we study objective functions and add an LPIPS perceptual loss to generate more realistic and natural details. Experimental results show that our proposed method outperforms state-of-the-art perception-driven SR methods in perceptual index (PI), and obtains more geometrically consistent and visually pleasing textures in natural image restoration.
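A minimal sketch of the gradient-branch idea described above, assuming fixed Sobel kernels on a grayscale tensor; the function name and the loss pairing in the comment are illustrative, not the authors' implementation.

```python
import torch
import torch.nn.functional as F

def gradient_map(img: torch.Tensor) -> torch.Tensor:
    """Approximate per-pixel gradient magnitude of a grayscale batch (N,1,H,W)
    with fixed Sobel kernels; a gradient branch can be supervised to restore
    this map for the high-resolution target."""
    kx = torch.tensor([[-1., 0., 1.], [-2., 0., 2.], [-1., 0., 1.]]).view(1, 1, 3, 3)
    ky = kx.transpose(2, 3)            # Sobel-y is the transpose of Sobel-x
    gx = F.conv2d(img, kx, padding=1)
    gy = F.conv2d(img, ky, padding=1)
    return torch.sqrt(gx ** 2 + gy ** 2 + 1e-6)

# A structural loss could then compare gradient maps of the SR output and the
# HR ground truth, e.g. l_grad = F.l1_loss(gradient_map(sr_gray), gradient_map(hr_gray))
```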
Convolutional neural networks are the most successful models in single image super-resolution. Deeper networks, residual connections, and attention mechanisms have further improved their performance. However, these strategies often improve the reconstruction performance at the expense of considerably increasing the computational cost. This paper introduces a new lightweight super-resolution model based on an efficient method for residual feature and attention aggregation. To make efficient use of the residual features, these are hierarchically aggregated into feature banks for later use at the network output. In parallel, a lightweight hierarchical attention mechanism extracts the most relevant features from the network into attention banks to improve the final output and prevent information loss through the successive operations inside the network. The processing is therefore split into two independent computation paths that can be carried out simultaneously, resulting in a highly efficient and effective model for reconstructing fine details in high-resolution images from their low-resolution counterparts. Our proposed architecture surpasses state-of-the-art performance on several datasets, while maintaining a relatively low computation and memory footprint.
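The following is a rough, self-contained sketch of what "feature banks" and "attention banks" could look like in PyTorch; module names, channel counts, and the exact fusion and gating choices are assumptions for illustration, not the paper's architecture.

```python
import torch
import torch.nn as nn

class FeatureBankSR(nn.Module):
    """Sketch only: residual features from each block are collected into a
    'bank' and fused at the network output, while a parallel lightweight
    attention path reweights the fused features."""
    def __init__(self, channels=48, num_blocks=4, scale=2):
        super().__init__()
        self.head = nn.Conv2d(3, channels, 3, padding=1)
        self.blocks = nn.ModuleList(
            nn.Sequential(nn.Conv2d(channels, channels, 3, padding=1), nn.ReLU(True),
                          nn.Conv2d(channels, channels, 3, padding=1))
            for _ in range(num_blocks))
        self.fuse = nn.Conv2d(channels * num_blocks, channels, 1)   # feature bank fusion
        self.attn = nn.Sequential(nn.AdaptiveAvgPool2d(1),          # attention bank gate
                                  nn.Conv2d(channels * num_blocks, channels, 1),
                                  nn.Sigmoid())
        self.tail = nn.Sequential(nn.Conv2d(channels, 3 * scale ** 2, 3, padding=1),
                                  nn.PixelShuffle(scale))

    def forward(self, x):
        feat = self.head(x)
        bank = []
        for block in self.blocks:
            feat = feat + block(feat)    # residual block
            bank.append(feat)            # hierarchically aggregate for later use
        bank = torch.cat(bank, dim=1)
        return self.tail(self.fuse(bank) * self.attn(bank))
```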
Residual and dense neural networks, which have greatly promoted the development of image super-resolution (SR), have produced many impressive results. Based on our observation, although more layers and connections can always improve performance, the increase in model parameters is not conducive to the practical deployment of SR algorithms. Furthermore, algorithms supervised by an L1/L2 loss can achieve considerable performance on traditional metrics such as PSNR and SSIM, yet they produce blurry and over-smoothed outputs without sufficient high-frequency details, i.e., a poor perceptual index (PI). To address these issues, this paper develops a perception-oriented single image SR algorithm via dual relativistic average generative adversarial networks. In the generator, a novel residual channel attention block is proposed to recalibrate the significance of specific channels, further increasing feature expression capability. The parameters of the convolutional layers within each block are shared to expand the receptive field while keeping the number of tunable parameters unchanged. The feature maps are upsampled using sub-pixel convolution to obtain the reconstructed high-resolution images. The discriminator consists of two relativistic average discriminators that work in the pixel domain and the feature domain, respectively, fully exploiting the prior that half of the data in a mini-batch are fake. Different weighted combinations of perceptual loss and adversarial loss are utilized to supervise the generator and balance perceptual quality against objective results. Experimental results and ablation studies show that our proposed algorithm rivals state-of-the-art SR algorithms, both perceptually (PI minimization) and objectively (PSNR maximization), with fewer parameters.
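A hedged sketch of the two generator ideas mentioned above: weight sharing to enlarge the receptive field without extra parameters, plus channel recalibration. The block structure and hyper-parameters are assumptions, not the authors' code.

```python
import torch
import torch.nn as nn

class SharedConvChannelAttentionBlock(nn.Module):
    """Hypothetical block: one 3x3 convolution is applied twice (shared
    weights) to enlarge the receptive field without adding parameters, then a
    squeeze-and-excitation style gate recalibrates the channels."""
    def __init__(self, channels=64, reduction=16):
        super().__init__()
        self.conv = nn.Conv2d(channels, channels, 3, padding=1)  # reused twice
        self.act = nn.ReLU(inplace=True)
        self.gate = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(channels, channels // reduction, 1), nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, 1), nn.Sigmoid())

    def forward(self, x):
        y = self.act(self.conv(x))
        y = self.conv(y)              # same weights, larger effective receptive field
        return x + y * self.gate(y)   # channel-wise recalibration + residual connection
```

The sub-pixel upsampling at the end of such a generator is typically implemented with `nn.PixelShuffle`, which rearranges r² feature channels into an r-times larger spatial grid.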
Deep convolutional neural networks (DCNNs) have achieved impressive performance in single image super-resolution (SISR). To further improve the performance, existing CNN-based methods generally focus on designing deeper network architectures. However, we argue that blindly increasing network depth is not the most sensible way. In this paper, we propose a novel end-to-end Residual Neuron Attention Network (RNAN) for more efficient and effective SISR. Structurally, our RNAN is a sequential integration of well-designed Global Context-enhanced Residual Groups (GCRGs), which extract super-resolved features from coarse to fine. Our GCRG is designed with two novelties. First, a Residual Neuron Attention (RNA) mechanism is proposed in each block of the GCRG to reveal the relevance of neurons for better feature representation. Furthermore, a Global Context (GC) block is embedded into RNAN at the end of each GCRG to effectively model global contextual information. Experimental results demonstrate that our RNAN achieves results comparable to state-of-the-art methods in terms of both quantitative metrics and visual quality, yet with a simpler network architecture.
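As an illustrative sketch of a global-context style block like the GC block mentioned above (the module layout and reduction ratio are assumptions modeled on the common GCNet-style formulation, not the paper's code):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class GlobalContextBlock(nn.Module):
    """Rough sketch: a softmax attention map pools the whole feature map into
    one context vector, which is transformed and added back to every position."""
    def __init__(self, channels=64, reduction=8):
        super().__init__()
        self.mask = nn.Conv2d(channels, 1, kernel_size=1)
        self.transform = nn.Sequential(
            nn.Conv2d(channels, channels // reduction, 1),
            nn.LayerNorm([channels // reduction, 1, 1]),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, 1))

    def forward(self, x):
        n, c, h, w = x.shape
        weights = F.softmax(self.mask(x).view(n, 1, h * w), dim=-1)        # (N,1,HW)
        context = torch.bmm(x.view(n, c, h * w), weights.transpose(1, 2))  # (N,C,1)
        context = context.view(n, c, 1, 1)
        return x + self.transform(context)   # broadcast add of the global context
```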
Huapeng Wu, Jie Gui, Jun Zhang (2021)
Recently, deep convolutional neural network methods have achieved excellent performance in image super-resolution (SR), but they cannot be easily applied to embedded devices due to their large memory cost. To solve this problem, we propose a pyramidal dense attention network (PDAN) for lightweight image super-resolution in this paper. In our method, the proposed pyramidal dense learning gradually increases the width of the densely connected layers inside a pyramidal dense block to extract deep features efficiently. Meanwhile, an adaptive group convolution, in which the number of groups grows linearly with the dense convolutional layers, is introduced to relieve the parameter explosion. Besides, we also present a novel joint attention to capture cross-dimension interactions between the spatial dimensions and the channel dimension in an efficient way, providing rich discriminative feature representations. Extensive experimental results show that our method achieves superior performance in comparison with state-of-the-art lightweight SR methods.
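A speculative sketch of pyramidal dense learning with adaptive group convolution as described, growing both the layer width and the group count down the block; the channel counts are chosen only so that they divide the group counts and are not the paper's settings.

```python
import torch
import torch.nn as nn

class PyramidalDenseBlock(nn.Module):
    """Illustrative sketch only: the width of each densely connected layer grows
    down the block, and the number of groups grows with it so that per-layer
    parameters stay bounded. All hyper-parameters are assumptions."""
    def __init__(self, in_channels=36, growth=16, num_layers=3):
        super().__init__()
        self.layers = nn.ModuleList()
        channels = in_channels
        for i in range(num_layers):
            groups = i + 1                      # groups grow linearly with depth
            self.layers.append(nn.Sequential(
                nn.Conv2d(channels, growth * groups, 3, padding=1, groups=groups),
                nn.ReLU(inplace=True)))
            channels += growth * groups         # pyramidal width: 36 -> 52 -> 84 -> 132
        self.fuse = nn.Conv2d(channels, in_channels, 1)

    def forward(self, x):
        feats = [x]
        for layer in self.layers:
            feats.append(layer(torch.cat(feats, dim=1)))
        return x + self.fuse(torch.cat(feats, dim=1))

block = PyramidalDenseBlock()
print(block(torch.randn(1, 36, 32, 32)).shape)  # torch.Size([1, 36, 32, 32])
```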