Although deep convolutional neural networks (CNNs) have obtained outstanding performance in image super-resolution (SR), their computational cost grows rapidly as the models get deeper and wider. Meanwhile, the features of intermediate layers are treated equally across channels, which hinders the representational capability of CNNs. In this paper, we propose an attention-aware linear depthwise network, named ALDNet, to address these problems in single image SR. Specifically, linear depthwise convolution allows CNN-based SR models to preserve the information useful for reconstructing a super-resolved image while reducing the computational burden. Furthermore, we design an attention-aware branch that enhances the representational ability of the depthwise convolution layers by making full use of the interdependency among depthwise filters. Experiments on publicly available benchmark datasets show that ALDNet achieves superior performance to traditional depthwise separable convolutions in terms of both quantitative metrics and visual quality.
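As a concrete illustration of the idea, below is a minimal PyTorch sketch of one attention-aware linear depthwise block. It assumes a squeeze-and-excitation-style attention branch applied to the depthwise outputs and a 1x1 pointwise projection with no nonlinearity in between (keeping the depthwise-pointwise pair linear); the module name AttentionAwareLinearDepthwise, the reduction ratio, and this exact wiring are illustrative assumptions, not the paper's verified ALDNet design.

    # Hypothetical sketch, assuming SE-style attention over depthwise channels;
    # the actual ALDNet architecture may differ in its details.
    import torch
    import torch.nn as nn

    class AttentionAwareLinearDepthwise(nn.Module):
        def __init__(self, channels: int, reduction: int = 16):
            super().__init__()
            # Depthwise convolution: one 3x3 filter per channel (groups=channels).
            self.depthwise = nn.Conv2d(channels, channels, kernel_size=3,
                                       padding=1, groups=channels, bias=False)
            # "Linear" pointwise projection: 1x1 conv with no activation between
            # it and the depthwise conv, so the pair remains a single linear map
            # (an assumption about what "linear depthwise" means here).
            self.pointwise = nn.Conv2d(channels, channels, kernel_size=1, bias=False)
            # Attention branch: global pooling plus a bottleneck produces one
            # weight per depthwise channel, modeling filter interdependency.
            self.attention = nn.Sequential(
                nn.AdaptiveAvgPool2d(1),
                nn.Conv2d(channels, channels // reduction, kernel_size=1),
                nn.ReLU(inplace=True),
                nn.Conv2d(channels // reduction, channels, kernel_size=1),
                nn.Sigmoid(),
            )

        def forward(self, x: torch.Tensor) -> torch.Tensor:
            y = self.depthwise(x)
            y = y * self.attention(y)  # rescale each depthwise channel
            return self.pointwise(y)

    # Usage: a 64-channel feature map passes through with its shape unchanged.
    block = AttentionAwareLinearDepthwise(channels=64)
    out = block(torch.randn(1, 64, 48, 48))  # -> torch.Size([1, 64, 48, 48])

Because the attention weights are computed from globally pooled statistics, the branch adds only a small number of parameters relative to the depthwise and pointwise convolutions, which is consistent with the stated goal of improving representation while keeping the computational burden low.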