
Scalable Deep Compressive Sensing

Posted by: Zhonghao Zhang
Publication date: 2021
Research language: English





Deep learning has been applied to image compressive sensing (CS) to enhance reconstruction performance. However, most existing deep learning methods train a separate model for each subsampling ratio, which imposes an additional hardware burden. In this paper, we develop a general framework named scalable deep compressive sensing (SDCS) that enables scalable sampling and reconstruction (SSR) for all existing end-to-end-trained models. In the proposed framework, images are measured and initialized linearly. Two sampling masks are introduced to flexibly control the subsampling ratios used in sampling and reconstruction, respectively. To make the reconstruction model adapt to any subsampling ratio, a training strategy dubbed scalable training is developed, in which the model is trained with the sampling matrix and the initialization matrix at various subsampling ratios by applying different sampling-matrix masks. Experimental results show that models equipped with SDCS achieve SSR without changing their structure while maintaining good performance, and that SDCS outperforms other SSR methods.
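To make the masked sampling and scalable training ideas concrete, below is a minimal PyTorch-style sketch. The block size, maximum subsampling ratio, ratio set, and module names are illustrative assumptions, not the authors' released implementation.

```python
import torch
import torch.nn as nn

class ScalableSampler(nn.Module):
    """Block-wise linear sampling and initialization with ratio masks (sketch)."""
    def __init__(self, block=32, max_ratio=0.5):
        super().__init__()
        n = block * block                       # pixels per image block
        m_max = int(max_ratio * n)              # measurements kept at the largest ratio
        self.n, self.m_max = n, m_max
        self.phi = nn.Parameter(0.01 * torch.randn(m_max, n))       # sampling matrix
        self.phi_init = nn.Parameter(0.01 * torch.randn(n, m_max))  # linear initialization matrix

    def ratio_mask(self, ratio):
        # Binary mask that keeps only the first `ratio * n` measurement rows.
        mask = torch.zeros(self.m_max, 1)
        mask[: int(ratio * self.n)] = 1.0
        return mask

    def forward(self, blocks, ratio):
        # blocks: (batch, n) vectorized image blocks
        mask = self.ratio_mask(ratio).to(blocks.device)
        y = (mask * self.phi) @ blocks.t()        # masked linear sampling
        x0 = (self.phi_init * mask.t()) @ y       # masked linear initialization
        return y.t(), x0.t()


def scalable_training_step(sampler, recon_net, optimizer, blocks,
                           ratios=(0.1, 0.2, 0.3, 0.4, 0.5)):
    # Draw a different subsampling ratio each step so a single model learns
    # to sample and reconstruct at all of them.
    ratio = ratios[torch.randint(len(ratios), (1,)).item()]
    y, x0 = sampler(blocks, ratio)
    x_hat = recon_net(x0)                         # any end-to-end reconstruction network
    loss = nn.functional.mse_loss(x_hat, blocks)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

At test time, only the ratio passed to the sampler changes; the reconstruction network and its weights stay the same, which is the SSR property the abstract describes.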


Read also

To capture high-speed videos using a two-dimensional detector, video snapshot compressive imaging (SCI) is a promising system, where the video frames are coded by different masks and then compressed into a snapshot measurement. Following this, efficient algorithms are desired to reconstruct the high-speed frames, where the state-of-the-art results are achieved by deep learning networks. However, these networks are usually trained for specific small-scale masks and often have high demands on training time and GPU memory, and are hence not flexible to (i) a new mask of the same size and (ii) a larger-scale mask. We address these challenges by developing a Meta Modulated Convolutional Network for SCI reconstruction, dubbed MetaSCI. MetaSCI is composed of a shared backbone for different masks and light-weight meta-modulation parameters that evolve into different modulation parameters for each mask, thus offering fast adaptation to new masks (or systems) and the ability to scale to large data. Extensive simulation and real-data results demonstrate the superior performance of our proposed approach. Our code is available at https://github.com/xyvirtualgroup/MetaSCI-CVPR2021.
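For intuition, here is a small sketch of the shared-backbone-plus-per-mask-modulation idea: the convolutional weights are shared across masks, while each mask owns a light-weight set of channel-wise scale/shift parameters. The two-layer backbone and the FiLM-style modulation are assumptions for illustration, not MetaSCI's actual architecture.

```python
import torch
import torch.nn as nn

class ModulatedBackbone(nn.Module):
    """Shared convolutional weights plus light-weight per-mask modulation (sketch)."""
    def __init__(self, channels=64, num_masks=4):
        super().__init__()
        self.conv_in = nn.Conv2d(1, channels, 3, padding=1)    # shared across masks
        self.conv_out = nn.Conv2d(channels, 1, 3, padding=1)   # shared across masks
        # Per-mask channel-wise scale and shift: the only mask-specific parameters.
        self.scale = nn.Parameter(torch.ones(num_masks, channels, 1, 1))
        self.shift = nn.Parameter(torch.zeros(num_masks, channels, 1, 1))

    def forward(self, x, mask_id):
        h = torch.relu(self.conv_in(x))
        h = h * self.scale[mask_id] + self.shift[mask_id]      # mask-specific modulation
        return self.conv_out(h)
```

Adapting to a new mask then only means fitting one extra scale/shift pair while the shared backbone stays fixed, which is what makes the adaptation fast and memory-cheap.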
Exploiting intrinsic structures in sparse signals underpins the recent progress in compressive sensing (CS). The key to exploiting such structures is to achieve two desirable properties: generality (i.e., the ability to fit a wide range of signals with diverse structures) and adaptability (i.e., being adaptive to a specific signal). Most existing approaches, however, achieve only one of these two properties. In this study, we propose a novel adaptive Markov random field sparsity prior for CS, which not only captures a broad range of sparsity structures but also adapts to each sparse signal by refining the parameters of the sparsity prior with respect to the compressed measurements. To maximize adaptability, we also propose a new sparse signal estimation scheme in which sparse signal, support, noise and signal parameter estimation are unified into a variational optimization problem, which can be effectively solved with an alternating minimization scheme. Extensive experiments on three real-world datasets demonstrate the effectiveness of the proposed method in recovery accuracy, noise tolerance, and runtime.
Most deep network methods for compressive sensing reconstruction suffer from the black-box characteristic of DNNs. In this paper, a deep neural network with interpretable motion estimation, named CSMCNet, is proposed. The network is able to realize high-quality reconstruction of video compressive sensing by unfolding the iterative steps of optimization-based algorithms. A DNN-based, multi-hypothesis motion estimation module is designed to improve the reconstruction quality, and a residual module is employed to further narrow the gap between the reconstruction results and the original signal. Besides, we propose an interpolation module with a corresponding training strategy to realize scalable CS reconstruction, which makes it possible to use the same model to decode various compression ratios. Experiments show that a PSNR of 29.34 dB can be achieved at a 2% CS ratio (compressed by 98%), which is superior to other state-of-the-art methods. Moreover, the interpolation module proves to be effective, with significant cost savings and acceptable performance losses.
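As a rough illustration of how a single decoder can serve several compression ratios, the sketch below resamples variable-length block measurements to a fixed length before decoding. This is a generic stand-in for the idea, not necessarily CSMCNet's actual interpolation module; the block size, ratios, and target length are assumed values.

```python
import torch
import torch.nn.functional as F

def interpolate_measurements(y, target_len=256):
    # y: (batch, m) block measurements, where m varies with the compression ratio.
    # Resample every measurement vector to a common length so one decoder fits all.
    return F.interpolate(y.unsqueeze(1), size=target_len,
                         mode='linear', align_corners=False).squeeze(1)

y_low = torch.randn(8, 20)     # e.g. measurements at a 2% CS ratio of a 32x32 block
y_high = torch.randn(8, 410)   # e.g. measurements at a 40% CS ratio
z_low, z_high = interpolate_measurements(y_low), interpolate_measurements(y_high)
assert z_low.shape == z_high.shape == (8, 256)   # both can feed the same decoder
```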
Classification and identification of the materials lying over or beneath the Earths surface have long been a fundamental but challenging research topic in geoscience and remote sensing (RS) and have garnered a growing concern owing to the recent adva ncements of deep learning techniques. Although deep networks have been successfully applied in single-modality-dominated classification tasks, yet their performance inevitably meets the bottleneck in complex scenes that need to be finely classified, due to the limitation of information diversity. In this work, we provide a baseline solution to the aforementioned difficulty by developing a general multimodal deep learning (MDL) framework. In particular, we also investigate a special case of multi-modality learning (MML) -- cross-modality learning (CML) that exists widely in RS image classification applications. By focusing on what, where, and how to fuse, we show different fusion strategies as well as how to train deep networks and build the network architecture. Specifically, five fusion architectures are introduced and developed, further being unified in our MDL framework. More significantly, our framework is not only limited to pixel-wise classification tasks but also applicable to spatial information modeling with convolutional neural networks (CNNs). To validate the effectiveness and superiority of the MDL framework, extensive experiments related to the settings of MML and CML are conducted on two different multimodal RS datasets. Furthermore, the codes and datasets will be available at https://github.com/danfenghong/IEEE_TGRS_MDL-RS, contributing to the RS community.
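The "where to fuse" question can be illustrated with two extremes: fusing the modalities at the input versus fusing their predictions at the output. The tiny networks below are placeholders for illustration and do not correspond to the five fusion architectures of the paper.

```python
import torch
import torch.nn as nn

class EarlyFusion(nn.Module):
    """Concatenate the two modalities at the input (early fusion)."""
    def __init__(self, c1, c2, num_classes):
        super().__init__()
        self.head = nn.Sequential(
            nn.Conv2d(c1 + c2, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, num_classes, 1))

    def forward(self, x1, x2):
        return self.head(torch.cat([x1, x2], dim=1))

class LateFusion(nn.Module):
    """Average per-modality predictions at the output (late fusion)."""
    def __init__(self, c1, c2, num_classes):
        super().__init__()
        self.branch1 = nn.Conv2d(c1, num_classes, 1)
        self.branch2 = nn.Conv2d(c2, num_classes, 1)

    def forward(self, x1, x2):
        return 0.5 * (self.branch1(x1) + self.branch2(x2))
```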
We apply reinforcement learning to video compressive sensing to adapt the compression ratio. Specifically, video snapshot compressive imaging (SCI), which captures high-speed video using a low-speed camera, is considered in this work, in which multiple (B) video frames can be reconstructed from a single snapshot measurement. One research gap in previous studies is how to adapt B in the video SCI system for different scenes. In this paper, we fill this gap using reinforcement learning (RL). An RL model, as well as various convolutional neural networks for reconstruction, are learned to achieve adaptive sensing of video SCI systems. Furthermore, the performance of an object detection network that uses the video SCI measurements directly, without reconstruction, is also used to perform RL-based adaptive video compressive sensing. Our proposed adaptive SCI method can thus be implemented at low cost and in real time. Our work takes the technology one step further towards real applications of video SCI.
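To sketch the reinforcement-learning idea, the toy policy-gradient loop below picks a frame count B per measurement and updates the policy with REINFORCE. The candidate B values, the tiny policy network, and the placeholder reward function are illustrative assumptions rather than the paper's actual RL formulation.

```python
import torch
import torch.nn as nn

CANDIDATE_B = [8, 16, 24, 32]    # hypothetical frame counts the agent can choose

policy = nn.Sequential(          # tiny policy network over a downsampled measurement
    nn.Flatten(),
    nn.Linear(64 * 64, 64), nn.ReLU(),
    nn.Linear(64, len(CANDIDATE_B)))
optimizer = torch.optim.Adam(policy.parameters(), lr=1e-4)

def select_and_update(measurement, reward_fn):
    # measurement: (batch, 1, 64, 64) snapshot measurement (downsampled here);
    # reward_fn maps chosen actions to a per-sample reward tensor of shape (batch,),
    # e.g. the reconstruction quality obtained with that B (placeholder).
    logits = policy(measurement)
    dist = torch.distributions.Categorical(logits=logits)
    action = dist.sample()                              # index into CANDIDATE_B
    reward = reward_fn(action)
    loss = -(dist.log_prob(action) * reward).mean()     # REINFORCE update
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return [CANDIDATE_B[a] for a in action.tolist()]
```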