
Generative adversarial network based single pixel imaging

Published by Fengqiang Li
Publication date: 2021
Research field: Electronic engineering
Paper language: English





Single-pixel imaging can reconstruct two-dimensional images of a scene with only a single-pixel detector. It has been widely used for imaging in non-visible bands (e.g., near-infrared and X-ray) where focal-plane array sensors are challenging to manufacture. In this paper, we propose a generative adversarial network based reconstruction algorithm for single-pixel imaging that achieves efficient reconstruction within 10 ms at higher quality. We verify the proposed method with both synthetic and real-world experiments, and demonstrate high-quality reconstruction of a real-world plaster object at a 0.05 sampling rate.
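The measurement process behind the abstract above can be sketched as a linear forward model: each single-pixel reading is the inner product of one illumination pattern with the scene, so a 0.05 sampling rate means using only 5% as many patterns as pixels. The toy scene, random patterns, and pseudo-inverse recovery below are illustrative assumptions; the paper replaces the recovery step with a GAN-based network.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy scene: a 16x16 image flattened to a vector x.
n = 16 * 16
x = np.zeros(n)
x[60:80] = 1.0  # a simple bright stripe

# Forward model: each measurement y_i is the inner product of one
# illumination pattern a_i with the scene, so y = A @ x. At a 0.05
# sampling rate we take only m = 0.05 * n patterns.
m = int(0.05 * n)          # 12 measurements for 256 pixels
A = rng.standard_normal((m, n))
y = A @ x                  # bucket-detector readings

# Naive linear reconstruction via pseudo-inverse; a learned network
# (as in the paper) would map y to x instead.
x_hat = np.linalg.pinv(A) @ y
print(y.shape, x_hat.shape)  # (12,) (256,)
```

With far fewer measurements than pixels the pseudo-inverse recovery is heavily underdetermined, which is exactly why learned priors such as a GAN generator help at low sampling rates.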


Read also

Single-pixel imaging is a novel imaging scheme that has gained popularity due to its large computational gain and its potential as a low-cost alternative for imaging beyond the visible spectrum. Traditional reconstruction methods struggle to produce a clear recovery when the number of illumination patterns from the spatial light modulator is limited. As a remedy, several deep-learning-based solutions have been proposed, but they lack good generalization ability due to their architectural setup and loss functions. In this paper, we propose a generative adversarial network-based reconstruction framework for single-pixel imaging, referred to as SPI-GAN. Our method can reconstruct images with 17.92 dB PSNR and 0.487 SSIM even when the sampling ratio drops to 5%. This enables much faster reconstruction, making our method suitable for single-pixel video. Furthermore, our ResNet-like generator architecture leads to useful representation learning that allows us to reconstruct completely unseen objects. The experimental results demonstrate that SPI-GAN achieves a significant performance gain, e.g., nearly 3 dB PSNR, over the current state-of-the-art method.
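The 17.92 dB figure quoted above is a peak signal-to-noise ratio, a standard fidelity metric for reconstructions. A minimal sketch of how PSNR is computed (the toy reference and reconstruction are assumptions for illustration; SSIM, the other quoted metric, needs a windowed structural comparison and is omitted here):

```python
import numpy as np

def psnr(reference, reconstruction, peak=1.0):
    # Peak signal-to-noise ratio in dB; `peak` is the maximum
    # possible pixel value (1.0 for normalized images).
    mse = np.mean((reference - reconstruction) ** 2)
    return 10.0 * np.log10(peak ** 2 / mse)

ref = np.ones((8, 8))
rec = ref + 0.1            # uniform error of 0.1 -> MSE = 0.01
print(round(psnr(ref, rec), 1))  # 20.0
```

Higher PSNR means lower mean squared error on a log scale, so a ~3 dB gain corresponds to roughly halving the reconstruction MSE.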
Two novel visual cryptography (VC) schemes are proposed by combining VC with single-pixel imaging (SPI) for the first time. It is pointed out that the overlapping of visual key images in VC is similar to the superposition of pixel intensities by a single-pixel detector in SPI. In the first scheme, QR-code VC is designed by using opaque sheets instead of transparent sheets. The secret image can be recovered when identical illumination patterns are projected onto multiple visual key images and a single detector is used to record the total light intensities. In the second scheme, the secret image is shared by multiple illumination pattern sequences and it can be recovered when the visual key patterns are projected onto identical items. The application of VC can be extended to more diversified scenarios by our proposed schemes.
Among the major remaining challenges for single image super resolution (SISR) is the capacity to recover coherent images with global shapes and local details that conform to the human visual system. Recent generative adversarial network (GAN) based SISR methods have yielded overall realistic SR images; however, unpleasant textures accompanied by structural distortions persist in local regions. To target these issues, we introduce a gradient branch into the generator to preserve structural information by restoring high-resolution gradient maps during the SR process. In addition, we utilize a U-net based discriminator to consider both the whole image and the detailed per-pixel authenticity, which encourages the generator to maintain the overall coherence of the reconstructed images. Moreover, we study objective functions and add an LPIPS perceptual loss to generate more realistic and natural details. Experimental results show that our proposed method outperforms state-of-the-art perception-driven SR methods in perception index (PI), and obtains more geometrically consistent and visually pleasing textures in natural image restoration.
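The gradient branch described above operates on gradient maps of the image, i.e., the magnitude of local intensity changes, which encode edge structure. A minimal sketch of extracting such a map (the step-edge test image and this particular finite-difference formulation are illustrative assumptions, not the paper's network):

```python
import numpy as np

def gradient_map(img):
    # Magnitude of the spatial image gradient via central differences;
    # the SR gradient branch restores maps like this at high resolution.
    gy, gx = np.gradient(img.astype(float))  # along rows, then columns
    return np.sqrt(gx ** 2 + gy ** 2)

img = np.zeros((8, 8))
img[:, 4:] = 1.0           # a vertical step edge
gm = gradient_map(img)
print(gm.shape)  # (8, 8)
```

The map is near zero in flat regions and peaks along edges, which is why supervising it helps preserve structure that a purely pixel-wise loss can blur away.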
The paper proposes a method to effectively fuse multi-exposure inputs and generate high-quality high dynamic range (HDR) images from unpaired datasets. Deep learning-based HDR image generation methods rely heavily on paired datasets: the ground truth provides the information the network needs to produce HDR images without ghosting, and datasets without ground truth are hard to use for training deep neural networks. Recently, Generative Adversarial Networks (GAN) have demonstrated their potential for translating images from a source domain X to a target domain Y in the absence of paired examples. In this paper, we propose a GAN-based network, named UPHDR-GAN, for solving such problems while generating pleasing HDR results. The proposed method relaxes the paired-dataset constraint and learns the mapping from the LDR domain to the HDR domain. Although paired data are missing, UPHDR-GAN can properly handle the ghosting artifacts caused by moving objects or misalignments with the help of a modified GAN loss, an improved discriminator network, and a useful initialization phase. The proposed method preserves the details of important regions and improves the overall perceptual quality of the image. Qualitative and quantitative comparisons against other methods demonstrate the superiority of our method.
X. Chen (2020)
Traditional online map tiles, widely used on the Internet by services such as Google Maps and Baidu Maps, are rendered from vector data. Updating online map tiles in a timely way from vector data, whose generation is time-consuming, is a difficult mission. A shortcut is to generate map tiles directly from remote sensing images, which can be acquired promptly without vector data. However, this mission used to be challenging or even impossible. Inspired by image-to-image translation (img2img) techniques based on generative adversarial networks (GAN), we propose a semi-supervised Generation of styled map Tiles based on a Generative Adversarial Network (SMAPGAN) model to generate styled map tiles directly from remote sensing images. In this model, we designed a semi-supervised learning strategy to pre-train SMAPGAN on abundant unpaired samples and fine-tune it on the limited paired samples available in practice. We also designed an image gradient L1 loss and an image gradient structure loss to generate styled map tiles with global topological relationships and detailed edge curves of objects, which are important in cartography. Moreover, we propose the edge structural similarity index (ESSI) as a metric to evaluate the topological consistency between generated map tiles and ground truths. Experimental results show that SMAPGAN outperforms state-of-the-art (SOTA) works according to mean squared error, structural similarity index, and ESSI. SMAPGAN also won more approval than SOTA methods in a human perceptual test on the visual realism of cartography. Our work shows that SMAPGAN is potentially a new paradigm for producing styled map tiles. Our implementation of SMAPGAN is available at https://github.com/imcsq/SMAPGAN.