
not-so-BigGAN: Generating High-Fidelity Images on Small Compute with Wavelet-based Super-Resolution

Added by Akash Srivastava
Publication date: 2020
Language: English





State-of-the-art models for high-resolution image generation, such as BigGAN and VQ-VAE-2, require an enormous amount of compute resources and/or time (512 TPU-v3 cores) to train, putting them out of reach for the larger research community. On the other hand, GAN-based image super-resolution models, such as ESRGAN, can not only upscale images to high resolutions but are also efficient to train. In this paper, we present not-so-big-GAN (nsb-GAN), a simple yet cost-effective two-step training framework for deep generative models (DGMs) of high-dimensional natural images. First, we generate images in low-frequency bands by training a sampler in the wavelet domain. Then, we super-resolve these images from the wavelet domain back to pixel space with our novel wavelet super-resolution decoder network. Wavelet-based down-sampling preserves more structural information than pixel-based methods, leading to significantly better generative quality from the low-resolution sampler (e.g., at 64x64). Since the sampler and decoder can be trained in parallel and operate on much lower-dimensional spaces than end-to-end models, the training cost is substantially reduced. On ImageNet 512x512, our model achieves a Frechet Inception Distance (FID) of 10.59, beating the baseline BigGAN model, at half the compute (256 TPU-v3 cores).
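To make the down-sampling step concrete, here is a minimal sketch (not the authors' code) of extracting the low-frequency band with a multi-level 2-D discrete wavelet transform using the PyWavelets library; the choice of the 'haar' wavelet and the intensity rescaling are assumptions for illustration.

```python
# Wavelet-based down-sampling sketch: keep only the low-frequency
# (approximation) band of a multi-level 2-D DWT.
import numpy as np
import pywt

def wavelet_downsample(image: np.ndarray, levels: int = 3) -> np.ndarray:
    """Reduce a (H, W) image by 2**levels per side, keeping low-frequency structure."""
    coeffs = pywt.wavedec2(image, wavelet='haar', level=levels)
    approx = coeffs[0]            # LL band: the low-frequency content
    # Haar scales the 2-D approximation by 2 per level; rescale to the original range.
    return approx / (2.0 ** levels)

# Example: a 512x512 image becomes the 64x64 input space of the sampler.
img = np.random.rand(512, 512)
low = wavelet_downsample(img, levels=3)
print(low.shape)  # (64, 64)
```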



Related Research

High-resolution Digital Elevation Models (DEMs) are an important requirement for many applications such as modelling water flow, landslides, and avalanches. Yet publicly available DEMs have low resolution for most parts of the world. Despite tremendous success on the image super-resolution task using deep learning, very few works have applied these powerful systems to DEMs to generate high-resolution DEMs (HRDEMs). Motivated by feedback neural networks, we propose a novel neural network architecture that learns to add high-frequency details iteratively to a low-resolution DEM, turning it into a high-resolution DEM without compromising its fidelity. Our experiments confirm that, without any additional modality such as aerial (RGB) images, our network DSRFB achieves RMSEs of 0.59 to 1.27 across four different datasets.
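As an illustration of the feedback idea (not the DSRFB architecture itself), the following PyTorch sketch applies a shared refinement block several times, each pass adding a predicted high-frequency residual to the current DEM estimate; all layer sizes and the step count are invented for illustration.

```python
# Iterative feedback refinement sketch: a shared block repeatedly adds
# high-frequency detail to an upsampled low-resolution DEM.
import torch
import torch.nn as nn
import torch.nn.functional as F

class FeedbackSR(nn.Module):
    def __init__(self, channels: int = 64, steps: int = 4, scale: int = 4):
        super().__init__()
        self.steps, self.scale = steps, scale
        self.embed = nn.Conv2d(1, channels, 3, padding=1)
        self.refine = nn.Sequential(              # shared (feedback) block
            nn.Conv2d(channels, channels, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, 3, padding=1), nn.ReLU(inplace=True),
        )
        self.to_residual = nn.Conv2d(channels, 1, 3, padding=1)

    def forward(self, lr_dem: torch.Tensor) -> torch.Tensor:
        # Start from a bicubic upsample; iteratively add predicted detail.
        sr = F.interpolate(lr_dem, scale_factor=self.scale, mode='bicubic',
                           align_corners=False)
        state = torch.zeros_like(self.embed(sr))
        for _ in range(self.steps):
            state = self.refine(self.embed(sr) + state)   # feedback connection
            sr = sr + self.to_residual(state)             # add high-freq detail
        return sr

out = FeedbackSR()(torch.randn(1, 1, 32, 32))  # -> (1, 1, 128, 128)
```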
We explore the use of Vector Quantized Variational AutoEncoder (VQ-VAE) models for large-scale image generation. To this end, we scale and enhance the autoregressive priors used in VQ-VAE to generate synthetic samples of much higher coherence and fidelity than previously possible. We use simple feed-forward encoder and decoder networks, making our model an attractive candidate for applications where encoding and/or decoding speed is critical. Additionally, VQ-VAE requires sampling an autoregressive model only in the compressed latent space, which is an order of magnitude faster than sampling in pixel space, especially for large images. We demonstrate that a multi-scale hierarchical organization of VQ-VAE, augmented with powerful priors over the latent codes, is able to generate samples with quality that rivals that of state-of-the-art Generative Adversarial Networks on multifaceted datasets such as ImageNet, while not suffering from GANs' known shortcomings such as mode collapse and lack of diversity.
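The core of any VQ-VAE is the quantization step that snaps encoder outputs to their nearest codebook entries. Below is a minimal PyTorch sketch of that step with an assumed codebook size and dimensionality; the commitment loss and the hierarchical priors described above are omitted.

```python
# Vector quantization sketch: nearest-codebook lookup with a
# straight-through estimator so gradients flow back to the encoder.
import torch
import torch.nn as nn

class VectorQuantizer(nn.Module):
    def __init__(self, num_codes: int = 512, dim: int = 64):
        super().__init__()
        self.codebook = nn.Embedding(num_codes, dim)
        nn.init.uniform_(self.codebook.weight, -1.0 / num_codes, 1.0 / num_codes)

    def forward(self, z: torch.Tensor):
        # z: (B, dim, H, W) -> flatten to (B*H*W, dim) for the nearest-code search
        b, d, h, w = z.shape
        flat = z.permute(0, 2, 3, 1).reshape(-1, d)
        dists = torch.cdist(flat, self.codebook.weight)       # pairwise L2
        codes = dists.argmin(dim=1)                           # discrete latents
        q = self.codebook(codes).view(b, h, w, d).permute(0, 3, 1, 2)
        q = z + (q - z).detach()   # straight-through: copy gradients to encoder
        return q, codes.view(b, h, w)

quantized, indices = VectorQuantizer()(torch.randn(2, 64, 8, 8))
```

Sampling the autoregressive prior then happens over the small `indices` grid rather than the pixel grid, which is where the order-of-magnitude speedup comes from.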
Fluorescence microscopy has enabled dramatic developments in modern biology by visualizing biological organisms at micrometer-scale resolution. However, due to the diffraction limit, sub-micron/nanometer features are difficult to resolve. While various super-resolution techniques have been developed to achieve nanometer-scale resolution, they often require either an expensive optical setup or specialized fluorophores. In recent years, deep learning has shown the potential to reduce the technical barrier and obtain super-resolution from diffraction-limited images. For accurate results, conventional deep learning techniques require thousands of images as a training dataset. Obtaining large datasets from biological samples is often not feasible due to photobleaching of fluorophores, phototoxicity, and dynamic processes occurring within the organism. Therefore, achieving deep learning-based super-resolution with small datasets is challenging. We address this limitation with a new convolutional neural network-based approach that is successfully trained with small datasets and achieves super-resolution images. To demonstrate the technique, we captured 750 images in total from 15 different fields of view (FOVs) as the training dataset. In each FOV, a single target image is generated using the super-resolution radial fluctuations (SRRF) method. As expected, this small dataset failed to produce a usable model with a traditional super-resolution architecture. However, using the new approach, a network can be trained to achieve super-resolution images from this small dataset. This deep learning model can be applied to other biomedical imaging modalities, such as MRI and X-ray imaging, where obtaining large training datasets is challenging.
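The paper's exact architecture is not described here, but one common way to stretch a 750-image dataset further is aggressive patch extraction with flip/rotation augmentation. The sketch below assumes the SRRF target is reconstructed on the same pixel grid as the input; patch size and counts are illustrative.

```python
# Small-dataset augmentation sketch: many random patches per field of view,
# each with a random 90-degree rotation and horizontal flip.
import numpy as np

def augmented_patches(image, target, patch=64, n_patches=100, rng=None):
    """Yield (input, target) patch pairs with random flips and rotations."""
    rng = rng or np.random.default_rng()
    h, w = image.shape
    for _ in range(n_patches):
        y = rng.integers(0, h - patch)
        x = rng.integers(0, w - patch)
        a = image[y:y+patch, x:x+patch]
        b = target[y:y+patch, x:x+patch]   # same crop from the SRRF target
        k = rng.integers(0, 4)             # random 90-degree rotation
        a, b = np.rot90(a, k), np.rot90(b, k)
        if rng.integers(0, 2):             # random horizontal flip
            a, b = np.fliplr(a), np.fliplr(b)
        yield a.copy(), b.copy()
```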
Change detection, which aims to distinguish surface changes based on bi-temporal images, plays a vital role in ecological protection and urban planning. Since high-resolution (HR) images typically cannot be acquired continuously over time, bi-temporal images with different resolutions are often adopted for change detection in practical applications. Traditional subpixel-based methods for change detection using images with different resolutions may lead to substantial error accumulation when HR images are employed, because of intraclass heterogeneity and interclass similarity. Therefore, it is necessary to develop a novel method for change detection using images with different resolutions that is more suitable for HR images. To this end, we propose a super-resolution-based change detection network (SRCDNet) with a stacked attention module. SRCDNet employs a super-resolution (SR) module containing a generator and a discriminator to directly learn SR images through adversarial learning and overcome the resolution difference between bi-temporal images. To enhance the useful information in multi-scale features, a stacked attention module consisting of five convolutional block attention modules (CBAMs) is integrated into the feature extractor. The final change map is obtained through a metric learning-based change decision module, wherein a distance map between bi-temporal features is calculated. The experimental results demonstrate the superiority of the proposed method, which not only outperforms all baselines, with the highest F1 scores of 87.40% on the building change detection dataset and 92.94% on the change detection dataset, but also obtains the best accuracies in experiments performed with images having 4x and 8x resolution differences. The source code of SRCDNet will be available at https://github.com/liumency/SRCDNet.
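For reference, one CBAM unit (the block stacked five times in SRCDNet's feature extractor) applies channel attention followed by spatial attention. A compact PyTorch sketch using the commonly cited defaults (reduction ratio 16, 7x7 spatial kernel), which are assumptions here:

```python
# CBAM sketch: channel attention (pool over space, weight channels)
# followed by spatial attention (pool over channels, weight locations).
import torch
import torch.nn as nn

class CBAM(nn.Module):
    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        self.mlp = nn.Sequential(                       # shared channel MLP
            nn.Linear(channels, channels // reduction), nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
        )
        self.spatial = nn.Conv2d(2, 1, kernel_size=7, padding=3)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, _, _ = x.shape
        # Channel attention from average- and max-pooled descriptors.
        avg = self.mlp(x.mean(dim=(2, 3)))
        mx = self.mlp(x.amax(dim=(2, 3)))
        x = x * torch.sigmoid(avg + mx).view(b, c, 1, 1)
        # Spatial attention from channel-wise mean and max maps.
        s = torch.cat([x.mean(dim=1, keepdim=True),
                       x.amax(dim=1, keepdim=True)], dim=1)
        return x * torch.sigmoid(self.spatial(s))

feat = CBAM(64)(torch.randn(1, 64, 32, 32))
```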
Kai Sun, 2021
Background. Digital pathology has aroused widespread interest in modern pathology. The key to digitalization is scanning the whole slide image (WSI) at high magnification. The larger the magnification, the richer the detail the WSI provides, but the scanning time is longer and the resulting file is larger. Methods. We design a strategy to scan slides at low resolution (5X), and a super-resolution method is proposed to restore image details at diagnosis time. The method is based on a multi-scale generative adversarial network, which sequentially generates three high-resolution images at 10X, 20X, and 40X. Results. The peak signal-to-noise ratios of the generated 10X, 20X, and 40X images are 24.16, 22.27, and 20.44, and the structural similarity indices are 0.845, 0.680, and 0.512, which are better than those of other super-resolution networks. The visual scoring averages and standard deviations from three pathologists are 3.63 ± 0.52, 3.70 ± 0.57, and 3.74 ± 0.56, and the p-value of the analysis of variance is 0.37, indicating that the generated images include sufficient information for diagnosis. The average value of the Kappa test is 0.99, meaning the diagnosis from generated images is highly consistent with that from the real images. Conclusion. The proposed method can generate high-quality 10X, 20X, and 40X images from 5X images at the same time, reducing the time and storage costs of digitalization to as little as 1/64 of the previous costs (a 5X scan has 1/8 the linear resolution of a 40X scan, hence 1/64 the pixels). The proposed method provides a better alternative for low-cost storage and faster image sharing in digital pathology. Keywords. Digital pathology; super-resolution; low-resolution scanning; low cost
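The sequential-generation idea can be sketched as a chain of three 2x super-resolution generators taking a 5X tile to 10X, 20X, and 40X. The toy generator below uses sub-pixel convolution and omits the paper's multi-scale GAN losses and discriminators; all layer sizes are assumptions.

```python
# Cascaded 2x generation sketch: each stage doubles resolution,
# yielding 10X, 20X, and 40X outputs from a 5X input in sequence.
import torch
import torch.nn as nn

def sr2x_generator(channels: int = 32) -> nn.Module:
    """A toy 2x upscaling generator built on sub-pixel convolution."""
    return nn.Sequential(
        nn.Conv2d(3, channels, 3, padding=1), nn.ReLU(inplace=True),
        nn.Conv2d(channels, 3 * 4, 3, padding=1),
        nn.PixelShuffle(2),                 # (B, 12, H, W) -> (B, 3, 2H, 2W)
    )

stages = nn.ModuleList([sr2x_generator() for _ in range(3)])

scan_5x = torch.randn(1, 3, 256, 256)       # a 5X tile
outputs = {}
x = scan_5x
for name, g in zip(["10X", "20X", "40X"], stages):
    x = g(x)
    outputs[name] = x                       # 512, 1024, 2048 pixels per side
```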