Wide dynamic range (WDR) image tone mapping is in high demand in applications such as film production, security monitoring, and photography. It is especially crucial for mobile devices: most images today are taken on mobile phones, so the technology is highly sought after in the consumer market and essential for a good user experience. However, high-quality, high-performance WDR image tone mapping implementations are rarely found on the mobile end. In this paper, we introduce a high-performance WDR image tone mapping implementation for mobile devices. It leverages the tone mapping results of multiple receptive fields and calculates a suitable value for each pixel. The use of the integral image and integral histogram significantly reduces the required computation. Moreover, GPU parallel computation is used to increase processing speed. Experimental results indicate that our implementation can process a high-resolution WDR image within a second on mobile devices and produce appealing image quality.
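The integral image trick referred to above can be sketched as follows. This is a generic illustration, not the paper's code, assuming a single-channel luminance array: once the cumulative sums are built, the sum over any rectangular receptive field costs at most four lookups regardless of its size.

```python
import numpy as np

def integral_image(img):
    """Cumulative sums along both axes: ii[r, c] holds the sum of
    img[:r+1, :c+1], so any rectangular sum costs four lookups."""
    return img.cumsum(axis=0).cumsum(axis=1)

def region_sum(ii, r0, c0, r1, c1):
    """Sum of img[r0:r1, c0:c1] (half-open) from the integral image ii."""
    total = ii[r1 - 1, c1 - 1]
    if r0 > 0:
        total -= ii[r0 - 1, c1 - 1]
    if c0 > 0:
        total -= ii[r1 - 1, c0 - 1]
    if r0 > 0 and c0 > 0:
        total += ii[r0 - 1, c0 - 1]
    return total
```

An integral histogram applies the same cumulative-sum idea per histogram bin, so the local histogram of any window is likewise obtained in O(bins) lookups rather than by rescanning the window's pixels.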
In this paper, we present a novel tone mapping algorithm for displaying wide dynamic range (WDR) images on low dynamic range (LDR) devices. The proposed algorithm is mainly motivated by the logarithmic response and local adaptation features of the human visual system (HVS). Because the HVS perceives luminance differently at different adaptation levels, our algorithm uses functions built upon different scales to tone map pixels to different values. Functions of large scales maintain image brightness consistency, while functions of small scales preserve local detail and contrast. An efficient method based on local variance is proposed to fuse the values of different scales and to remove artifacts. The algorithm utilizes integral images and integral histograms to reduce computational complexity and processing time. Experimental results show that the proposed algorithm generates images with high brightness, good contrast, and appealing quality, surpassing many state-of-the-art tone mapping algorithms. This project is available at https://github.com/jieyang1987/ToneMapping-Based-on-Multi-scale-Histogram-Synthesis.
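A rough illustration of the multi-scale idea follows. This is a sketch, not the paper's actual operator: each pixel is mapped by a logarithmic response relative to a local adaptation level computed at several scales (via an integral-image box mean), and the scales are fused here with fixed weights, whereas the paper fuses them using local variance.

```python
import numpy as np

def box_mean(lum, radius):
    """Local mean over a (2*radius+1)^2 window via an integral image,
    O(1) per pixel; edges handled by replicate padding."""
    h, w = lum.shape
    r = radius
    p = np.pad(lum, r + 1, mode='edge')
    ii = p.cumsum(axis=0).cumsum(axis=1)
    s = (ii[2*r+1:2*r+1+h, 2*r+1:2*r+1+w]
         - ii[:h, 2*r+1:2*r+1+w]
         - ii[2*r+1:2*r+1+h, :w]
         + ii[:h, :w])
    return s / (2*r + 1) ** 2

def multiscale_tonemap(lum, scales=(2, 8), weights=(0.5, 0.5)):
    """Fuse logarithmic responses at several adaptation scales:
    small scales preserve local contrast, large scales preserve
    global brightness consistency. `scales`/`weights` are made-up
    illustration parameters."""
    out = np.zeros_like(lum, dtype=float)
    for s, w in zip(scales, weights):
        adapt = box_mean(lum, s) + 1e-6   # local adaptation level
        out += w * np.log1p(lum / adapt)  # logarithmic (HVS-like) response
    return out / out.max()                # normalize to [0, 1]
```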
The dynamic range of everyday scenes can exceed 120 dB, yet smartphone cameras and conventional digital cameras can capture only about 90 dB, which sometimes leads to loss of detail in the recorded image. Professional hardware and image fusion algorithms have been devised to capture wide dynamic range (WDR) images, but unfortunately existing devices cannot display them. Tone mapping (TM), which converts a WDR image into a low dynamic range (LDR) image, has thus become an essential step for exhibiting WDR images on ordinary screens. More and more researchers are focusing on this topic, devoting their efforts to designing an excellent tone mapping operator (TMO) that renders images as detailed as the perception the human eye would receive. It is therefore important to understand the history, development, and trends of TM before proposing a practicable TMO. In this paper, we present a comprehensive study of the most well-known TMOs, dividing them into traditional and machine learning-based categories.
Wide dynamic range (WDR) images contain more scene detail and contrast than common images. However, their pixel values require tone mapping in order to be displayed properly, and the details of WDR images can diminish during the tone mapping process. In this work, we address this problem by combining a novel reformulated Laplacian pyramid with deep learning. The reformulated Laplacian pyramid always decomposes a WDR image into two frequency bands, where the low-frequency band is global feature-oriented and the high-frequency band is local feature-oriented. The reformulation preserves the local features at their original resolution and condenses the global features into a low-resolution image. The generated frequency bands are reconstructed and fine-tuned to output the final tone mapped image, which can be displayed on screen with minimal detail and contrast loss. Experimental results demonstrate that the proposed method outperforms state-of-the-art WDR image tone mapping methods. The code is publicly available at https://github.com/linmc86/Deep-Reformulated-Laplacian-Tone-Mapping.
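A minimal sketch of the two-band idea, using average pooling as a stand-in for the paper's reformulated pyramid (the actual method uses Gaussian filtering and deep networks): the low band condenses global features into a low-resolution image, the high band keeps local detail at full resolution, and the two reconstruct the input exactly.

```python
import numpy as np

def decompose_two_bands(img, factor=4):
    """Split an image into a low-resolution global band and a
    full-resolution local (detail) band. Assumes both image
    dimensions are divisible by `factor`."""
    h, w = img.shape
    # low band: average-pool by `factor` (condenses global features)
    low = img.reshape(h // factor, factor, w // factor, factor).mean(axis=(1, 3))
    # high band: residual detail at the original resolution
    up = np.repeat(np.repeat(low, factor, axis=0), factor, axis=1)
    high = img - up
    return low, high
```

Summing the (nearest-neighbor upsampled) low band with the high band recovers the input, so no information is lost by the decomposition itself.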
Single-image HDR reconstruction, or inverse tone mapping (iTM), is a challenging task. In particular, recovering information in over-exposed regions is extremely difficult because details in such regions are almost completely lost. In this paper, we present a deep learning based iTM method that takes advantage of the feature extraction and mapping power of deep convolutional neural networks (CNNs) and uses a lightness prior to modulate the CNN, better exploiting observations in the areas surrounding over-exposed regions to enhance the quality of HDR image reconstruction. Specifically, we introduce a Hierarchical Synthesis Network (HiSN) for inferring an HDR image from an LDR input and a Lightness Adaptive Modulation Network (LAMN) to incorporate the lightness prior knowledge into the inference process. HiSN hierarchically synthesizes the high-brightness and low-brightness components of the HDR image, while LAMN uses a lightness adaptive mask that separates detail-less saturated bright pixels from well-exposed lower-light pixels, enabling HiSN to better infer the missing information, particularly in the difficult over-exposed, detail-less areas. We present experimental results demonstrating the effectiveness of the new technique based on quantitative measures and visual comparisons. In addition, we present ablation studies of HiSN and visualizations of the activation maps inside LAMN to help gain a deeper understanding of the internal workings of the new iTM algorithm and to explain why it achieves much improved performance over state-of-the-art algorithms.
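The role of a lightness adaptive mask can be illustrated with a simple soft threshold on normalized luminance. This is a hypothetical sketch only (LAMN learns its mask from data), and `thresh` and `softness` are made-up parameters for illustration.

```python
import numpy as np

def lightness_adaptive_mask(lum, thresh=0.95, softness=0.02):
    """Sigmoid around a saturation threshold: values near 1 flag
    detail-less saturated pixels, values near 0 flag well-exposed
    lower-light pixels. `lum` is luminance normalized to [0, 1]."""
    return 1.0 / (1.0 + np.exp(-(lum - thresh) / softness))
```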
Reconstructing an RGB image from RAW data obtained with a mobile device involves a number of image signal processing (ISP) tasks, such as demosaicing and denoising. Deep neural networks have shown promising results over hand-crafted ISP algorithms in solving these tasks separately, or even in replacing the whole reconstruction process with one model. Here, we propose PyNET-CA, an end-to-end mobile ISP deep learning algorithm for RAW to RGB reconstruction. The model enhances PyNET, a recently proposed state-of-the-art model for mobile ISP, improving its performance with channel attention and a subpixel reconstruction module. We demonstrate the performance of the proposed method with comparative experiments and results from the AIM 2020 learned smartphone ISP challenge. The source code of our implementation is available at https://github.com/egyptdj/skyb-aim2020-public
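Subpixel reconstruction rearranges channel depth into spatial resolution (the "pixel shuffle" operation). A NumPy sketch of the standard rearrangement follows; the actual module in PyNET-CA is a learned layer built around this operation.

```python
import numpy as np

def pixel_shuffle(x, r):
    """Subpixel reconstruction: rearrange a (C*r*r, H, W) tensor
    into (C, H*r, W*r), trading channel depth for resolution.
    Follows the usual convention: input channel c*r*r + i*r + j
    maps to output position (c, h*r + i, w*r + j)."""
    crr, h, w = x.shape
    c = crr // (r * r)
    x = x.reshape(c, r, r, h, w)
    x = x.transpose(0, 3, 1, 4, 2)   # (C, H, r, W, r)
    return x.reshape(c, h * r, w * r)
```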