Compared with conventional images and video, light field images introduce a weight channel and a visual-consistency requirement on rendered views, both of which must be taken into account when compressing the pseudo-temporal sequence (PTS) created from light field images. In this paper, we propose a novel frame-level bit allocation framework for PTS coding. A joint model that measures weighted distortion and visual consistency, combined with an iterative encoding system, yields the optimal bit allocation for each frame by solving a convex optimization problem. Experimental results show that the proposed framework is effective in producing the desired distortion distribution based on the weights, and achieves up to 24.7% BD-rate reduction compared with the default rate control algorithm.
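The abstract does not give the rate-distortion model, but the flavor of such a convex frame-level allocation can be sketched with the classical assumption (not taken from the paper) that each frame's distortion follows D_i(R_i) = a_i · 2^(−2R_i); minimizing the weighted sum of distortions under a total-rate budget then has a closed-form "reverse water-filling" solution:

```python
import math

def allocate_bits(weights, a, r_total):
    """Closed-form weighted bit allocation under the (assumed) model
    D_i(R_i) = a_i * 2**(-2*R_i): minimize sum_i w_i * D_i(R_i)
    subject to sum_i R_i == r_total.

    Setting the weighted marginal distortions equal gives
    R_i = r_total/N + 0.5 * (log2(w_i*a_i) - mean_j log2(w_j*a_j)).
    Assumes the budget is large enough that no rate goes negative
    (a real allocator would add a clipping pass).
    """
    logs = [math.log2(w * ai) for w, ai in zip(weights, a)]
    mean_log = sum(logs) / len(logs)
    n = len(logs)
    return [r_total / n + 0.5 * (lg - mean_log) for lg in logs]

# Equal model constants, but the middle frame is weighted twice as
# heavily, so it receives extra rate at the others' expense.
rates = allocate_bits([1.0, 2.0, 1.0], [4.0, 4.0, 4.0], 12.0)
```

Doubling a frame's weight buys it exactly half a bit more than an equally weighted frame under this model, while the total stays on budget.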
The quality assessment of light field images presents new challenges to conventional compression methods, as the spatial quality is affected by the optical distortion of capturing devices, and the angular consistency affects the performance of dynamic rendering applications. In this paper, we propose a two-pass encoding system for pseudo-temporal-sequence-based light field image compression with a novel frame-level bit allocation framework that optimizes spatial quality and angular consistency simultaneously. Frame-level rate-distortion models are estimated during the first pass, and the second pass performs the actual encoding with optimized bit allocations given by a two-step convex program. The proposed framework supports various encoder configurations. Experimental results show that compared with the anchor HM 16.16 (HEVC reference software), the proposed two-pass encoding system on average achieves 11.2% to 11.9% BD-rate reductions for the all-intra configuration, 15.8% to 32.7% BD-rate reductions for the random-access configuration, and 12.1% to 15.7% BD-rate reductions for the low-delay configuration. The resulting bit errors are limited, and the total time cost is less than twice that of the one-pass anchor. Compared with our earlier low-delay-configuration-based method, the proposed system improves BD-rate reduction by 3.1% to 8.3%, reduces the bit errors by more than 60%, and achieves more than a 12x speedup.
Reversible data hiding in encrypted images (RDHEI) is receiving growing attention because it protects the content of the original image while the embedded data can be accurately extracted and the original image can be reconstructed losslessly. To make full use of the correlation between adjacent pixels, this paper proposes an RDHEI scheme based on pixel prediction and bit-plane compression. Firstly, to vacate room for data embedding, the prediction error of the original image is calculated and used for bit-plane rearrangement and compression. Then, the image after vacating room is encrypted by a stream cipher. Finally, the additional data is embedded in the vacated room by multi-LSB substitution. Experimental results show that the embedding capacity of the proposed method exceeds that of state-of-the-art methods.
As a technology that prevents disclosure of both the original image content and the additional information, reversible data hiding in encrypted images (RDHEI) has attracted wide attention from researchers, and further improving the performance of RDHEI methods has become a focus of research. To this end, this work proposes a high-capacity RDHEI method based on bit-plane compression of prediction errors. Firstly, to reserve room for embedding information, the image owner rearranges and compresses the bit planes of the prediction error. Next, the image after reserving room is encrypted with a secret key. Finally, the data hider embeds the additional information into the reserved room. This method makes full use of the correlation between adjacent pixels. Experimental results show that it achieves true reversibility and provides higher embedding capacity than state-of-the-art works.
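The multi-LSB substitution step mentioned in both RDHEI abstracts can be illustrated with a toy sketch (this is a generic illustration, not the papers' exact procedure): write k payload bits into the k least-significant bits of each pixel, and read them back from the marked pixels. In the actual schemes this is applied only to the room vacated by bit-plane compression, which is what preserves reversibility.

```python
def embed_lsb(pixels, bits, k=2):
    """Embed a bit string into the k least-significant bits of each
    pixel (multi-LSB substitution). Pixels beyond the payload are
    left untouched."""
    out, i = [], 0
    mask = (1 << k) - 1
    for p in pixels:
        if i < len(bits):
            chunk = bits[i:i + k].ljust(k, "0")  # pad the last chunk
            out.append((p & ~mask) | int(chunk, 2))
            i += k
        else:
            out.append(p)
    return out

def extract_lsb(pixels, n_bits, k=2):
    """Read back the first n_bits embedded bits from the marked pixels."""
    mask = (1 << k) - 1
    s = "".join(format(p & mask, "0{}b".format(k)) for p in pixels)
    return s[:n_bits]

marked = embed_lsb([200, 201, 202, 203], "101101", k=2)
recovered = extract_lsb(marked, 6, k=2)  # "101101"
```

Each pixel carries k bits, so the embedding capacity of the vacated region is simply k bits per pixel; the schemes' contribution lies in how much room the prediction-error bit-plane compression can vacate.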
Light field (LF) representations aim to provide photo-realistic, free-viewpoint viewing experiences. However, the most popular LF representations are images from multiple views. Multi-view image-based representations generally need to restrict the range or degrees of freedom of the viewing experience to what can be interpolated in the image domain, essentially because they lack explicit geometry information. We present a new surface light field (SLF) representation based on explicit geometry, and a method for SLF compression. First, we map the multi-view images of a scene onto a 3D geometric point cloud. The color of each point in the point cloud is a function of viewing direction known as a view map. We represent each view map efficiently in a B-Spline wavelet basis. This representation is capable of modeling diverse surface materials and complex lighting conditions in a highly scalable and adaptive manner. The coefficients of the B-Spline wavelet representation are then compressed spatially. To increase the spatial correlation and thus improve compression efficiency, we introduce a smoothing term to make the coefficients more similar across the 3D space. We compress the coefficients spatially using existing point cloud compression (PCC) methods. On the decoder side, the scene is rendered efficiently from any viewing direction by reconstructing the view map at each point. In contrast to multi-view image-based LF approaches, our method supports photo-realistic rendering of real-world scenes from arbitrary viewpoints, i.e., with unlimited six degrees of freedom (6DOF). In terms of rate and distortion, experimental results show that our method achieves superior performance with lower decoder complexity compared with a reference image-plus-geometry compression (IGC) scheme, indicating its potential in practical virtual and augmented reality applications.
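The intuition behind the wavelet view-map representation can be sketched with a simpler stand-in for the paper's B-Spline wavelet basis: a one-level orthonormal Haar transform (my substitution for illustration, not the paper's basis). A view map for a nearly diffuse surface point varies little with viewing direction, so its energy concentrates in the average band and the detail coefficients are small and cheap to code.

```python
import math

def haar_1d(signal):
    """One level of the orthonormal Haar wavelet transform of an
    even-length 1D signal: returns (averages, details). A smooth
    signal yields near-zero details, which is what makes the
    wavelet representation of view maps compact."""
    avgs = [(signal[i] + signal[i + 1]) / math.sqrt(2)
            for i in range(0, len(signal), 2)]
    dets = [(signal[i] - signal[i + 1]) / math.sqrt(2)
            for i in range(0, len(signal), 2)]
    return avgs, dets

# Hypothetical view map: the color of one point sampled at four
# viewing directions, nearly Lambertian (almost view-independent).
view_map = [0.80, 0.81, 0.80, 0.79]
avgs, dets = haar_1d(view_map)
# The detail band is close to zero and can be coarsely quantized.
```

A glossy point would instead show a sharp highlight in its view map, producing larger detail coefficients at that direction; the basis adapts per point, which is the "scalable and adaptive" behavior the abstract describes.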
The dynamic portfolio optimization problem in finance frequently requires learning policies that adhere to various constraints, driven by investor preferences and risk. We motivate this problem of finding an allocation policy within a sequential decision making framework and study the effects of: (a) using data collected under previously employed policies, which may be sub-optimal and constraint-violating, and (b) imposing desired constraints while computing near-optimal policies with this data. Our framework relies on solving a minimax objective, where one player evaluates policies via off-policy estimators, and the opponent uses an online learning strategy to control constraint violations. We extensively investigate various choices for off-policy estimation and their corresponding optimization sub-routines, and quantify their impact on computing constraint-aware allocation policies. Our study shows promising results for constructing such policies when back-tested on historical equities data, under various regimes of operation, dimensionality and constraints.
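The off-policy evaluation side of the minimax objective can be sketched with the simplest such estimator, ordinary importance sampling (one of several estimators the abstract's "various choices" could include; the single-step setting below is hypothetical and for illustration only): rewards logged under the behavior policy are reweighted by the probability ratio of the target policy to the behavior policy.

```python
def is_estimate(records, target_prob, behavior_prob):
    """Ordinary importance-sampling estimate of a target policy's value
    from (state, action, reward) records logged under a possibly
    sub-optimal behavior policy. The ratio target/behavior reweights
    each reward so the sample average is unbiased for the target
    policy, provided the behavior policy covers its actions."""
    total = 0.0
    for s, a, r in records:
        total += (target_prob(s, a) / behavior_prob(s, a)) * r
    return total / len(records)

# Behavior policy picks between two actions uniformly; the target
# policy always picks action 1, which pays reward 2.0 in this data.
data = [(0, 1, 2.0), (0, 0, 0.0), (0, 1, 2.0), (0, 0, 0.0)]
v = is_estimate(data,
                target_prob=lambda s, a: 1.0 if a == 1 else 0.0,
                behavior_prob=lambda s, a: 0.5)
```

In the minimax game described above, one player would use such an estimate to score candidate allocation policies, while the opposing online learner adjusts penalty weights on the constraint violations.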