
Calculation reduction method for color computer-generated hologram using color space conversion

Published by: Dr. Tomoyoshi Shimobaba
Publication date: 2013
Paper language: English





We report a calculation reduction method for color computer-generated holograms (CGHs) using color space conversion. Color CGHs are generally calculated in RGB color space. In this paper, we calculate color CGHs in other color spaces, for example YCbCr. In YCbCr color space, an RGB image is converted to a luminance component (Y), a blue-difference chroma component (Cb), and a red-difference chroma component (Cr). The human eye readily perceives even small differences in the luminance component, but it is far less sensitive to differences in the chroma components. In this method, the luminance component is therefore sampled at the original resolution while the chroma components are down-sampled. The down-sampling allows us to accelerate the calculation of the color CGHs. We compute diffraction calculations from the components, and then convert the diffracted results from YCbCr color space back to RGB color space.
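As a rough illustration of the pipeline described above, the following Python sketch converts an RGB image to YCbCr, down-samples the chroma channels, propagates each component with a single-FFT Fresnel diffraction stand-in, and converts the results back to RGB. The BT.601 conversion matrices, the single shared wavelength, the pixel pitch, and the down-sampling factor are assumptions of this sketch, not values taken from the paper.

    import numpy as np

    # BT.601 RGB <-> YCbCr matrices (an assumption of this sketch; the
    # paper does not commit to a particular conversion standard here).
    RGB2YCBCR = np.array([[ 0.299,  0.587,  0.114],
                          [-0.169, -0.331,  0.500],
                          [ 0.500, -0.419, -0.081]])
    YCBCR2RGB = np.array([[1.0,  0.000,  1.402],
                          [1.0, -0.344, -0.714],
                          [1.0,  1.772,  0.000]])

    def fresnel_propagate(field, wavelength, z, pitch):
        """Single-FFT Fresnel propagation; a stand-in for whichever
        diffraction calculation the CGH pipeline actually uses."""
        n = field.shape[0]                      # square field assumed
        fx = np.fft.fftfreq(n, d=pitch)
        fxx, fyy = np.meshgrid(fx, fx)
        h = np.exp(-1j * np.pi * wavelength * z * (fxx**2 + fyy**2))
        return np.fft.ifft2(np.fft.fft2(field) * h)

    def color_cgh_ycbcr(rgb, z=0.1, pitch=8e-6, wavelength=532e-9, down=2):
        """Propagate Y at full resolution and Cb/Cr down-sampled by
        `down` (the image side length must be divisible by `down`)."""
        ycbcr = rgb @ RGB2YCBCR.T
        y  = ycbcr[..., 0]                      # luminance: full resolution
        cb = ycbcr[::down, ::down, 1]           # chroma: down-sampled
        cr = ycbcr[::down, ::down, 2]
        y_d  = fresnel_propagate(y,  wavelength, z, pitch)
        cb_d = fresnel_propagate(cb, wavelength, z, pitch * down)
        cr_d = fresnel_propagate(cr, wavelength, z, pitch * down)
        # Up-sample the chroma results back to full resolution
        # (nearest-neighbour) and convert back to RGB. Taking the real
        # part stands in for the hologram encoding step, omitted here.
        up = np.ones((down, down))
        out = np.stack([y_d.real,
                        np.kron(cb_d, up).real,
                        np.kron(cr_d, up).real], axis=-1)
        return out @ YCBCR2RGB.T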




Read also

Multiple color stripes have been employed for structured light-based rapid range imaging to increase the number of uniquely identifiable stripes. The use of multiple color stripes poses two problems: (1) object surface color may disturb the stripe color, and (2) the number of adjacent stripes required for identifying a stripe may not be maintained near surface discontinuities such as occluding boundaries. In this paper, we present methods to alleviate these problems. Log-gradient filters are employed to reduce the influence of object colors, and color stripes in two and three directions are used to increase the chance of identifying correct stripes near surface discontinuities. Experimental results demonstrate the effectiveness of our methods.
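The log-gradient idea can be sketched in a few lines: if the observed intensity is approximately the product of the projected stripe illumination and the surface reflectance, taking the logarithm turns that product into a sum, and a gradient across the stripes suppresses the slowly varying reflectance term. The single-channel NumPy toy below illustrates only that principle, not the paper's exact filter.

    import numpy as np

    def log_gradient(channel, axis=1, eps=1e-6):
        """Gradient of the log image: log(stripe * reflectance) =
        log(stripe) + log(reflectance), so differencing suppresses the
        slowly varying reflectance term and keeps stripe transitions."""
        return np.diff(np.log(channel.astype(float) + eps), axis=axis)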
The representation of consistent mixed reality (XR) environments requires adequate real and virtual illumination composition in real time. Estimating the lighting of a real scene is still a challenge. Due to the ill-posed nature of the problem, classical inverse-rendering techniques tackle it only for simple lighting setups, and those assumptions do not hold for the current state of the art in computer graphics and XR applications. While many recent works solve the problem using machine learning techniques that estimate the environment light and scene materials, most of them require scene geometry or other prior knowledge. This paper presents a CNN-based model to estimate complex lighting for mixed reality environments with no prior information about the scene. We model the environment illumination using a set of spherical harmonics (SH) coefficients, which can efficiently represent area lighting. We propose a new CNN architecture that takes an RGB image as input and estimates the environment lighting in real time. Unlike previous CNN-based lighting estimation methods, we use a highly optimized deep neural network architecture with a reduced number of parameters that can learn highly complex lighting scenarios from real-world high-dynamic-range (HDR) environment images. Our experiments show that the CNN architecture can predict the environment lighting with an average mean squared error (MSE) of 7.85e-04 when comparing SH lighting coefficients. We validate our model in a variety of mixed reality scenarios. Furthermore, we present qualitative results comparing relit real-world scenes.
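For context on what the network predicts, the sketch below evaluates diffuse irradiance from the nine 2nd-order SH lighting coefficients of one color channel, using the closed form of Ramamoorthi and Hanrahan (2001), and computes the coefficient-space MSE used as the error metric above. The coefficient ordering and the (9, 3) RGB layout are assumptions of this sketch.

    import numpy as np

    def sh_irradiance(L, n):
        """Diffuse irradiance at unit normal n from 2nd-order SH
        coefficients L, ordered (L00, L1-1, L10, L11, L2-2, L2-1,
        L20, L21, L22); Ramamoorthi & Hanrahan (2001) closed form."""
        x, y, z = n
        c1, c2, c3, c4, c5 = 0.429043, 0.511664, 0.743125, 0.886227, 0.247708
        return (c1 * L[8] * (x * x - y * y) + c3 * L[6] * z * z
                + c4 * L[0] - c5 * L[6]
                + 2 * c1 * (L[4] * x * y + L[7] * x * z + L[5] * y * z)
                + 2 * c2 * (L[3] * x + L[1] * y + L[2] * z))

    def sh_mse(pred, truth):
        """MSE between predicted and ground-truth SH coefficient
        arrays, e.g. shape (9, 3) for RGB environment lighting."""
        return float(np.mean((np.asarray(pred) - np.asarray(truth)) ** 2))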
Research interest in rapid structured-light imaging has grown for the modeling of moving objects, and a number of methods have been suggested for range capture in a single video frame. The imaging area of a 3D object using a single projector is restricted, since the structured light is projected only onto a limited area of the object surface. Employing additional projectors to broaden the imaging area is a challenging problem, since simultaneous projection of multiple patterns results in their superposition in the light-intersected areas, and recognizing the original patterns is by no means trivial. This paper presents a novel method of multi-projector color structured-light vision based on projector-camera triangulation. By analyzing the behavior of superposed-light colors in a chromaticity domain, we show that the original light colors cannot be properly extracted by conventional direct estimation. We disambiguate multiple projectors by multiplexing the orientations of the projector patterns so that the superposed patterns can be separated by explicit derivative computations. Experimental studies demonstrate the validity of the presented method. The proposed method increases the efficiency of range acquisition compared with conventional active stereo using multiple projectors.
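The orientation-multiplexing idea can be illustrated with a toy grayscale example: if one projector's stripes vary only along x and the other's only along y, a partial derivative in each direction isolates the stripe transitions of one pattern while the other pattern, constant in that direction, drops out. The paper itself operates on color patterns in a chromaticity domain; this sketch shows only the derivative principle.

    import numpy as np

    def separate_by_orientation(img):
        """Given a superposition of two stripe patterns with orthogonal
        orientations (one varying only along x, one only along y), the
        partial derivative in each direction isolates the transitions
        of one pattern, since the other is constant in that direction."""
        d_dx = np.diff(img, axis=1)   # edges of the x-varying pattern
        d_dy = np.diff(img, axis=0)   # edges of the y-varying pattern
        return d_dx, d_dy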
Asad Khan, Luo Jiang, Wei Li (2016)
Color transfer between images makes effective use of image statistics. We present a novel approach to local color transfer between images based on simple statistics and locally linear embedding. A sketching interface is proposed for quickly and easily specifying the color correspondences between target and source images. The user can specify the correspondences of local regions using scribbles, which transfers the target color to the source image more accurately while smoothly preserving the boundaries and produces more natural output. Our algorithm is not restricted to one-to-one image color transfer and can use more than one target image to transfer color to different regions of the source image. Moreover, our algorithm does not require the source and target images to share the same color style or size. We use sub-sampling to reduce the computational load. Compared with other approaches, our algorithm blends colors in the input data much more effectively and preserves the remaining color details of the source image. Various experimental results show that our approach captures the correspondences of local color regions in the source and target images, expresses the users' intent, and generates more realistic and natural visual results.
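The "simple statistics" component is in the spirit of Reinhard-style mean/variance matching. The global, per-channel sketch below is a stand-in only: the paper applies such statistics per user-scribbled local region and blends the result using locally linear embedding, neither of which is shown here.

    import numpy as np

    def stats_color_transfer(source, target):
        """Per-channel mean/std matching: normalize each source channel
        and re-scale it to the target channel's statistics. Assumes
        8-bit images; local regions and LLE blending are omitted."""
        out = np.empty_like(source, dtype=float)
        for c in range(source.shape[-1]):
            s = source[..., c].astype(float)
            t = target[..., c].astype(float)
            out[..., c] = (s - s.mean()) / (s.std() + 1e-6) * t.std() + t.mean()
        return np.clip(out, 0, 255).astype(source.dtype)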
We present a simple but effective technique for measuring angular variation in $R_V$ across the sky. We divide stars from the Pan-STARRS1 catalog into HEALPix pixels and determine the posterior distribution of reddening and $R_V$ for each pixel using two independent Monte Carlo methods. We find the two methods to be self-consistent in the limits where they are expected to perform similarly. We also find some agreement with high-precision photometric studies of $R_V$ in Perseus and Ophiuchus, as well as with a map of reddening near the Galactic plane based on stellar spectra from APOGEE. While current studies of $R_V$ are mostly limited to isolated clouds, we have developed a systematic method for comparing $R_V$ values for the majority of observable dust. This is a proof of concept for a more rigorous Galactic reddening map.
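The first step of this pipeline, assigning catalog stars to HEALPix pixels, can be sketched with the healpy package; the NSIDE resolution below is an arbitrary illustrative choice, and the per-pixel Monte Carlo posterior inference over reddening and $R_V$ is not shown.

    import numpy as np
    import healpy as hp  # assumes the healpy package is installed

    def stars_to_healpix(ra_deg, dec_deg, nside=64):
        """Map stellar sky coordinates (degrees) to HEALPix pixel
        indices; each pixel's star list then feeds its own
        reddening/R_V posterior estimate."""
        theta = np.radians(90.0 - np.asarray(dec_deg))  # colatitude
        phi = np.radians(np.asarray(ra_deg))
        return hp.ang2pix(nside, theta, phi)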