
Gaze-Contingent Retinal Speckle Suppression for Perceptually-Matched Foveated Holographic Displays

Publication date: 2021
Language: English





Computer-generated holographic (CGH) displays show great potential and are emerging as next-generation displays for augmented and virtual reality and for automotive head-up displays. One of the critical problems hindering the wide adoption of such displays is speckle noise, inherent to holography, which compromises image quality by introducing perceptible artifacts. Although speckle suppression has been an active research area, previous work has not considered the perceptual characteristics of the Human Visual System (HVS), which receives the final displayed imagery. It is well established, however, that the sensitivity of the HVS is not uniform across the visual field, which has motivated gaze-contingent rendering schemes that maximize perceptual quality in various forms of computer-generated imagery. Inspired by this, we present the first method that reduces perceived speckle noise by integrating foveal and peripheral vision characteristics of the HVS, along with the retinal point spread function, into the phase hologram computation. Specifically, we introduce the anatomical and statistical retinal receptor distribution into our computational hologram optimization, which places a higher priority on reducing perceived foveal speckle noise while remaining adaptable to any individual's optical aberrations on the retina. Our method demonstrates superior perceptual quality on our emulated holographic display, and our evaluations with objective measurements and subjective studies show a significant reduction in perceived noise.
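As a rough illustration of the kind of gaze-weighted hologram optimization described above, the sketch below runs gradient descent on a phase-only hologram with an image-space error weighted more heavily near the gaze point. It is not the authors' implementation: the single Fourier-transform propagation, the Gaussian falloff standing in for the retinal receptor-density weighting, the omission of the retinal point spread function, and all constants (image size, falloff width, learning rate) are assumptions for illustration.

```python
# Hypothetical sketch of foveally weighted phase-hologram optimization.
# The Gaussian falloff and all constants are assumed, not taken from the paper.
import torch

H, W = 256, 256
target = torch.rand(H, W)                       # target amplitude image
gaze = (H // 2, W // 2)                         # current gaze position in pixels

# Eccentricity-dependent weighting: heaviest at the fovea, lighter in the periphery.
ys, xs = torch.meshgrid(torch.arange(H), torch.arange(W), indexing="ij")
ecc = torch.sqrt((ys - gaze[0]).float() ** 2 + (xs - gaze[1]).float() ** 2)
weight = torch.exp(-(ecc / 60.0) ** 2) + 0.05   # assumed falloff constants

phase = torch.zeros(H, W, requires_grad=True)   # phase-only hologram on the SLM
opt = torch.optim.Adam([phase], lr=0.05)

for _ in range(500):
    opt.zero_grad()
    field = torch.exp(1j * phase)                        # unit-amplitude SLM field
    recon = torch.fft.fftshift(torch.fft.fft2(field))    # single far-field propagation
    amp = recon.abs()
    amp = amp / amp.max()
    loss = (weight * (amp - target) ** 2).mean()         # foveally weighted image error
    loss.backward()
    opt.step()
```

Per the abstract, the full method replaces these stand-ins with the anatomical and statistical retinal receptor distribution and incorporates the individual's retinal point spread function into the optimization.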



Related research

Emergent in the field of head-mounted display design is a desire to leverage the limitations of the human visual system to reduce the computation, communication, and display workload in power- and form-factor-constrained systems. Fundamental to this reduced workload is the ability to match display resolution to the acuity of the human visual system, along with a resulting need to follow the gaze of the eye as it moves, a process referred to as foveation. A display that moves its content along with the eye may be called a Foveated Display, though this term is also commonly used to describe displays with non-uniform resolution that attempt to mimic human visual acuity. We therefore recommend a definition for the term Foveated Display that accepts both of these interpretations. Furthermore, we include a simplified model for human visual Acuity Distribution Functions (ADFs) at various levels of visual acuity across wide fields of view, and propose comparing such an ADF with the Resolution Distribution Function of a foveated display to evaluate its resolution at a particular gaze direction. We also provide a taxonomy that allows the field to meaningfully compare and contrast various aspects of foveated displays in a display- and optical-technology-agnostic manner.
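The proposed comparison can be pictured with a short sketch: sample an assumed ADF and a hypothetical two-zone foveated display's RDF over eccentricity, and flag where the display under-samples acuity. The inverse-linear acuity falloff, the e2 constant, and the two-zone display numbers below are illustrative assumptions, not the simplified model or any display from the paper.

```python
# Illustrative ADF-vs-RDF comparison; falloff form and constants are assumed.
import numpy as np

def adf_cpd(ecc_deg, peak_cpd=60.0, e2=2.3):
    """Approximate acuity (cycles/degree) as a function of eccentricity."""
    return peak_cpd * e2 / (e2 + np.abs(ecc_deg))

def rdf_cpd(ecc_deg, foveal_cpd=40.0, peripheral_cpd=10.0, inset_deg=15.0):
    """Hypothetical two-zone foveated display: high resolution inside the inset."""
    return np.where(np.abs(ecc_deg) <= inset_deg, foveal_cpd, peripheral_cpd)

ecc = np.linspace(-60, 60, 241)              # eccentricity relative to gaze, degrees
undersampled = rdf_cpd(ecc) < adf_cpd(ecc)   # where the display falls short of acuity
print("Display under-samples acuity from",
      ecc[undersampled].min(), "to", ecc[undersampled].max(), "deg")
```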
Virtual and augmented reality (VR/AR) displays strive to provide a resolution, framerate and field of view that matches the perceptual capabilities of the human visual system, all while constrained by limited compute budgets and transmission bandwidths of wearable computing systems. Foveated graphics techniques have emerged that could achieve these goals by exploiting the falloff of spatial acuity in the periphery of the visual field. However, considerably less attention has been given to temporal aspects of human vision, which also vary across the retina. This is in part due to limitations of current eccentricity-dependent models of the visual system. We introduce a new model, experimentally measuring and computationally fitting eccentricity-dependent critical flicker fusion thresholds jointly for both space and time. In this way, our model is unique in enabling the prediction of temporal information that is imperceptible for a certain spatial frequency, eccentricity, and range of luminance levels. We validate our model with an image quality user study, and use it to predict potential bandwidth savings 7x higher than those afforded by current spatial-only foveated models. As such, this work forms the enabling foundation for new temporally foveated graphics techniques.
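A toy stand-in for such a spatio-temporal threshold model is sketched below: a Ferry-Porter-style log-luminance term modulated by assumed eccentricity and spatial-frequency factors, used to test whether a refresh rate could produce visible flicker. None of the constants or functional forms are the model fitted in the paper; they only illustrate how such a predictor would be queried (luminance, eccentricity, and spatial frequency in, threshold in Hz out).

```python
# Toy eccentricity-dependent flicker-threshold check; all constants and the
# eccentricity/spatial-frequency modulation are assumptions for illustration.
import numpy as np

def cff_hz(luminance_cd_m2, ecc_deg, spatial_freq_cpd):
    base = 12.0 * np.log10(np.maximum(luminance_cd_m2, 1e-3)) + 35.0  # Ferry-Porter-like term
    ecc_gain = 1.0 + 0.3 * np.exp(-((ecc_deg - 30.0) / 25.0) ** 2)    # assumed mid-periphery peak
    sf_atten = np.exp(-spatial_freq_cpd / 8.0)                         # assumed: fine patterns flicker less visibly
    return base * ecc_gain * sf_atten

def temporal_change_visible(update_hz, luminance, ecc_deg, spatial_freq_cpd):
    """True if content refreshed at update_hz could produce visible flicker."""
    return update_hz < cff_hz(luminance, ecc_deg, spatial_freq_cpd)

print(temporal_change_visible(60, luminance=100, ecc_deg=10, spatial_freq_cpd=1))   # coarse pattern
print(temporal_change_visible(60, luminance=100, ecc_deg=10, spatial_freq_cpd=16))  # fine pattern
```

The second query shows the intended use: at high spatial frequencies the threshold drops, so temporal detail there can be discarded without a perceptible difference, which is where the predicted bandwidth savings come from.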
Video conferences play a vital role in our daily lives. However, many nonverbal cues are missing, including gaze and spatial information. We introduce LookAtChat, a web-based video conferencing system, which empowers remote users to identify gaze awareness and spatial relationships in small-group conversations. Leveraging real-time eye-tracking technology available with ordinary webcams, LookAtChat tracks each user's gaze direction, identifies who is looking at whom, and provides corresponding spatial cues. Informed by formative interviews with 5 participants who regularly use videoconferencing software, we explored the design space of gaze visualization in both 2D and 3D layouts. We further conducted an exploratory user study (N=20) to evaluate LookAtChat in three conditions: baseline layout, 2D directional layout, and 3D perspective layout. Our findings demonstrate how LookAtChat engages participants in small-group conversations, how gaze and spatial information improve conversation quality, and the potential benefits and challenges of incorporating gaze awareness visualization into existing videoconferencing systems.
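As an illustration of the who-is-looking-at-whom step (not LookAtChat's actual code), the sketch below maps a user's webcam-estimated gaze point, in normalized window coordinates, onto a hypothetical grid of participant tiles. The layout, participant names, and the gaze_target helper are all made up for the example.

```python
# Illustration only: infer who is looking at whom from a normalized gaze point,
# assuming every client shows the same grid of participant tiles.
from typing import Dict, Optional, Tuple

# Hypothetical layout: participant name -> (x0, y0, x1, y1) tile bounds in [0, 1].
LAYOUT: Dict[str, Tuple[float, float, float, float]] = {
    "alice": (0.0, 0.0, 0.5, 0.5),
    "bob":   (0.5, 0.0, 1.0, 0.5),
    "carol": (0.0, 0.5, 0.5, 1.0),
}

def gaze_target(viewer: str, gaze_xy: Tuple[float, float]) -> Optional[str]:
    """Return the participant whose tile contains the viewer's gaze point."""
    gx, gy = gaze_xy
    for name, (x0, y0, x1, y1) in LAYOUT.items():
        if name != viewer and x0 <= gx < x1 and y0 <= gy < y1:
            return name
    return None  # looking at self, off-screen, or at empty space

# Example: Alice's webcam-based eye tracker reports a normalized gaze point.
print(gaze_target("alice", (0.7, 0.2)))   # -> "bob"
```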
Novel display technologies aim at providing users with increasingly immersive experiences. In this regard, it is a long-sought dream to generate three-dimensional (3D) scenes with high resolution and continuous depth, which can be overlaid with the real world. Current attempts, however, fail to provide either truly 3D information or a large viewing area and angle, strongly limiting user immersion. Here, we report a proof-of-concept solution for this problem and realize a compact holographic 3D near-eye display with a large exit pupil of 10 mm x 8.66 mm. The 3D image is generated from a highly transparent Huygens metasurface hologram with a large (>10^8) pixel count and subwavelength pixels, fabricated via deep-ultraviolet immersion photolithography on 300 mm glass wafers. We experimentally demonstrate high-quality virtual 3D scenes with ~50k active data points and continuous depth ranging from 0.5 m to 2 m, overlaid with the real world and easily viewed by the naked eye. To do so, we introduce a new design method for holographic near-eye displays that, inherently, is able to provide both parallax and accommodation cues, fundamentally solving the vergence-accommodation conflict that exists in current commercial 3D displays.
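One common way to compute a hologram of a point-cloud scene that by construction carries both parallax and focus cues is to accumulate spherical wavefronts from each 3D point at the hologram plane. The sketch below shows that generic approach; it is not the metasurface design method of the paper, and the wavelength, pixel pitch, point positions, and phase-only encoding are assumptions.

```python
# Generic point-cloud hologram sketch; parameters are assumed for illustration.
import numpy as np

wavelength = 532e-9                      # assumed green laser, meters
pitch = 8e-6                             # assumed hologram pixel pitch, meters
N = 512                                  # hologram is N x N pixels

# Hypothetical scene: a few points at different depths (x, y, z in meters).
points = np.array([[0.0, 0.0, 0.5], [1e-3, 5e-4, 1.0], [-1e-3, -1e-3, 2.0]])
amplitudes = np.array([1.0, 0.8, 0.6])

coords = (np.arange(N) - N / 2) * pitch
X, Y = np.meshgrid(coords, coords)

field = np.zeros((N, N), dtype=complex)
k = 2 * np.pi / wavelength
for (px, py, pz), a in zip(points, amplitudes):
    r = np.sqrt((X - px) ** 2 + (Y - py) ** 2 + pz ** 2)   # distance from point to each pixel
    field += a * np.exp(1j * k * r) / r                     # spherical wave from the point

phase_hologram = np.angle(field)   # phase-only encoding of the accumulated field
```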
Rob Fergus, 2014
High dynamic-range imagers aim to block out or null light from a very bright primary star to make it possible to detect and measure far fainter companions; in real systems a small fraction of the primary light is scattered, diffracted, and unocculted. We introduce S4, a flexible data-driven model for the unocculted (and highly speckled) light in the P1640 spectroscopic coronagraph. The model uses Principal Components Analysis (PCA) to capture the spatial structure and wavelength dependence of the speckles but not the signal produced by any companion. Consequently, the residual typically includes the companion signal. The companion can thus be found by filtering this error signal with a fixed companion model. The approach is sensitive to companions that are of order a percent of the brightness of the speckles, or up to $10^{-7}$ times the brightness of the primary star. This outperforms existing methods by a factor of 2-3 and is close to the shot-noise physical limit.
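The core idea, PCA capturing the speckles but not the companion so that the companion survives in the residual, can be sketched in a few lines. This is not the S4 pipeline: the synthetic frames, the number of components, and the simple matched-filter score below are placeholders for illustration.

```python
# Minimal PCA-based speckle-subtraction sketch with synthetic data.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
n_ref, n_pix = 200, 64 * 64

# Reference frames dominated by correlated speckle structure (synthetic here).
basis = rng.normal(size=(5, n_pix))
refs = rng.normal(size=(n_ref, 5)) @ basis + 0.01 * rng.normal(size=(n_ref, n_pix))

# Science frame: speckles plus a faint planted companion.
companion = np.zeros(n_pix)
companion[2000] = 0.05
science = rng.normal(size=5) @ basis + companion + 0.01 * rng.normal(size=n_pix)

# Model the speckles with PCA and project them out of the science frame.
pca = PCA(n_components=5).fit(refs)
speckle_model = pca.inverse_transform(pca.transform(science[None]))[0]
residual = science - speckle_model

# Matched filter: correlate the residual with the (known) companion template.
score = residual @ companion / np.linalg.norm(companion)
print("companion signal in residual:", score)
```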

