
Movie Editing and Cognitive Event Segmentation in Virtual Reality Video

 Added by Ana Serrano
 Publication date 2018
Language: English





Traditional cinematography has relied for over a century on a well-established set of editing rules, called continuity editing, to create a sense of situational continuity. Despite massive changes in visual content across cuts, viewers in general experience no trouble perceiving the discontinuous flow of information as a coherent set of events. However, Virtual Reality (VR) movies are intrinsically different from traditional movies in that the viewer controls the camera orientation at all times. As a consequence, common editing techniques that rely on camera orientations, zooms, etc., cannot be used. In this paper we investigate key questions to understand how well traditional movie editing carries over to VR. To do so, we rely on recent cognition studies and on event segmentation theory, which states that our brains segment continuous actions into a series of discrete, meaningful events. We first replicate one of these studies to assess whether the predictions of the theory can be applied to VR. We next gather gaze data from viewers watching VR videos containing different edits with varying parameters, and provide the first systematic analysis of viewers' behavior and the perception of continuity in VR. From this analysis we make a series of relevant findings; for instance, our data suggests that predictions from cognitive event segmentation theory are useful guides for VR editing; that different types of edits are equally well understood in terms of continuity; and that spatial misalignments between regions of interest at the edit boundaries favor a more exploratory behavior even after viewers have fixated on a new region of interest. In addition, we propose a number of metrics to describe viewers' attentional behavior in VR. We believe the insights derived from our work can be useful as guidelines for VR content creation.
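The abstract mentions metrics that describe viewers' attentional behavior after an edit. As a hedged illustration (the function name, threshold, and dwell time below are assumptions, not the paper's actual definitions), one such metric could measure the time until gaze settles on the new region of interest (ROI) after a cut:

```python
# Hypothetical sketch of one attentional metric: "time to fixate" -- the time
# after an edit until the viewer's gaze first settles on the new region of
# interest (ROI). Thresholds and names are illustrative assumptions.

def angular_distance(a, b):
    """Smallest absolute difference between two yaw angles, in degrees."""
    d = abs(a - b) % 360.0
    return min(d, 360.0 - d)

def time_to_fixate(samples, roi_yaw, threshold_deg=15.0, dwell_s=0.3):
    """Return the first timestamp at which gaze enters the ROI cone and then
    stays inside it for at least dwell_s seconds, or None if it never does.

    samples: list of (timestamp_s, gaze_yaw_deg) sorted by time,
             with t = 0 at the edit boundary.
    """
    start = None
    for t, yaw in samples:
        if angular_distance(yaw, roi_yaw) <= threshold_deg:
            if start is None:
                start = t
            if t - start >= dwell_s:
                return start
        else:
            start = None  # gaze left the ROI; reset the dwell window
    return None
```

A larger value of this metric after a misaligned cut would be consistent with the more exploratory behavior the abstract reports.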




Traditional high-quality 3D graphics requires large volumes of fine-detailed scene data for rendering. This demand compromises computational efficiency and local storage resources, and becomes especially concerning for future wearable and portable virtual and augmented reality (VR/AR) displays. Recent approaches to combat this problem include remote rendering/streaming and neural representations of 3D assets. These approaches have redefined the traditional local storage-and-rendering pipeline through distributed computing or compression of large data. However, these methods typically suffer from high latency or low quality for practical visualization of large immersive virtual scenes, notably given the extra-high resolution and refresh-rate requirements of VR applications such as gaming and design. Tailored for future portable, low-storage, and energy-efficient VR platforms, we present the first gaze-contingent 3D neural representation and view synthesis method. We incorporate the human psychophysics of visual and stereo acuity into an egocentric neural representation of 3D scenery. Furthermore, we jointly optimize the latency/performance and visual quality, while mutually bridging human perception and neural scene synthesis, to achieve perceptually high-quality immersive interaction. Both objective analysis and a subjective study demonstrate the effectiveness of our approach in significantly reducing local storage volume and synthesis latency (up to 99% reduction in both data size and computational time), while simultaneously presenting high-fidelity rendering with perceptual quality identical to that of fully locally stored and rendered high-quality imagery.
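The gaze-contingent idea above rests on visual acuity falling off with eccentricity from the gaze point. A minimal sketch of that principle, assuming the common inverse-linear acuity model and an illustrative half-resolution constant (neither is the paper's actual model):

```python
# Sketch: allocate level of detail (LOD) by eccentricity from the gaze point,
# using an inverse-linear acuity falloff. Constants are illustrative only.
import math

def relative_acuity(eccentricity_deg, e0=2.3):
    """Relative visual acuity (1.0 at the fovea): acuity = e0 / (e0 + e),
    where e0 is a half-resolution eccentricity constant in degrees."""
    return e0 / (e0 + eccentricity_deg)

def lod_level(eccentricity_deg, max_level=4):
    """Map eccentricity to a discrete LOD level (0 = full detail).
    Each level halves linear resolution, so take log2 of 1/acuity."""
    acuity = relative_acuity(eccentricity_deg)
    return min(int(math.log2(1.0 / acuity)), max_level)
```

Under this toy model, the periphery can be represented at a fraction of foveal resolution, which is what drives the storage and latency savings the abstract reports.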
Rahul Arora, Karan Singh (2020)
Complex 3D curves can be created by drawing directly in mid-air in immersive environments (Augmented and Virtual Reality). Drawing mid-air strokes precisely on the surface of a 3D virtual object, however, is difficult, necessitating a projection of the mid-air stroke onto the user-intended surface curve. We present the first detailed investigation of the fundamental problem of 3D stroke projection in VR. An assessment of the design requirements of real-time drawing of curves on 3D objects in VR is followed by the definition and classification of multiple techniques for 3D stroke projection. We analyze the advantages and shortcomings of these approaches both theoretically and via practical pilot testing. We then formally evaluate the two most promising techniques, spraycan and mimicry, with 20 users in VR. The study shows a strong qualitative and quantitative user preference for our novel stroke mimicry projection algorithm. We further illustrate the effectiveness and utility of stroke mimicry for drawing complex 3D curves on surfaces for various artistic and functional design applications.
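A toy illustration of the projection problem described above: snap each mid-air stroke sample to the closest point on the target surface. This is a brute-force closest-point baseline over a point-sampled surface, assumed here for clarity; the paper's actual mimicry algorithm additionally preserves the stroke's shape, and a real system would query a mesh through a spatial acceleration structure.

```python
# Brute-force closest-point stroke projection onto a point-sampled surface.
# An illustrative baseline, not the paper's mimicry algorithm.
import math

def closest_point(p, surface_points):
    """Return the surface sample nearest to the 3D point p."""
    return min(surface_points, key=lambda q: math.dist(p, q))

def project_stroke(stroke, surface_points):
    """Project every mid-air stroke sample independently onto the surface."""
    return [closest_point(p, surface_points) for p in stroke]
```

Projecting samples independently like this can cause the projected curve to jump between surface regions, which is exactly the kind of artifact that motivates shape-preserving alternatives such as mimicry.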
Interaction in virtual reality (VR) environments is essential to achieve a pleasant and immersive experience. Most existing VR applications lack robust object grasping and manipulation, which are the cornerstones of interactive systems. Therefore, we propose a realistic, flexible and robust grasping system that enables rich and real-time interactions in virtual environments. It is visually realistic because it is completely user-controlled, flexible because it can be used for different hand configurations, and robust because it allows the manipulation of objects regardless of their geometry, i.e., the hand is automatically fitted to the object's shape. In order to validate our proposal, an exhaustive qualitative and quantitative performance analysis has been carried out. On the one hand, qualitative evaluation was used to assess abstract aspects such as hand movement realism, interaction realism and motor control. On the other hand, for the quantitative evaluation a novel error metric has been proposed to visually analyze the performed grips. This metric is based on the computation of the distance from the finger phalanges to the nearest contact point on the object surface. These contact points can be used for different application purposes, mainly in the field of robotics. In conclusion, the system evaluation reports similar performance between users with previous experience in virtual reality applications and inexperienced users, suggesting that the system is easy to learn.
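The error metric described above is distance-based, so its core computation can be sketched directly. The data layout and function names below are assumptions for illustration, not the paper's implementation:

```python
# Hedged sketch of a distance-based grip metric: for each finger phalanx,
# find the nearest contact point on the object surface, then report the
# per-phalanx distances and their mean. Names and layout are assumptions.
import math

def grip_error(phalanges, contact_points):
    """Per-phalanx distance to the nearest object contact point.

    phalanges, contact_points: lists of (x, y, z) tuples.
    Returns (per_phalanx_distances, mean_distance).
    """
    dists = [min(math.dist(p, c) for c in contact_points)
             for p in phalanges]
    return dists, sum(dists) / len(dists)
```

A smaller mean distance would indicate a visually tighter fit of the hand to the object surface.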
Providing a depth-rich Virtual Reality (VR) experience to users without causing discomfort remains a challenge with today's commercially available head-mounted displays (HMDs), which enforce strict measures on stereoscopic camera parameters for the sake of keeping visual discomfort to a minimum. However, these measures often lead to an unimpressive VR experience with a shallow feeling of depth. We propose the first method ready to be used with existing consumer HMDs for automated stereoscopic camera control in virtual environments (VEs). Using radial basis function interpolation and projection matrix manipulations, our method makes it possible to significantly enhance user experience in terms of overall perceived depth while keeping visual discomfort on a par with the default arrangement. In our implementation, we also introduce the first immersive interface for authoring a unique 3D stereoscopic cinematography for any VE to be experienced with consumer HMDs. We conducted a user study that demonstrates the benefits of our approach in terms of superior picture quality and perceived depth. We also investigated the effects of using depth of field (DoF) in combination with our approach and observed that the addition of our DoF implementation was perceived as similar at best, and otherwise as a degraded experience.
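The abstract names radial basis function (RBF) interpolation as the tool for smooth camera-parameter control. A minimal 1D sketch of that idea, assuming Gaussian basis functions, made-up keyframe values, and an illustrative shape parameter (none of which come from the paper):

```python
# Sketch: interpolate a stereoscopic camera parameter (e.g., interaxial
# separation in mm) as a smooth function of a scene attribute (e.g., focus
# distance in m) from a few authored keyframes, via Gaussian RBFs.
# Keyframe values and the eps shape parameter are illustrative assumptions.
import math

def rbf_interpolator(xs, ys, eps=1.0):
    """Fit exact Gaussian-RBF interpolation weights by Gaussian elimination,
    returning a callable f with f(xs[i]) == ys[i]."""
    n = len(xs)
    phi = lambda r: math.exp(-(eps * r) ** 2)
    # Augmented interpolation matrix [A | b], A[i][j] = phi(|xi - xj|).
    a = [[phi(abs(xs[i] - xs[j])) for j in range(n)] + [ys[i]]
         for i in range(n)]
    # Naive Gaussian elimination with partial pivoting.
    for col in range(n):
        pivot = max(range(col, n), key=lambda r: abs(a[r][col]))
        a[col], a[pivot] = a[pivot], a[col]
        for row in range(col + 1, n):
            f = a[row][col] / a[col][col]
            for k in range(col, n + 1):
                a[row][k] -= f * a[col][k]
    w = [0.0] * n
    for row in range(n - 1, -1, -1):
        w[row] = (a[row][n]
                  - sum(a[row][k] * w[k] for k in range(row + 1, n))) / a[row][row]
    return lambda x: sum(w[j] * phi(abs(x - xs[j])) for j in range(n))
```

In practice one would interpolate over the full set of stereo parameters and scene attributes; a library such as SciPy's RBF interpolators would replace the hand-rolled solver.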
Conventional planar video streaming is the most popular application in mobile systems, and the rapid growth of 360° video content and virtual reality (VR) devices is accelerating the adoption of VR video streaming. Unfortunately, video streaming consumes significant system energy due to the high power consumption of the system components (e.g., DRAM, display interfaces, and display panel) involved in this process. We propose BurstLink, a novel system-level technique that improves the energy efficiency of planar and VR video streaming. BurstLink is based on two key ideas. First, BurstLink directly transfers a decoded video frame from the host system to the display panel, bypassing the host DRAM. To this end, we extend the display panel with a double remote frame buffer (DRFB), instead of the DRAM's double frame buffer, so that the system can directly update the DRFB with a new frame while updating the panel's pixels with the current frame stored in the DRFB. Second, BurstLink transfers a complete decoded frame to the display panel in a single burst, using the maximum bandwidth of modern display interfaces. Unlike conventional systems, where the frame transfer rate is limited by the pixel-update throughput of the display panel, BurstLink can always take full advantage of the high bandwidth of modern display interfaces by decoupling the frame transfer from the pixel update, as enabled by the DRFB. This direct and burst frame transfer significantly reduces energy consumption in video display by reducing accesses to the host DRAM and increasing the system's residency at idle power states. We evaluate BurstLink using an analytical power model that we rigorously validate on a real modern mobile system. Our evaluation shows that BurstLink reduces system energy consumption for 4K planar and VR video streaming by 41% and 33%, respectively.
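The intuition behind the savings above can be captured in a toy per-frame energy model: a slow, DRAM-mediated transfer keeps the display path active for most of the frame period, while a fast burst transfer with no DRAM traffic leaves more time at idle power. Every number and name below is a made-up placeholder; the paper uses a rigorously validated power model instead.

```python
# Toy per-frame display-path energy model, in the spirit of the evaluation
# described above. All parameter values are illustrative assumptions.

def frame_energy(frame_bits, link_bw_bps, active_power_w, idle_power_w,
                 frame_period_s, dram_j_per_bit=0.0):
    """Energy for one frame: active transfer + idle remainder + DRAM traffic."""
    transfer_s = frame_bits / link_bw_bps
    active = active_power_w * transfer_s
    idle = idle_power_w * max(frame_period_s - transfer_s, 0.0)
    dram = dram_j_per_bit * frame_bits  # frame-buffer write + read traffic
    return active + idle + dram

# Conventional path: transfer throttled to the panel's pixel-update rate,
# plus DRAM frame-buffer traffic. BurstLink-style path: full link bandwidth,
# no host-DRAM traffic, so more of the frame period is spent idle.
conventional = frame_energy(2e8, 1.2e10, 1.0, 0.2, 1 / 60, dram_j_per_bit=1e-11)
burstlink    = frame_energy(2e8, 3.0e10, 1.0, 0.2, 1 / 60)
```

Even with these placeholder numbers, the burst-style path comes out cheaper per frame, which mirrors the direction (though not the magnitude) of the paper's reported savings.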
