
VXSlate: Combining Head Movement and Mobile Touch for Large Virtual Display Interaction

 Added by Karol Chlasta
 Publication date 2021
Research language: English





Virtual Reality (VR) headsets can open opportunities for users to accomplish complex tasks on large virtual displays, using compact setups. However, interacting with large virtual displays using existing interaction techniques might cause fatigue, especially for precise manipulations, due to the lack of physical surfaces. We designed VXSlate, an interaction technique that uses a large virtual display as an expansion of a tablet. VXSlate combines a user's head movement, as tracked by the VR headset, with touch interaction on the tablet. The user's head movements position both a virtual representation of the tablet and of the user's hand on the large virtual display. The user's multi-touch interactions perform finely tuned content manipulations.
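The two-level mapping described above, coarse placement by head pose and fine manipulation by touch, can be sketched as follows. All names, display dimensions, and scale factors are illustrative assumptions, not the authors' implementation:

```python
import math

# Assumed large virtual display size (pixels) and head-pointing range.
DISPLAY_W, DISPLAY_H = 3840, 2160
FOV_H, FOV_V = math.radians(90), math.radians(60)

def head_to_display(yaw, pitch):
    """Coarse placement: map head yaw/pitch (radians) to the position of
    the virtual tablet representation on the large virtual display."""
    x = (yaw / FOV_H + 0.5) * DISPLAY_W
    y = (0.5 - pitch / FOV_V) * DISPLAY_H
    return (min(max(x, 0.0), DISPLAY_W), min(max(y, 0.0), DISPLAY_H))

def touch_to_offset(u, v, tablet_w=0.24, tablet_h=0.16, px_per_m=2000):
    """Fine manipulation: a normalized touch point (u, v in [0, 1]) on the
    physical tablet becomes a small offset around the virtual tablet."""
    return ((u - 0.5) * tablet_w * px_per_m, (v - 0.5) * tablet_h * px_per_m)

def vxslate_cursor(yaw, pitch, u, v):
    """Combine head pose (coarse) and touch (fine) into one display point."""
    cx, cy = head_to_display(yaw, pitch)
    dx, dy = touch_to_offset(u, v)
    return (cx + dx, cy + dy)
```

With head level and a touch at the tablet's centre, the cursor lands at the display centre; small finger motions then translate into precise, small on-display motions without requiring precise head pointing.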





With the mounting global interest in optical see-through head-mounted displays (OST-HMDs) across medical, industrial, and entertainment settings, many systems with different capabilities are rapidly entering the market. Despite such variety, they all require display calibration to create a proper mixed reality environment. With the aid of tracking systems, it is possible to register rendered graphics with tracked objects in the real world. We propose a calibration procedure to properly align the coordinate system of the 3D virtual scene that the user sees with that of the tracker. Our method takes a black-box approach to HMD calibration, where the tracker's data is the input and the 3D coordinates of a virtual object in the observer's eye are the output; the objective is thus to find the 3D projection that aligns the virtual content with its real counterpart. In addition, a faster and more intuitive version of this calibration is introduced in which the user simultaneously aligns multiple points of a single virtual 3D object with its real counterpart; this reduces the number of required repetitions in the alignment from 20 to only 4, which leads to a much easier calibration task for the user. In this paper, both internal (HMD camera) and external tracking systems are studied. We perform experiments with Microsoft HoloLens, taking advantage of its self-localization and spatial mapping capabilities to eliminate the requirement for line of sight from the HMD to the object or external tracker. The experimental results indicate an accuracy of up to 4 mm in the average reprojection error based on two separate evaluation methods. We further perform experiments with the internal tracking on the Epson Moverio BT-300 to demonstrate that the method can provide similar results with other HMDs.
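The core of such a point-correspondence calibration is a least-squares alignment between paired points in tracker space and virtual space. The sketch below uses the standard Kabsch/Procrustes solver for a rigid transform as a generic stand-in; the paper's actual black-box solver and parameterization may differ:

```python
import numpy as np

def align_tracker_to_display(P_tracker, P_virtual):
    """Estimate the rigid transform (R, t) mapping tracker-space points to
    virtual-scene points from paired alignments, via the Kabsch algorithm.
    This is a generic least-squares sketch, not the authors' exact method."""
    P = np.asarray(P_tracker, dtype=float)
    Q = np.asarray(P_virtual, dtype=float)
    cp, cq = P.mean(axis=0), Q.mean(axis=0)       # centroids
    H = (P - cp).T @ (Q - cq)                     # cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))        # avoid reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = cq - R @ cp
    return R, t

def reprojection_error(P_tracker, P_virtual, R, t):
    """Mean distance between transformed tracker points and virtual points,
    analogous to the average reprojection error reported in the abstract."""
    P = np.asarray(P_tracker, dtype=float)
    Q = np.asarray(P_virtual, dtype=float)
    return float(np.mean(np.linalg.norm(P @ R.T + t - Q, axis=1)))
```

Aligning several points of a single 3D object in one user action, as the faster variant does, simply supplies multiple rows of `P_tracker`/`P_virtual` per alignment, which is why the repetition count can drop from 20 to 4.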
414 - Anton Andreev 2019
In this article, we explore the availability of head-mounted display (HMD) devices that can be coupled in a seamless way with P300-based brain-computer interfaces (BCI) using electroencephalography (EEG). The P300 is an event-related potential appearing about 300 ms after the onset of a stimulation. Recognizing this potential in the ongoing EEG requires knowing the exact onset of the stimuli. In other words, the stimulations presented in the HMD must be perfectly synced with the acquisition of the EEG signal. This is done through a process called tagging. The tagging must be performed in a reliable and robust way so as to guarantee the recognition of the P300 and thus the performance of the BCI. An HMD device should also be able to render images fast enough to allow accurate perception of the stimulations, without perturbing the acquisition of the EEG signal. In addition, an affordable HMD device is needed for both research and entertainment purposes. In this study, we selected and tested two HMD configurations.
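The tagging process described above can be sketched as a shared-clock scheme: the stimulus loop records a high-resolution timestamp at each onset and converts it into an EEG sample index. This is an assumed software-tagging design for illustration only; real systems often use hardware triggers, and the class and names below are hypothetical:

```python
import time

class EEGTagger:
    """Hypothetical software tagger: maps stimulus onsets to EEG sample
    indices using a clock shared with the acquisition process."""

    def __init__(self, sampling_rate_hz):
        self.fs = sampling_rate_hz
        self.t0 = None     # clock time of the first EEG sample
        self.tags = []     # list of (sample_index, label) pairs

    def start_acquisition(self):
        """Call when the first EEG sample is acquired."""
        self.t0 = time.perf_counter()

    def tag(self, label):
        """Call at the exact moment a stimulus is drawn on the HMD."""
        onset_s = time.perf_counter() - self.t0
        self.tags.append((round(onset_s * self.fs), label))
```

Any jitter between the true stimulus onset and the recorded timestamp directly blurs the P300 window, which is why the abstract stresses reliable, robust tagging and fast rendering on the HMD.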
71 - Gregoire Cattan 2020
A brain-computer interface (BCI) based on electroencephalography (EEG) is a promising technology for enhancing virtual reality (VR) applications, in particular for gaming. We focus on the so-called P300-BCI, a stable and accurate BCI paradigm relying on the recognition of a positive event-related potential (ERP) occurring in the EEG about 300 ms post-stimulation. We implemented a basic version of such a BCI displayed on an ordinary and affordable smartphone-based head-mounted VR device: that is, a mobile and passive VR system (with no electronic components beyond the smartphone). The mobile phone performed the stimuli presentation, EEG synchronization (tagging) and feedback display. We compared the ERPs and the accuracy of the BCI on the VR device with a traditional BCI running on a personal computer (PC). We also evaluated the impact of subjective factors on the accuracy. The study was within-subjects, with 21 participants and one session in each modality. No significant difference in BCI accuracy was found between the PC and VR systems, although the P200 ERP was significantly wider and larger in the VR system as compared to the PC system.
Recent research has proposed teleoperation of robotic and aerial vehicles using head motion tracked by a head-mounted display (HMD). First-person views of the vehicles are usually captured by onboard cameras and presented to users through the display panels of HMDs. This provides users with a direct, immersive and intuitive interface for viewing and control. However, a typically overlooked factor in such designs is the latency introduced by the vehicle dynamics. As head motion is coupled with visual updates in such applications, visual and control latency always exists between the issue of control commands by head movements and the visual feedback received at the completion of the attitude adjustment. This causes a discrepancy between the intended motion, the vestibular cue and the visual cue, and may potentially result in simulator sickness. No research has been conducted on how various levels of visual and control latency introduced by dynamics in robots or aerial vehicles affect users' performance and the degree of simulator sickness elicited. Thus, it is uncertain how much performance is degraded by latency and whether such designs are comfortable from the perspective of users. To address these issues, we studied a prototyped scenario of a head-motion-controlled quadcopter using an HMD. We present a virtual reality (VR) paradigm to systematically assess the effects of visual and control latency in simulated drone control scenarios.
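A VR paradigm like this typically injects a controllable latency between the head-motion command and its effect in the simulation. A minimal sketch of such injection, assuming a fixed per-tick delay (class and parameter names are illustrative, not from the paper):

```python
from collections import deque

class LatencyBuffer:
    """Delays control commands by a fixed number of simulation ticks,
    emulating the visual/control latency of the vehicle dynamics."""

    def __init__(self, delay_ticks, neutral=0.0):
        # Pre-fill with a neutral command so early outputs are defined.
        self.buf = deque([neutral] * delay_ticks)

    def step(self, command):
        """Enqueue this tick's head-motion command; return the command
        that reaches the simulated drone this tick."""
        self.buf.append(command)
        return self.buf.popleft()
```

Sweeping `delay_ticks` across conditions lets the experimenter measure how task performance and simulator-sickness ratings change with latency while everything else in the simulation stays fixed.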
We present PhyShare, a new haptic user interface based on actuated robots. Virtual reality has recently been gaining wide adoption, and effective haptic feedback in these scenarios can strongly support users' senses in bridging the virtual and physical worlds. Since participants do not directly observe these robotic proxies, we investigate the multiple mappings between physical robots and virtual proxies that can utilize the resources needed to provide a well-rounded VR experience. PhyShare bots can act either as directly touchable objects or invisible carriers of physical objects, depending on the scenario. They also support distributed collaboration, allowing remotely located VR collaborators to share the same physical feedback.
