
Augment Yourself: Mixed Reality Self-Augmentation Using Optical See-through Head-mounted Displays and Physical Mirrors

Posted by: Mathias Unberath
Published: 2020
Research field: Informatics Engineering
Paper language: English





Optical see-through head-mounted displays (OST HMDs) are one of the key technologies for merging virtual objects and physical scenes to provide an immersive mixed reality (MR) environment to the user. A fundamental limitation of HMDs is that the users themselves cannot be augmented conveniently: in a casual posture, only the distal upper extremities are within the field of view of the HMD. Consequently, most MR applications that are centered around the user, such as virtual dressing rooms or learning of body movements, cannot be realized with HMDs. In this paper, we propose a novel concept and prototype system that combines OST HMDs and physical mirrors to enable self-augmentation and provide an immersive MR environment centered around the user. Our system, to the best of our knowledge the first of its kind, estimates the user's pose in the virtual image generated by the mirror using an RGBD camera attached to the HMD, and anchors virtual objects to the reflection rather than to the user directly. We evaluate our system quantitatively with respect to calibration accuracy and infrared signal degradation effects due to the mirror, and show its potential in applications where large mirrors are already an integral part of the facility. In particular, we demonstrate its use for virtual fitting rooms, gaming applications, anatomy learning, and personal fitness. In contrast to competing devices such as LCD-equipped smart mirrors, the proposed system consists of only an HMD with an RGBD camera and thus does not require a prepared environment, making it very flexible and generic. In future work, we aim to investigate how the system can be optimally used for physical rehabilitation and personal training as a promising application.
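The core geometric step, anchoring virtual content to the reflection rather than to the user, can be illustrated with a short sketch. The following Python snippet is a minimal illustration, not the authors' implementation; the mirror-plane parameters, the numpy-based pose handling, and the example values are assumptions:

    import numpy as np

    def reflection_matrix(n, d):
        """4x4 homogeneous reflection across the plane {x : n.x = d}, |n| = 1."""
        n = np.asarray(n, dtype=float)
        n = n / np.linalg.norm(n)
        S = np.eye(4)
        S[:3, :3] = np.eye(3) - 2.0 * np.outer(n, n)  # x' = (I - 2nn^T)x + 2dn
        S[:3, 3] = 2.0 * d * n
        return S

    # Hypothetical example: mirror plane z = 2 m in the HMD camera frame,
    # and a body-joint pose estimated by the RGBD skeleton tracker.
    S = reflection_matrix([0.0, 0.0, 1.0], 2.0)
    user_pose = np.eye(4)            # 4x4 pose of a tracked body joint
    mirrored_pose = S @ user_pose    # pose of the reflection; anchor content here
    # Note: det(S[:3, :3]) = -1, so the mirrored frame is left-handed; a renderer
    # typically needs a handedness fix (e.g. flipping one axis) before use.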


Read also

Efficient motion intent communication is necessary for safe and collaborative work environments with collocated humans and robots. Humans efficiently communicate their motion intent to other humans through gestures, gaze, and social cues. However, robots often have difficulty efficiently communicating their motion intent to humans via these methods. Many existing methods for robot motion intent communication rely on 2D displays, which require the human to continually pause their work and check a visualization. We propose a mixed reality head-mounted display visualization of the proposed robot motion over the wearer's real-world view of the robot and its environment. To evaluate the effectiveness of this system against a 2D display visualization and against no visualization, we asked 32 participants to label different robot arm motions as either colliding or non-colliding with blocks on a table. We found a 16% increase in accuracy with a 62% decrease in the time it took to complete the task compared to the next best system. This demonstrates that a mixed-reality HMD allows a human to more quickly and accurately tell where the robot is going to move than the compared baselines.
Head gesture is a natural means of face-to-face communication between people, but the recognition of head gestures in the context of virtual reality, and the use of head gestures as an interface for interacting with virtual avatars and virtual environments, have rarely been investigated. In the current study, we present an approach for real-time head gesture recognition on head-mounted displays using Cascaded Hidden Markov Models. We conducted two experiments to evaluate our proposed approach. In experiment 1, we trained the Cascaded Hidden Markov Models and assessed the offline classification performance using collected head motion data. In experiment 2, we characterized the real-time performance of the approach by estimating the latency to recognize a head gesture with recorded real-time classification data. Our results show that the proposed approach is effective in recognizing head gestures. The method can be integrated into a virtual reality system as a head gesture interface for interacting with virtual worlds.
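To make the likelihood-based recognition step concrete, the sketch below trains one Gaussian HMM per gesture class and classifies a head-motion sequence by maximum log-likelihood. It is a simplified, non-cascaded stand-in for the paper's Cascaded Hidden Markov Models; the hmmlearn library and the per-frame (yaw, pitch, roll) feature choice are assumptions:

    import numpy as np
    from hmmlearn.hmm import GaussianHMM  # pip install hmmlearn

    def train_models(sequences_by_label, n_states=4):
        """Fit one HMM per gesture label. Each sequence is a (T, 3) array of
        per-frame head-orientation features (an assumed feature choice)."""
        models = {}
        for label, seqs in sequences_by_label.items():
            X = np.vstack(seqs)              # concatenated training sequences
            lengths = [len(s) for s in seqs] # sequence boundaries for hmmlearn
            m = GaussianHMM(n_components=n_states, covariance_type="diag", n_iter=20)
            m.fit(X, lengths)
            models[label] = m
        return models

    def classify(models, seq):
        """Return the label whose HMM assigns the highest log-likelihood."""
        return max(models, key=lambda label: models[label].score(seq))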
Purpose: Image guidance is crucial for the success of many interventions. Images are displayed on designated monitors that cannot be positioned optimally due to sterility and spatial constraints. This indirect visualization causes potential occlusion, hinders hand-eye coordination, and leads to increased procedure duration and surgeon load. Methods: We propose a virtual monitor system that displays medical images in a mixed reality visualization using optical see-through head-mounted displays. The system streams high-resolution medical images from any modality to the head-mounted display in real time, blended with the surgical site. It allows for mixed reality visualization of images in head-, world-, or body-anchored mode and can thus be adapted to specific procedural needs. Results: For typical image sizes, the proposed system exhibits an average end-to-end delay and refresh rate of 214 ± 30 ms and 41.4 ± 32.0 Hz, respectively. Conclusions: The proposed virtual monitor system is capable of real-time mixed reality visualization of medical images. In future work, we seek to conduct first pre-clinical studies to quantitatively assess the impact of the system on standard image-guided procedures.
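The head-, world-, and body-anchored modes can be understood as choosing a different parent frame for the virtual monitor each frame. The sketch below is a hypothetical illustration of that idea; the function, argument names, and offsets are invented, not taken from the paper:

    import numpy as np

    def monitor_pose(mode, head_pose, body_pose, offset=np.eye(4), world_pose=np.eye(4)):
        """All arguments are 4x4 homogeneous poses expressed in the world frame;
        `offset` places the monitor relative to its parent frame."""
        if mode == "world":   # monitor stays fixed in the room
            return world_pose
        if mode == "head":    # monitor follows the wearer's head at a fixed offset
            return head_pose @ offset
        if mode == "body":    # monitor follows a tracked body frame (e.g. the patient)
            return body_pose @ offset
        raise ValueError(f"unknown anchor mode: {mode}")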
With the mounting global interest for optical see-through head-mounted displays (OST-HMDs) across medical, industrial, and entertainment settings, many systems with different capabilities are rapidly entering the market. Despite such variety, they all require display calibration to create a proper mixed reality environment. With the aid of tracking systems, it is possible to register rendered graphics with tracked objects in the real world. We propose a calibration procedure to properly align the coordinate system of the 3D virtual scene that the user sees with that of the tracker. Our method takes a blackbox approach towards the HMD calibration, where the tracker's data is its input and the 3D coordinates of a virtual object in the observer's eye are the output; the objective is thus to find the 3D projection that aligns the virtual content with its real counterpart. In addition, a faster and more intuitive version of this calibration is introduced, in which the user simultaneously aligns multiple points of a single virtual 3D object with its real counterpart; this reduces the number of required repetitions in the alignment from 20 to only 4, which leads to a much easier calibration task for the user. In this paper, both internal (HMD camera) and external tracking systems are studied. We perform experiments with Microsoft HoloLens, taking advantage of its self-localization and spatial mapping capabilities to eliminate the requirement for line of sight from the HMD to the object or external tracker. The experimental results indicate an average reprojection error of up to 4 mm based on two separate evaluation methods. We further perform experiments with the internal tracking on the Epson Moverio BT-300 to demonstrate that the method can provide similar results with other HMDs.
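The correspondence-based alignment at the heart of such a calibration can be sketched with the classic least-squares rigid fit (a Kabsch/Umeyama-style solution). This is an illustrative simplification, not the paper's method: the blackbox formulation there outputs a full 3D projection, which may also absorb scale and display-specific terms.

    import numpy as np

    def rigid_align(P, Q):
        """Least-squares rigid transform (R, t) mapping points P onto Q.
        P, Q: (N, 3) corresponding points, N >= 3 (e.g. the 4 aligned points
        of the single-object calibration variant described above)."""
        cP, cQ = P.mean(axis=0), Q.mean(axis=0)
        H = (P - cP).T @ (Q - cQ)                  # cross-covariance of centered points
        U, _, Vt = np.linalg.svd(H)
        D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # avoid reflections
        R = Vt.T @ D @ U.T
        t = cQ - R @ cP
        return R, t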
Mobile virtual reality (VR) head-mounted displays (HMDs) have become popular among consumers in recent years. In this work, we demonstrate real-time egocentric hand gesture detection and localization on mobile HMDs. Our main contributions are: 1) a novel mixed-reality data collection tool to automatically annotate bounding boxes and gesture labels; 2) the largest-to-date egocentric hand gesture and bounding box dataset, with more than 400,000 annotated frames; 3) a neural network that runs in real time on modern mobile CPUs and achieves higher than 76% precision on gesture recognition across 8 classes.