Teaching that uses projected presentation media such as slideshows lacks support for dynamic content whose form and behaviors require live changes during a lecture. Recent software alternatives such as the Chalktalk platform allow the creation of interactive simulations in arbitrary sequences and combinations within presentations. These more dynamic solutions, however, do not optimize for face-to-face interactions: eye contact, gaze direction, and concurrent awareness of another person's movements together with the presented content. To explore the extent to which these face-to-face interactions may improve learning and engagement during a lecture, we propose a Mixed Reality (MR) platform that places Chalktalk's behaviors and simulations within a mirrored virtual-world environment designed for face-to-face, one-on-one interactions. We compare our system with projected Chalktalk to evaluate its relative effectiveness for learning, retention, and level of engagement.
In recent years, there has been increasing interest in the use of robotic technology at home. A number of service robots have appeared on the market, supporting customers in the execution of everyday tasks. Roughly at the same time, consumer-level robo
Virtual Reality (VR) enables users to collaborate while exploring scenarios not realizable in the physical world. We propose CollabVR, a distributed multi-user collaboration environment, to explore how digital content improves expression and understa
With the popularity of online access in virtual reality (VR) devices, it will become important to investigate exclusive and interactive CAPTCHA (Completely Automated Public Turing test to tell Computers and Humans Apart) designs for VR devices. In th
With the mounting global interest in optical see-through head-mounted displays (OST-HMDs) across medical, industrial, and entertainment settings, many systems with different capabilities are rapidly entering the market. Despite such variety, they all
Can faces acquired by low-cost depth sensors be useful to catch some characteristic details of the face? Typically the answer is no. However, new deep architectures can generate RGB images from data acquired in a different modality, such as depth dat