Regular K-10 curricula often lack affordable technology for teaching the prescribed material interactively while building effective analytical skills. In this paper, we present PlutoAR, a paper-based augmented reality interpreter that is scalable, affordable, portable, and usable as a skill-building platform for kids. PlutoAR overcomes conventional, non-interactive ways of teaching by incorporating augmented reality (AR) through an interactive toolkit, giving students the best of both worlds. Students cut out paper tiles and place them one by one on a larger paper surface called the Launchpad; the PlutoAR mobile application, which runs on any Android device with a camera, uses augmented reality to display each step of the program, much like an interpreter. PlutoAR ships with inbuilt AR experiences such as stories, maze solving using conditional loops, simple elementary mathematics, and the intuition of gravity.
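The abstract does not include implementation details, so the following is only a minimal sketch, written in Python rather than the Android code PlutoAR actually runs, of how a tile-by-tile interpreter of the kind described might execute a program one step at a time. The tile names, the grid maze representation, and the helper functions are hypothetical.

```python
# Hypothetical sketch (not the authors' code): a tile-by-tile interpreter in the
# spirit of PlutoAR. It executes one paper tile at a time and yields the state
# after each step, which is where a real app would render an AR overlay.
# Tile names and the grid/maze representation are illustrative assumptions.

def wall_ahead(state):
    """True if the cell directly in front of the current position is a wall."""
    x, y = state["position"]
    dx, dy = state["direction"]
    return (x + dx, y + dy) in state["walls"]

def step_forward(state):
    x, y = state["position"]
    dx, dy = state["direction"]
    state["position"] = (x + dx, y + dy)

def interpret(tiles, state):
    """Yield (tile, state snapshot) after each executed tile."""
    for tile in tiles:
        if tile == "FORWARD":
            step_forward(state)
        elif tile == "TURN_LEFT":
            dx, dy = state["direction"]
            state["direction"] = (-dy, dx)      # rotate 90 degrees counter-clockwise
        elif tile == "REPEAT_FORWARD_UNTIL_WALL":
            while not wall_ahead(state):        # conditional loop, as in maze solving
                step_forward(state)
        yield tile, dict(state)

# Example: a corridor with a wall at (3, 0); the loop tile stops just before it.
program = ["FORWARD", "REPEAT_FORWARD_UNTIL_WALL"]
start = {"position": (0, 0), "direction": (1, 0), "walls": {(3, 0)}}
for tile, snapshot in interpret(program, start):
    print(tile, snapshot["position"])   # FORWARD at (1, 0), then the loop stops at (2, 0)
```

The point of yielding after every tile is that each intermediate state can be shown to the student as one AR frame, matching the interpreter-like behaviour the abstract describes.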
Augmented Reality (AR) bridges the gap between the physical and virtual worlds. By overlaying graphics on natural environments, users can immerse themselves in a tailored environment. This offers great benefits to mobile tourism, where points of interest (POIs) can be annotated on a smartphone screen. While a variety of apps currently exist, usability issues can discourage users from embracing AR: interfaces can become cluttered with icons, and POI occlusion poses further challenges. In this paper, we use user-centred design (UCD) to develop an AR tourism app. We solicit requirements through a synthesis of domain analysis, tourist observation, and semi-structured interviews. Whereas previous user-centred work has produced mock-ups, we iteratively develop a full Android app, including overhead maps and route navigation in addition to a detailed AR browser. The final product is evaluated by 20 users, who participate in a tourism task in a UK city. Users regard the system as usable and intuitive, and suggest the addition of further customisation. We finish by critically analysing the challenges of a user-centred methodology.
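The paper describes the app rather than its code. As a hedged illustration of the core computation an AR browser of this kind typically performs, the sketch below places a POI icon horizontally on the camera view from the user's GPS fix, the device's compass heading, and the camera's field of view; the function names, parameters, and example coordinates are assumptions, not the authors' implementation.

```python
import math

# Illustrative sketch (not the paper's code): decide where a point of interest
# (POI) should appear horizontally in the AR camera view, given the user's GPS
# fix, the device's compass heading, and the camera's horizontal field of view.
# Function names, parameters, and the example coordinates are assumptions.

def bearing_deg(lat1, lon1, lat2, lon2):
    """Initial great-circle bearing from (lat1, lon1) to (lat2, lon2), in degrees."""
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dlon = math.radians(lon2 - lon1)
    y = math.sin(dlon) * math.cos(phi2)
    x = (math.cos(phi1) * math.sin(phi2)
         - math.sin(phi1) * math.cos(phi2) * math.cos(dlon))
    return math.degrees(math.atan2(y, x)) % 360.0

def poi_screen_x(user, poi, heading_deg, fov_deg=60.0, screen_w=1080):
    """Return the x pixel at which to draw the POI icon, or None if off-screen."""
    rel = (bearing_deg(*user, *poi) - heading_deg + 540.0) % 360.0 - 180.0
    if abs(rel) > fov_deg / 2:
        return None                     # POI is behind the user or outside the view
    return int((rel / fov_deg + 0.5) * screen_w)

# Example with made-up coordinates: user facing due north, POI slightly north-east.
print(poi_screen_x((51.3811, -2.3590), (51.3840, -2.3580), heading_deg=0.0))
# Prints an x position right of centre on a 1080-px-wide view.
```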
With the development of advanced communication technology, connected vehicles have become increasingly popular in our transportation systems; they can conduct cooperative maneuvers with each other, as well as with road entities, through vehicle-to-everything communication. Much research interest has been drawn to other building blocks of a connected vehicle system, such as communication, planning, and control. However, fewer studies have focused on human-machine cooperation and the interface, namely how to visualize guidance information for the driver as an advanced driver-assistance system (ADAS). In this study, we propose an augmented reality (AR)-based ADAS that visualizes the guidance information calculated cooperatively by multiple connected vehicles. An unsignalized intersection scenario is adopted as the use case of this system, where the driver drives the connected vehicle across the intersection under AR guidance, without any full stop at the intersection. A simulation environment is built in the Unity game engine based on the road network of San Francisco, and human-in-the-loop (HITL) simulation is conducted to validate the effectiveness of the proposed system in terms of travel time and energy consumption.
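As a hedged illustration of the kind of guidance such a system could visualize, the sketch below computes an advisory speed that lets a vehicle reach the stop line inside a cooperatively assigned time slot without stopping. The slot negotiation itself (handled over vehicle-to-everything communication in the paper's setting) is out of scope here, and all function names and numbers are assumptions rather than the authors' algorithm.

```python
# Illustrative sketch (not the authors' system): compute the advisory speed an
# AR head-up display could show so that a connected vehicle arrives at an
# unsignalized intersection within its cooperatively assigned time slot and
# therefore never has to come to a full stop. All names and numbers are assumed.

def advisory_speed(dist_to_stop_line_m, slot_start_s, slot_end_s,
                   now_s=0.0, v_min=2.0, v_max=15.0):
    """Return a speed in m/s that puts the vehicle at the stop line inside
    [slot_start_s, slot_end_s], or None if no feasible speed exists."""
    v_slowest = dist_to_stop_line_m / max(slot_end_s - now_s, 1e-6)    # arrive at slot end
    v_fastest = dist_to_stop_line_m / max(slot_start_s - now_s, 1e-6)  # arrive at slot start
    lo, hi = max(v_slowest, v_min), min(v_fastest, v_max)
    if lo > hi:
        return None                     # slot cannot be met; a new slot would be needed
    return (lo + hi) / 2.0              # aim for the middle of the feasible band

# Example: 150 m from the stop line, slot opens in 10 s and closes in 14 s.
print(round(advisory_speed(150.0, 10.0, 14.0), 1))   # about 12.9 m/s, no stop required
```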
This study considers modern surgical navigation systems based on augmented reality technologies. Holograms of the patient's organs are constructed from MRI and CT data and subsequently transmitted to augmented reality glasses. Thus, in addition to seeing the actual patient, the surgeon gains visualization of the inside of the patient's body (bones, soft tissues, blood vessels, etc.). The solutions developed at Peter the Great St. Petersburg Polytechnic University reduce the invasiveness of the procedure and help preserve healthy tissues. They also improve the navigation process, making it easier to estimate the location and size of the tumor to be removed. We describe the application of the developed systems to different types of surgical operations (removal of a malignant brain tumor, removal of a cyst of the cervical spine). We consider the specifics of novel navigation systems designed for anesthesia and for endoscopic operations. Furthermore, we discuss the construction of novel visualization systems for ultrasound machines. Our findings indicate that the proposed technologies show potential for telemedicine.
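The abstract does not give implementation details; the sketch below only illustrates the general idea of registering scan data to the glasses' coordinate frame, i.e., applying a rigid transform so that a segmented organ model from MRI/CT overlays the real patient. The transform values, point coordinates, and function names are made up for illustration.

```python
import numpy as np

# Illustrative sketch (not the developed system): once a rigid registration
# between the scan (MRI/CT) frame and the AR glasses' world frame has been
# estimated, e.g. from fiducial markers, every point of a segmented organ model
# can be mapped into the glasses' frame so the hologram overlays the patient.
# The transform values and the outline points below are made up.

def to_homogeneous(points_xyz):
    """Convert Nx3 points into Nx4 homogeneous coordinates."""
    return np.hstack([points_xyz, np.ones((points_xyz.shape[0], 1))])

def map_to_glasses_frame(points_scan_mm, T_scan_to_world):
    """Apply a 4x4 rigid transform to points given in scan coordinates (mm)."""
    return (T_scan_to_world @ to_homogeneous(points_scan_mm).T).T[:, :3]

# Made-up registration: rotate 90 degrees about Z and shift by (100, 20, -50) mm.
T = np.array([[0.0, -1.0, 0.0, 100.0],
              [1.0,  0.0, 0.0,  20.0],
              [0.0,  0.0, 1.0, -50.0],
              [0.0,  0.0, 0.0,   1.0]])

tumor_outline_scan = np.array([[10.0, 0.0, 5.0],
                               [12.0, 3.0, 5.0]])
print(map_to_glasses_frame(tumor_outline_scan, T))
```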
Short-form digital storytelling has become a popular medium for millions of people to express themselves. Traditionally, this medium relies primarily on 2D media such as text (e.g., memes), images (e.g., Instagram), GIFs (e.g., Giphy), and videos (e.g., TikTok, Snapchat). To expand the modalities from 2D to 3D media, we present SceneAR, a smartphone application for creating sequential scene-based micro narratives in augmented reality (AR). What sets SceneAR apart from prior work is the ability to share the scene-based stories as AR content: no longer limited to sharing images or videos, these narratives can now be experienced in people's own physical environments. Additionally, SceneAR affords users the ability to remix AR, empowering them to build upon others' creations collectively. We asked 18 people to use SceneAR in a 3-day study. Based on user interviews, analysis of screen recordings, and the stories they created, we extracted three themes. From those themes and the study overall, we derived six strategies for designers interested in supporting short-form AR narratives.
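SceneAR's actual data model is not described in the abstract; the sketch below is a hypothetical, minimal representation of a sequential scene-based story plus a remix operation that copies another author's scene sequence so new scenes can be appended to it. All class and field names are assumptions made for illustration.

```python
from dataclasses import dataclass, field
from typing import List, Tuple

# Hypothetical sketch (not SceneAR's actual data model): a minimal representation
# of a sequential scene-based micro narrative, plus a "remix" helper that copies
# another author's scene sequence so new scenes can be appended to it.

@dataclass
class ARObject:
    model_id: str                         # reference to a 3D asset
    position: Tuple[float, float, float]  # placement in the viewer's own space

@dataclass
class Scene:
    caption: str
    objects: List[ARObject] = field(default_factory=list)

@dataclass
class Story:
    author: str
    scenes: List[Scene] = field(default_factory=list)
    remixed_from: str = ""                # author of the original story, if any

def remix(original: Story, new_author: str) -> Story:
    """Start a new story that builds upon someone else's scene sequence."""
    return Story(author=new_author,
                 scenes=list(original.scenes),
                 remixed_from=original.author)

base = Story(author="alice",
             scenes=[Scene("Once upon a time...", [ARObject("tree_01", (0, 0, -1))])])
sequel = remix(base, "bob")
sequel.scenes.append(Scene("...and then it rained.", [ARObject("cloud_02", (0, 1, -1))]))
print(len(base.scenes), len(sequel.scenes), sequel.remixed_from)   # 1 2 alice
```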
We present an early study designed to analyze how city planning and the health of senior citizens can benefit from the use of augmented reality (AR) with Microsoft's HoloLens. We also explore whether AR and VR can be used to help city planners receive real-time feedback from citizens, such as the elderly, on virtual plans, allowing informed decisions to be made before any construction begins.