Despite the extensive development of architectural parametric platforms, parametric design is often interpreted as an architectural style rather than a computational method. There also remains a lack of knowledge and skill in the technical application of parametric design to architectural modelling. Students often dive into complex digital modelling without a competent pedagogical context for learning algorithmic thinking and the logic behind digital and parametric modelling. Insufficient skills and superficial knowledge often lead to using modelling software through trial and error, without taking full advantage of what it has to offer. Geometric transformations, as the fundamental functions of parametric modelling, are explored in this study to anchor the learning of essential components of parametric modelling. Students need to understand the differences between variables, parameters, and functions, and the relations among them. Fologram, an Augmented Reality tool, is used in this study to teach geometric transformation and its components in an intuitive way. A LEGO set serves as an editable physical model to improve spatial skills through hand movement while providing instant feedback in the physical environment.
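As a minimal illustration of that distinction (not drawn from the study itself), a single geometric transformation can be written so that the variables, the parameter, and the function are each explicit; the rotation example below is an assumption chosen only for simplicity:

```python
import math

def rotate(point, angle_deg):
    """Rotate a 2D point about the origin; the function is the transformation."""
    x, y = point                       # variables: the geometry being acted on
    a = math.radians(angle_deg)        # parameter: the value driving the result
    return (x * math.cos(a) - y * math.sin(a),
            x * math.sin(a) + y * math.cos(a))

# Changing only the parameter regenerates the geometry, which is the core idea
# of parametric modelling that the study anchors in geometric transformations.
for angle in (0, 30, 60, 90):
    print(angle, rotate((1.0, 0.0), angle))
```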
We present RealitySketch, an augmented reality interface for sketching interactive graphics and visualizations. In recent years, an increasing number of AR sketching tools have enabled users to draw and embed sketches in the real world. However, with current tools, sketched content is inherently static, floating in midair without responding to the real world. This paper introduces a new way to embed dynamic and responsive graphics in the real world. In RealitySketch, the user draws graphical elements on a mobile AR screen and binds them with physical objects in real-time, improvisational ways, so that the sketched elements dynamically move with the corresponding physical motion. The user can also quickly visualize and analyze real-world phenomena through responsive graph plots or interactive visualizations. This paper contributes a set of interaction techniques that enable capturing, parameterizing, and visualizing real-world motion without pre-defined programs and configurations. Finally, we demonstrate our tool with several application scenarios, including physics education, sports training, and in-situ tangible interfaces.
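A minimal sketch of the underlying binding idea, with all names assumed rather than taken from RealitySketch's implementation: a sketched parameter (here an angle) is bound to tracked positions of a physical object and re-evaluated on every tracking update, so the graphic and any derived plot stay in sync with the physical motion.

```python
import math

def angle_between(anchor, tracked):
    """Angle (degrees) of the tracked point relative to a fixed anchor point."""
    dx, dy = tracked[0] - anchor[0], tracked[1] - anchor[1]
    return math.degrees(math.atan2(dy, dx))

class BoundAngle:
    """Binds a sketched angle parameter to a stream of tracked positions."""
    def __init__(self, anchor):
        self.anchor = anchor
        self.samples = []

    def update(self, tracked_position):
        value = angle_between(self.anchor, tracked_position)
        self.samples.append(value)      # the history could feed a responsive plot
        return value

# Simulated tracker updates for a pendulum-like motion.
binding = BoundAngle(anchor=(0.0, 0.0))
for pos in [(1.0, 0.0), (0.9, 0.4), (0.7, 0.7)]:
    print(round(binding.update(pos), 1))
```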
We contribute MobileVisFixer, a new method to make visualizations more mobile-friendly. Although mobile devices have become the primary means of accessing information on the web, many existing visualizations are not optimized for small screens and can lead to a frustrating user experience. Currently, practitioners and researchers have to engage in a tedious and time-consuming process to ensure that their designs scale to screens of different sizes, and existing toolkits and libraries provide little support in diagnosing and repairing issues. To address this challenge, MobileVisFixer automates a mobile-friendly visualization re-design process with a novel reinforcement learning framework. To inform the design of MobileVisFixer, we first collected and analyzed SVG-based visualizations on the web and identified five common mobile-friendliness issues. MobileVisFixer addresses four of these issues on single-view Cartesian visualizations with linear or discrete scales through a Markov Decision Process model that is both generalizable across various visualizations and fully explainable. MobileVisFixer deconstructs charts into declarative formats and uses a greedy heuristic based on Policy Gradient methods to find solutions to this difficult, multi-criteria optimization problem in reasonable time. In addition, MobileVisFixer can be easily extended by incorporating further optimization algorithms for data visualizations. Quantitative evaluation on two real-world datasets demonstrates the effectiveness and generalizability of our method.
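A toy sketch of the greedy, reward-driven repair idea (the action names, cost function, and thresholds below are all assumptions, not the paper's formulation): each candidate action is scored by how many mobile-friendliness issues it removes from a declarative chart specification, and the best one is applied repeatedly until no action helps.

```python
# Toy repair actions over a declarative chart spec (assumed, for illustration).
ACTIONS = {
    "shrink_font":   lambda spec: {**spec, "font_px": max(10, spec["font_px"] - 2)},
    "rotate_labels": lambda spec: {**spec, "label_angle": 45},
    "rescale_width": lambda spec: {**spec, "width_px": 360},
}

def issue_count(spec, screen_px=360):
    """Toy cost: counts remaining mobile-unfriendly properties of the spec."""
    issues = 0
    issues += spec["width_px"] > screen_px                       # chart overflows screen
    issues += spec["font_px"] < 10                               # text too small to read
    issues += spec["label_angle"] == 0 and spec["n_ticks"] > 8   # axis labels overlap
    return issues

def greedy_fix(spec):
    while issue_count(spec) > 0:
        # Pick the action whose result removes the most issues (highest reward).
        best = max(ACTIONS.values(),
                   key=lambda act: issue_count(spec) - issue_count(act(spec)))
        fixed = best(spec)
        if issue_count(fixed) >= issue_count(spec):
            break                                                # no action helps; stop
        spec = fixed
    return spec

print(greedy_fix({"width_px": 900, "font_px": 14, "label_angle": 0, "n_ticks": 12}))
```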
Modular forms are highly self-symmetric functions studied in number theory, with connections to several areas of mathematics. But they are rarely visualized. We discuss ongoing work to compute and visualize modular forms as 3D surfaces and to use these techniques to make videos flying around the peaks and canyons of these modular terrains. Our goal is to make beautiful visualizations exposing the symmetries of these functions.
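For context, the self-symmetry in question is the weight-k transformation law below; one plausible way to obtain a terrain, stated here only as an assumption about the general approach rather than the authors' exact construction, is to plot a compressed magnitude of f as a height over the upper half-plane.

```latex
% Weight-k modular form for SL_2(Z): the transformation law producing the
% self-symmetry visible in the rendered terrain.
\[
  f\!\left(\frac{a\tau + b}{c\tau + d}\right) = (c\tau + d)^{k}\, f(\tau),
  \qquad
  \begin{pmatrix} a & b \\ c & d \end{pmatrix} \in \mathrm{SL}_2(\mathbb{Z}),
  \quad \tau \in \mathbb{H}.
\]
% One plausible height function for a 3D surface (an assumption, not the
% authors' stated choice): a compressed magnitude over the upper half-plane.
\[
  h(\tau) = \log\bigl(1 + |f(\tau)|\bigr).
\]
```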
There is a nationwide drive to get more girls into physics and coding, and some educators believe gaming could be a way to get girls interested in coding and STEM topics. This NSF-sponsored project creates a QCD game intended to raise public interest in QCD, especially among K-12 girls, and to increase interest in coding among girls. Through the immersive framework of interactive gameplay, this QCD phone game allows the public to peek into the QCD research world. The game design falls into the Match 3 genre, which typically attracts a higher ratio of female players. The game is implemented initially as a phone app, and the gameplay requires learning simple QCD rules to progress. By leveraging players' willingness to engage with the rules of an entertaining game, the app lets them easily learn a few principles of physics. The game is now available to download from the Google Play store (https://play.google.com/store/apps/details?id=com.gellab.quantum3) and the Apple App Store (https://itunes.apple.com/gb/app/quantum-3/id1406630529). We formed a development team of MSU undergraduate students to make the game and provided them with a QCD curriculum. The game will be tested at MSU outreach activities, as well as among local K-12 girls through school activities, and feedback will be used to improve the design. The final game can be easily distributed through various app stores, and impact will be measured through a follow-up survey. If this new direction succeeds in attracting more girls to coding and physics, more such games should be developed to engage more girls in STEM.
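As one plausible example of the kind of rule such a game could encode (an assumption for illustration, not the shipped mechanics of Quantum 3), a Match-3 move might be accepted only when the selected quark tiles form a color-neutral combination, mirroring confinement:

```python
# Hypothetical sketch, not the actual game logic: a move is valid only if the
# selected tiles are color neutral, either one quark of each color (baryon-like)
# or a color paired with its anticolor (meson-like).
COLORS = {"red", "green", "blue"}
ANTI = {"antired": "red", "antigreen": "green", "antiblue": "blue"}

def is_color_neutral(tiles):
    tiles = list(tiles)
    if sorted(tiles) == sorted(COLORS):                 # red + green + blue
        return True
    if len(tiles) == 2:
        a, b = tiles
        return ANTI.get(a) == b or ANTI.get(b) == a     # color + its anticolor
    return False

print(is_color_neutral(["red", "green", "blue"]))   # True
print(is_color_neutral(["red", "antired"]))         # True
print(is_color_neutral(["red", "blue", "blue"]))    # False
```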
We present a modular framework for articulatory animation synthesis using speech motion capture data obtained with electromagnetic articulography (EMA). Adapting a skeletal animation approach, we apply the articulatory motion data to a three-dimensional (3D) model of the vocal tract, creating a portable resource that can be integrated into an audiovisual (AV) speech synthesis platform to provide realistic animation of the tongue and teeth for a virtual character. The framework also provides an interface for articulatory animation synthesis, as well as an example application that illustrates its use with a 3D game engine. We rely on cross-platform, open-source software and open standards to provide a lightweight, accessible, and portable workflow.
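A minimal sketch of the mapping step, with coil and bone names assumed rather than taken from the framework: each EMA coil trajectory drives one bone of the skeletal tongue/jaw rig, after which the game engine's standard skeletal animation machinery can replay the captured motion.

```python
# Hypothetical coil-to-bone mapping (names are assumptions, not the framework's API).
COIL_TO_BONE = {
    "tongue_tip":    "bone_tongue_tip",
    "tongue_body":   "bone_tongue_body",
    "tongue_dorsum": "bone_tongue_dorsum",
    "lower_incisor": "bone_jaw",
}

def ema_frame_to_pose(frame):
    """Convert one EMA sample (coil name -> 3D position, in mm) into a pose
    dictionary of bone translations for the game-engine rig."""
    pose = {}
    for coil, position in frame.items():
        bone = COIL_TO_BONE.get(coil)
        if bone is not None:
            pose[bone] = position        # a real pipeline would also rotate and
    return pose                          # scale positions into rig space

# One EMA sample, positions in millimetres relative to a reference plane.
sample = {"tongue_tip": (52.1, 3.4, 18.9), "lower_incisor": (60.0, 0.0, -8.5)}
print(ema_frame_to_pose(sample))
```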