
Interacting with Acoustic Simulation and Fabrication

Added by: Dingzeyu Li
Publication date: 2017
Language: English
Authors: Dingzeyu Li





Incorporating accurate physics-based simulation into interactive design tools is challenging, yet accurate physics is crucial to several emerging technologies. For example, in virtual/augmented reality (VR/AR) videos, faithful reproduction of the surrounding audio is required to bring immersion to the next level. Similarly, as personal fabrication becomes possible with accessible 3D printers, more intuitive tools that respect physical constraints can help artists prototype their designs. One main hurdle is the sheer computational complexity of accurately reproducing real-world phenomena through physics-based simulation. In my thesis research, I develop interactive tools built on efficient physics-based simulation algorithms for automatic optimization and intuitive user interaction.




Related research

We present AirCode, a technique that allows the user to tag physically fabricated objects with given information. An AirCode tag consists of a group of carefully designed air pockets placed beneath the object surface. These air pockets are easily produced during the fabrication process of the object, without any additional material or postprocessing. Meanwhile, the air pockets affect only the scattering light transport under the surface, and thus are hard to notice with the naked eye. Using a computational imaging method, however, the tags become detectable. We present a tool that automates the design of air pockets for the user to encode information. The AirCode system also allows the user to retrieve the information from captured images via a robust decoding algorithm. We demonstrate our tagging technique with applications in metadata embedding, robotic grasping, and conveying object affordances.
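As a purely illustrative sketch of the encoding idea (the grid layout, cell meaning, and function names below are assumptions, not the AirCode system's actual code design), one can picture the tag as a binary grid in which each cell marks whether an air pocket is placed beneath that location:

    # Illustrative sketch only (assumed layout, not the AirCode paper's code design):
    # map a byte string onto a square grid of cells, where True means
    # "place an air pocket beneath this cell".
    import numpy as np

    def bits_to_pocket_grid(payload, grid_size=16):
        bits = np.unpackbits(np.frombuffer(payload, dtype=np.uint8))
        if len(bits) > grid_size * grid_size:
            raise ValueError("payload too large for this grid")
        grid = np.zeros(grid_size * grid_size, dtype=bool)
        grid[:len(bits)] = bits.astype(bool)
        return grid.reshape(grid_size, grid_size)

    layout = bits_to_pocket_grid(b"cup-v2")   # hypothetical metadata tag
    print(layout.astype(int))

A real tag would additionally need anchor markers and error-correcting codes so the decoder can locate and robustly read the grid from a captured image.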
We present a system that allows users to visualize complex human motion via 3D motion sculptures: a representation that conveys the 3D structure swept by a human body as it moves through space. Given an input video, our system computes the motion sculpture and provides a user interface for rendering it in different styles, including options to insert the sculpture back into the original video, render it in a synthetic scene, or physically print it. To provide this end-to-end workflow, we introduce an algorithm that estimates a human's 3D geometry over time from a set of 2D images, and develop a 3D-aware image-based rendering approach that embeds the sculpture back into the scene. By automating the process, our system takes motion sculpture creation out of the realm of professional artists and makes it applicable to a wide range of existing video material. By providing viewers with 3D information, motion sculptures reveal space-time motion information that is difficult to perceive with the naked eye, and allow viewers to interpret how different parts of the object interact over time. We validate the effectiveness of this approach with user studies, finding that our motion sculpture visualizations are significantly more informative about motion than existing stroboscopic and space-time visualization methods.
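Conceptually, the sculpture is the union of the body's estimated surface meshes across frames. The following is a minimal sketch under assumed inputs (per-frame vertex arrays sharing one triangle list; the names and data format are hypothetical, not the paper's): it simply concatenates the per-frame meshes into one swept mesh that could then be rendered or 3D printed.

    # Rough sketch (assumed data format, not the authors' implementation):
    # build a motion sculpture by concatenating per-frame body meshes into one mesh.
    import numpy as np

    def sweep_meshes(frame_vertices, faces):
        """frame_vertices: list of (V, 3) arrays, the body's estimated surface per frame.
        faces: (F, 3) integer array of triangle indices shared by every frame.
        Returns (all_vertices, all_faces) describing the swept 'sculpture' mesh."""
        all_vertices = []
        all_faces = []
        offset = 0
        for verts in frame_vertices:
            all_vertices.append(verts)
            all_faces.append(faces + offset)  # re-index faces for this frame's vertex block
            offset += len(verts)
        return np.vstack(all_vertices), np.vstack(all_faces)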
This paper presents a study of group behavior in a controlled experiment focused on an important attribute that varies across cultures, personal space, in two countries: Brazil and Germany. To coherently compare the two countries with comparable populations performing the same task, we replicated in Brazil the pedestrian Fundamental Diagram experiment previously performed in Germany. We use CNNs to detect and track people in the video sequences, build Voronoi diagrams to determine the neighbor relations among people, and then compute walking distances to estimate personal spaces. Based on the personal-space analysis, we found that people behave more similarly in high-density populations and vary more at low and medium densities. We therefore focused our study on cultural differences between the two countries at low and medium densities. Results indicate that personal-space analysis can be a relevant feature for understanding cultural aspects in video sequences. In addition to cultural differences, we also investigate personality in crowds using the OCEAN model, and propose a way to simulate the Fundamental Diagram experiment for other countries using OCEAN psychological traits as input. The simulated countries were consistent with the literature.
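The neighbor-relation and distance step of this pipeline can be illustrated with a short sketch. The snippet below (function and variable names are assumptions, not the authors' code) takes tracked 2D positions for one frame, derives the neighbor relation from the ridges of the Voronoi diagram, and estimates each person's personal space as the distance to the closest neighbor:

    # Minimal sketch (assumed, not the authors' code): estimate personal space
    # per frame from tracked pedestrian positions via Voronoi neighbor relations.
    import numpy as np
    from scipy.spatial import Voronoi

    def personal_spaces(positions):
        """positions: (N, 2) array of tracked pedestrian positions in one frame.
        Returns an (N,) array with each person's distance to the nearest Voronoi neighbor."""
        vor = Voronoi(positions)
        nearest = np.full(len(positions), np.inf)
        # ridge_points lists pairs of input points whose Voronoi cells share a boundary,
        # i.e., the neighbor relation induced by the diagram.
        for i, j in vor.ridge_points:
            d = np.linalg.norm(positions[i] - positions[j])
            nearest[i] = min(nearest[i], d)
            nearest[j] = min(nearest[j], d)
        return nearest

    # Example: five tracked pedestrians (positions in meters)
    frame = np.array([[0.0, 0.0], [1.2, 0.1], [0.4, 1.5], [2.5, 2.0], [1.0, 2.2]])
    print(personal_spaces(frame))

Repeating this per frame and grouping the results by crowd density would yield the density-dependent personal-space statistics that the study compares across the two countries.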
Michael J. Hartmann (2016)
Enhancing optical nonlinearities so that they become appreciable at the single-photon level and lead to nonclassical light fields has been a central objective in quantum optics for many years. Now that this has been achieved in individual micro-cavities, which represent an effectively zero-dimensional volume, this line of research has shifted its focus towards engineering devices where such strong optical nonlinearities occur simultaneously in extended volumes spanning multiple nodes of a network. Recent technological progress in several experimental platforms opens the possibility of employing the resulting systems of strongly interacting photons as quantum simulators. Here we review the recent development and current status of this research direction in both theory and experiment. Addressing both optical photons interacting with atoms and microwave photons in networks of superconducting circuits, we focus on analogue quantum simulations in scenarios where effective photon-photon interactions exceed dissipative processes in the considered platforms.
Providing pedestrians and other vulnerable road users with a clear indication of a fully autonomous vehicle's status and intentions is crucial for safe coexistence. In the last few years, a variety of external interfaces have been proposed, leveraging different paradigms and technologies, including vehicle-mounted devices (like LED panels), short-range on-road projections, and road-infrastructure interfaces (e.g., special asphalts with embedded displays). These designs were evaluated in different settings, using mockups, specially prepared vehicles, or virtual environments, with heterogeneous evaluation metrics. Promising interfaces based on Augmented Reality (AR) have been proposed too, but their usability and effectiveness have not yet been tested. This paper aims to complement this body of literature by presenting a comparison of state-of-the-art interfaces and new designs under common conditions. To this end, an immersive Virtual Reality-based simulation was developed, recreating a well-known scenario in which pedestrians cross in urban environments under non-regulated conditions. A user study was then performed to investigate the various dimensions of vehicle-to-pedestrian interaction using objective and subjective metrics. Even though no interface clearly stood out across all the considered dimensions, one of the AR designs achieved state-of-the-art results in terms of safety and trust, at the cost of higher cognitive effort and lower intuitiveness compared to LED panels showing anthropomorphic features. Together with rankings on the various dimensions, the indications about advantages and drawbacks of the various alternatives that emerged from this study could provide important information for future developments in the field.