We study the extent to which vibrotactile stimuli delivered to the head of a subject can serve as a platform for a brain-computer interface (BCI) paradigm. Six head positions are used to evoke combined somatosensory and auditory (via the bone conduction effect) brain responses, in order to define a multimodal tactile and auditory brain-computer interface (taBCI). Experimental results from subjects performing online taBCI, using stimuli with a moderately fast inter-stimulus interval (ISI), validate the taBCI paradigm, while the feasibility of the concept is illustrated through information transfer rate case studies.
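Information transfer rate in such case studies is conventionally computed with Wolpaw's formula. The sketch below shows the calculation for a six-class paradigm such as the six head positions used here; the accuracy and trial-duration values are hypothetical, not the paper's results.

```python
import math

def wolpaw_itr(n_classes: int, accuracy: float, trial_s: float) -> float:
    """Information transfer rate in bits/min (Wolpaw's formula; assumes
    equiprobable classes and a fixed time per selection)."""
    if accuracy <= 1.0 / n_classes:
        return 0.0                      # at or below chance: no information
    if accuracy >= 1.0:
        bits = math.log2(n_classes)     # perfect classification
    else:
        bits = (math.log2(n_classes)
                + accuracy * math.log2(accuracy)
                + (1 - accuracy) * math.log2((1 - accuracy) / (n_classes - 1)))
    return bits * 60.0 / trial_s

# Hypothetical values: 6 stimulus positions, 80% accuracy, 3 s per selection.
print(f"{wolpaw_itr(6, 0.80, 3.0):.1f} bit/min")
```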
Ingmar Steiner (2012)
The importance of modeling speech articulation for high-quality audiovisual (AV) speech synthesis is widely acknowledged. Nevertheless, while state-of-the-art, data-driven approaches to facial animation can make use of sophisticated motion capture techniques, the animation of the intraoral articulators (viz. the tongue, jaw, and velum) typically relies on simple rules or viseme morphing, in stark contrast to the otherwise high quality of facial modeling. Using appropriate speech production data could significantly improve the quality of articulatory animation for AV synthesis.
This paper explored whether human beings can understand gestures produced by telepresence robots and, if so, whether they can derive the meaning conveyed by telerobotic gestures when processing spatial information. We conducted two experiments over Skype. Participants were presented with a robotic interface that had arms, which were teleoperated by an experimenter. The robot could point to virtual locations that represented certain entities. In Experiment 1, the experimenter described the spatial locations of fictitious objects sequentially in two conditions: a speech condition (SO, in which verbal descriptions clearly indicated the spatial layout) and a speech-and-gesture condition (SR, in which verbal descriptions were ambiguous but accompanied by robotic pointing gestures). Participants were then asked to recall the objects' spatial locations. We found that the number of spatial locations recalled in the SR condition was on par with that in the SO condition, suggesting that telerobotic pointing gestures compensated for ambiguous speech during the processing of spatial information. In Experiment 2, the experimenter described spatial locations non-sequentially in the SR and SO conditions. Surprisingly, the number of spatial locations recalled in the SR condition was even higher than that in the SO condition, suggesting that telerobotic pointing gestures were more powerful than speech in conveying spatial information presented in an unpredictable order. These findings provide evidence that human beings are able to comprehend telerobotic gestures and, importantly, to integrate these gestures with co-occurring speech. This work promotes engaging remote collaboration among humans through a robot intermediary.
Crowd algorithms often assume workers are inexperienced and thus fail to adapt as workers in the crowd learn a task. These assumptions fundamentally limit the types of tasks that systems based on such algorithms can handle. This paper explores how the crowd learns and remembers over time in the context of human computation, and how more realistic assumptions about worker experience may be used when designing new systems. We first demonstrate that the crowd can recall information over time and discuss possible implications of crowd memory for the design of crowd algorithms. We then explore crowd learning during a continuous control task. Recent systems are able to disguise dynamic groups of workers as crowd agents to support continuous tasks, but have not yet considered how such agents are able to learn over time. We show, using a real-time gaming setting, that crowd agents can learn over time and 'remember' by passing strategies from one generation of workers to the next, despite high turnover rates in the workers comprising them. We conclude with a discussion of future research directions for crowd memory and learning.
Ingmar Steiner (2012)
We present a modular framework for articulatory animation synthesis using speech motion capture data obtained with electromagnetic articulography (EMA). Adapting a skeletal animation approach, the articulatory motion data is applied to a three-dimensional (3D) model of the vocal tract, creating a portable resource that can be integrated in an audiovisual (AV) speech synthesis platform to provide realistic animation of the tongue and teeth for a virtual character. The framework also provides an interface to articulatory animation synthesis, as well as an example application to illustrate its use with a 3D game engine. We rely on cross-platform, open-source software and open standards to provide a lightweight, accessible, and portable workflow.
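As a rough illustration of the retargeting step in such a skeletal approach, here is a minimal sketch (not the paper's actual code; the coil names, bone names, and coordinates are all assumptions) of how per-frame EMA coil positions could be turned into bone offsets for a rig:

```python
import numpy as np

# Hypothetical mapping from EMA coil names to bones of a tongue/jaw rig;
# none of these identifiers come from the paper.
COIL_TO_BONE = {"tongue_tip": "TongueTip", "tongue_body": "TongueBody", "jaw": "Jaw"}

def retarget_frame(coil_positions, rest_pose):
    """Turn one frame of EMA coil positions (mm, articulograph space)
    into per-bone translation offsets relative to a rest pose."""
    return {bone: np.asarray(coil_positions[coil]) - np.asarray(rest_pose[coil])
            for coil, bone in COIL_TO_BONE.items()}

# Example frame and rest-pose positions (hypothetical values).
frame = {"tongue_tip": (62.1, 0.4, -8.3), "tongue_body": (40.5, 0.2, -12.0), "jaw": (55.0, 0.0, -30.1)}
rest = {"tongue_tip": (60.0, 0.0, -10.0), "tongue_body": (40.0, 0.0, -12.5), "jaw": (55.0, 0.0, -28.0)}
for bone, offset in retarget_frame(frame, rest).items():
    print(bone, offset)
```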
Cognitive dissonance is the stress that comes from holding two conflicting thoughts in the mind simultaneously, usually arising when people are asked to choose between two detrimental or two beneficial options. In view of the well-established role of emotions in decision making, here we investigate whether the conventional structural models used to represent the relationships among basic emotions, such as the Circumplex model of affect, can also describe the emotions of cognitive dissonance. We presented a questionnaire to 34 anonymous participants, where each question described a decision to be made between two conflicting motivations, and asked the participants to rate on analog scales the pleasantness and the intensity of the emotion they experienced. We found that the results were compatible with the predictions of the Circumplex model for basic emotions.
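The Circumplex model places each emotion on a circle spanned by valence (pleasantness) and arousal (intensity), so a rating pair maps naturally to polar coordinates. A minimal sketch of that mapping, with a hypothetical rating rather than the study's data:

```python
import math

def circumplex_coords(valence: float, arousal: float):
    """Map a (valence, arousal) rating pair, each in [-1, 1], to polar
    coordinates on the Circumplex: angle in degrees and radius."""
    angle = math.degrees(math.atan2(arousal, valence)) % 360.0
    radius = math.hypot(valence, arousal)
    return angle, radius

# Hypothetical rating: mildly unpleasant (-0.4) but fairly intense (0.7),
# landing in the upper-left (distress) quadrant of the circle.
print(circumplex_coords(-0.4, 0.7))
```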
Electronic mail, with its various implications, is a basic component of Internet applications. This paper proposes a mechanism for automatically classifying emails and creating dynamic groups to which these messages belong. The proposed mechanism is based on natural language processing techniques and is designed to facilitate human-machine interaction in this direction.
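The paper does not specify its pipeline, but one common NLP approach to forming such dynamic groups is to vectorize message text and cluster it. A minimal sketch using TF-IDF and k-means on a hypothetical corpus (the messages and cluster count are illustrative assumptions):

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans

# Hypothetical email bodies.
emails = [
    "Meeting moved to 3pm, see agenda attached",
    "Your invoice for March is ready",
    "Agenda for tomorrow's project meeting",
    "Payment reminder: invoice overdue",
]

# Represent each message as a TF-IDF vector over its words.
vectors = TfidfVectorizer(stop_words="english").fit_transform(emails)

# Group the messages into k dynamic clusters (k chosen by hand here).
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(vectors)
for label, text in zip(labels, emails):
    print(label, text)
```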
Bradly Alicea (2011)
The relationship between physiological systems and modern electromechanical technologies is fast becoming intimate, with high degrees of complex interaction. It can be argued that muscular function, limb movements, and touch perception serve supervisory functions for movement control in motion- and touch-based (e.g. manipulable) devices/interfaces and in human-machine interfaces in general. Testing this hypothesis requires novel techniques and analyses that demonstrate the multifaceted and regulatory role of adaptive physiological processes in these interactions. Neuromechanics is an approach that unifies the role of physiological function, motor performance, and environmental effects in determining human performance. A neuromechanical perspective is used here to explain the effect of environmental fluctuations on supervisory mechanisms, which leads to adaptive physiological responses. Three experiments are presented using two different types of virtual environment that allowed for selective switching between two sets of environmental forces. This switching was done in various ways to maximize the variety of results. Electromyography (EMG) and kinematic information contributed to the development of human performance-related measures. Both descriptive and specialized analyses were conducted: peak amplitude analysis, loop trace analysis, and the analysis of unmatched muscle power. The results provide a window into performance under a range of conditions. These analyses also demonstrate the myriad consequences of force-related fluctuations for dynamic physiological regulation. The findings could be applied to the dynamic control of touch-based and movement-sensitive human-machine systems. In particular, the design of systems such as human-robotic systems, touch screen devices, and rehabilitative technologies could benefit from this research.
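To make one of the descriptive analyses concrete, here is a minimal sketch of peak amplitude analysis on an EMG trace. The envelope window, burst structure, and sampling rate are assumptions for illustration, not the study's parameters.

```python
import numpy as np
from scipy.signal import find_peaks

def peak_amplitudes(emg: np.ndarray, fs: float, min_gap_s: float = 0.1):
    """Rectify a raw EMG trace, smooth it into a moving-average envelope,
    and return the amplitudes of the envelope's local peaks."""
    rectified = np.abs(emg - emg.mean())            # remove DC offset, rectify
    win = max(1, int(0.05 * fs))                    # 50 ms smoothing window
    envelope = np.convolve(rectified, np.ones(win) / win, mode="same")
    peaks, _ = find_peaks(envelope, distance=int(min_gap_s * fs))
    return envelope[peaks]

# Synthetic example: 1 s of baseline noise with two activity bursts, 1 kHz.
fs = 1000.0
rng = np.random.default_rng(0)
sig = rng.normal(0, 0.05, 1000)
sig[200:300] += rng.normal(0, 0.5, 100)             # burst 1
sig[700:800] += rng.normal(0, 0.8, 100)             # burst 2
print(peak_amplitudes(sig, fs))
```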
Historically, the training of teams has been implemented using a face-to-face approach. In the past decade, online multi-user virtual environments have offered a solution for training teams whose members are geographically dispersed. To develop an effective team-training activity, a high sense of presence among the participants needs to be reached. Previous research studies reported reaching a high level of presence even when using inexpensive technology such as a laptop and headset. This study evaluates the level of presence of ten subjects who had to perform a team-building activity in a multi-user virtual environment using a laptop computer and a headset. The authors are interested in determining which user characteristics, such as gender, age, and knowledge of computers, correlate strongly with the sense of presence. The results of this study showed that female participants were more likely to engage in the activity and perceived fewer negative effects. Participants who reported fewer negative effects, such as feeling tired, dizzy, or experiencing eye strain during the team-building activity, reached a higher level of sense of presence.
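The correlation question can be made concrete with standard statistics: Pearson's r for continuous characteristics such as age, and a point-biserial correlation for binary ones such as gender. A sketch with entirely hypothetical data for ten participants (not the study's measurements):

```python
from scipy.stats import pearsonr, pointbiserialr

# Hypothetical data: age in years, presence score on a 1-5 scale,
# gender coded 1 = female, 0 = male.
age = [22, 25, 31, 28, 24, 35, 29, 26, 23, 30]
presence = [4.2, 3.8, 3.1, 4.5, 4.0, 2.9, 3.6, 4.1, 4.4, 3.3]
is_female = [1, 0, 1, 1, 0, 0, 1, 0, 1, 0]

r_age, p_age = pearsonr(age, presence)              # continuous vs continuous
r_sex, p_sex = pointbiserialr(is_female, presence)  # binary vs continuous
print(f"age:    r={r_age:.2f}, p={p_age:.3f}")
print(f"gender: r={r_sex:.2f}, p={p_sex:.3f}")
```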
Bradly Alicea (2011)
In this paper, I will attempt to establish a framework for representation in virtual worlds that may allow for input data from many different scales and virtual physics to be merged. For example, a typical virtual environment must effectively handle user input, sensor data, and virtual world physics all in real-time. Merging all of these data into a single interactive system requires that we adapt approaches from topological methods such as n-dimensional relativistic representation. A number of hypothetical examples will be provided throughout the paper to clarify technical challenges that need to be overcome to realize this vision. The long-term goal of this work is that truly invariant representations will ultimately result from establishing formal, inclusive relationships between these different domains. Using this framework, incomplete information in one or more domains can be compensated for by parallelism and mappings within the virtual world representation. To introduce this approach, I will review recent developments in embodiment, virtual world technology, and neuroscience relevant to the control of virtual worlds. The next step will be to borrow ideas from fields such as brain science, applied mathematics, and cosmology to give proper perspective to this approach. A simple demonstration will then be given using an intuitive example of physical relativism. Finally, future directions for the application of this method will be considered.