
Collaborative model of interaction and Unmanned Vehicle Systems interface

Added by Sylvie Saget
Publication date: 2008
Language: English





The interface for the next generation of Unmanned Vehicle Systems should offer multi-modal displays and input controls. The role of the interface will then no longer be restricted to supporting the interactions between the ground operator and the vehicles: the interface must take part in interaction management as well. In this paper, we show that recent work in pragmatics and philosophy provides a suitable theoretical framework for the next generation of UV Systems interface. We concentrate on two main aspects of the collaborative model of interaction based on acceptance: a multi-strategy approach to the generation and interpretation of communicative acts, and communicative alignment.
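The multi-strategy approach is only described at the level of the model in this abstract; as a purely illustrative sketch (hypothetical names and thresholds, Python chosen for convenience), an interaction manager could select between a costly full-grounding strategy and a lighter acceptance-based strategy depending on how critical a communicative act is and how loaded the operator is:

from dataclasses import dataclass

@dataclass
class DialogueContext:
    operator_workload: float  # 0.0 (idle) .. 1.0 (saturated)
    act_criticality: float    # 0.0 (routine) .. 1.0 (safety-critical)

def choose_strategy(ctx: DialogueContext) -> str:
    """Pick how to generate (or interpret) a communicative act:
    full grounding = explicit confirmation; acceptance = assume
    uptake unless the partner objects."""
    if ctx.act_criticality > 0.8:    # safety-critical: always ground explicitly
        return "full_grounding"
    if ctx.operator_workload > 0.6:  # busy operator: prefer the cheaper strategy
        return "acceptance"
    return "full_grounding"

print(choose_strategy(DialogueContext(operator_workload=0.7, act_criticality=0.3)))  # acceptance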




Related research

After presenting the broad context of authority sharing, we outline how introducing more natural interaction in the design of the ground operator interface (GOI) of UV systems should help a single operator manage the complexity of his or her task. Introducing new modalities is one of the means of realizing our vision of the next-generation GOI. A more fundamental aspect resides in the interaction manager, which should help balance the operator's workload between mission and interaction, notably by applying a multi-strategy approach to generation and interpretation. We intend to apply these principles in the context of the Smaart prototype, and in this perspective we illustrate how to characterize the workload associated with a particular operational situation.
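The workload characterization itself is not reproduced in this summary; as a stand-in, the hypothetical sketch below treats the workload of an operational situation as a weighted combination of mission demands and interaction demands, the quantity the interaction manager would try to keep balanced:

def workload(mission_demands, interaction_demands,
             w_mission=0.6, w_interaction=0.4):
    """Crude scalar workload in [0, 1]: each task contributes a demand
    in [0, 1]; the two channels are combined with fixed weights."""
    mission = min(1.0, sum(mission_demands))
    interaction = min(1.0, sum(interaction_demands))
    return w_mission * mission + w_interaction * interaction

# Two demanding mission tasks plus an ongoing clarification dialogue:
print(workload([0.5, 0.4], [0.3]))  # 0.66

Under such a measure, switching the dialogue to an acceptance strategy lowers the interaction term, which is the balancing act described above.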
Human collaborators can effectively communicate with their partners to finish a common task by inferring each other's mental states (e.g., goals, beliefs, and desires). Such mind-aware communication minimizes the discrepancy among collaborators' mental states and is crucial to success in human ad-hoc teaming. We believe that robots collaborating with human users should demonstrate similar pedagogic behavior. Thus, in this paper, we propose a novel explainable AI (XAI) framework for achieving human-like communication in human-robot collaborations, where the robot builds a hierarchical mind model of the human user and generates explanations of its own mind as a form of communication, based on its online Bayesian inference of the user's mental state. To evaluate our framework, we conduct a user study on a real-time human-robot cooking task. Experimental results show that the explanations generated by our approach significantly improve collaboration performance and user perception of the robot. Code and video demos are available on our project website: https://xfgao.github.io/xCookingWeb/.
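The abstract does not detail the inference step; the sketch below shows the generic online Bayesian update it relies on, with an entirely hypothetical goal set and likelihood table standing in for the paper's hierarchical mind model:

def bayes_update(prior, likelihood, action):
    """Posterior over goals after observing one user action.
    prior: {goal: P(goal)}; likelihood: {goal: {action: P(action | goal)}}."""
    unnorm = {g: prior[g] * likelihood[g].get(action, 1e-9) for g in prior}
    z = sum(unnorm.values())
    return {g: p / z for g, p in unnorm.items()}

# Hypothetical cooking goals and one observed action:
prior = {"make_soup": 0.5, "make_salad": 0.5}
likelihood = {
    "make_soup":  {"grab_pot": 0.8, "grab_bowl": 0.2},
    "make_salad": {"grab_pot": 0.1, "grab_bowl": 0.9},
}
print(bayes_update(prior, likelihood, "grab_pot"))  # make_soup: ~0.89

Explanations can then be generated whenever the posterior reveals a large discrepancy between the robot's plan and the user's inferred goal.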
Autonomous driving has been the subject of increased interest in recent years, both in industry and in academia. Serious efforts are being pursued to address legal, technical, and logistical problems and make autonomous cars a viable option for everyday transportation. One significant challenge is the time and effort required for the verification and validation of the decision and control algorithms employed in these vehicles to ensure a safe and comfortable driving experience. Hundreds of thousands of miles of driving tests are required to achieve a well-calibrated control system that is capable of operating an autonomous vehicle in an uncertain traffic environment, where multiple interactions between vehicles and drivers occur simultaneously. Traffic simulators in which these interactions can be modeled with reasonable fidelity can decrease the time and effort needed to develop autonomous driving control algorithms by providing a venue where acceptable initial control calibrations can be achieved quickly and safely before actual road tests. In this paper, we present a game-theoretic traffic model that can be used to 1) test and compare various autonomous vehicle decision and control systems and 2) calibrate the parameters of an existing control system. We demonstrate two example case studies: in the first, we test and quantitatively compare two autonomous vehicle control systems in terms of their safety and performance; in the second, we optimize the parameters of an autonomous vehicle control system using the proposed traffic model and simulation environment.
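As a toy analogue of the second case study (calibration), the sketch below uses entirely invented dynamics: a proportional car-following controller is tuned by grid search against a simulated lead vehicle, trading off a safety proxy (minimum gap) against a comfort proxy (mean acceleration):

def simulate(gain, steps=600, dt=0.1):
    """Follow a 20 m/s lead vehicle with a proportional gap controller;
    return (min_gap, mean |accel|) as safety and comfort proxies."""
    lead_pos, ego_pos, ego_v, target_gap = 50.0, 0.0, 15.0, 30.0
    min_gap, accel_sum = float("inf"), 0.0
    for _ in range(steps):
        lead_pos += 20.0 * dt
        gap = lead_pos - ego_pos
        # P-term on gap error plus a fixed relative-speed damping term:
        accel = gain * (gap - target_gap) + 0.8 * (20.0 - ego_v)
        accel = max(-3.0, min(2.0, accel))  # actuator limits
        ego_v = max(0.0, ego_v + accel * dt)
        ego_pos += ego_v * dt
        min_gap = min(min_gap, gap)
        accel_sum += abs(accel)
    return min_gap, accel_sum / steps

def cost(gain):
    min_gap, discomfort = simulate(gain)
    return (1e3 if min_gap < 5.0 else 0.0) + discomfort  # penalize near-collisions

best = min([0.05, 0.1, 0.2, 0.5, 1.0], key=cost)
print("calibrated gain:", best)

A real calibration would replace the toy plant with the game-theoretic traffic model, in which surrounding drivers react strategically rather than driving at constant speed.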
Multimodal integration is an important process in perceptual decision-making. In humans, this process has often been shown to be statistically optimal, or near optimal: sensory information is combined in a fashion that minimises the average error in the perceptual representation of stimuli. However, the optimisation sometimes carries costs, which manifest as illusory percepts. We review audio-visual facilitations and illusions that are products of multisensory integration, and the computational models that account for these phenomena. In particular, the same optimal computational model can lead to illusory percepts, and we argue that further studies are needed to detect and mitigate these illusions, as artefacts, in artificial cognitive systems. We provide cautionary considerations for designing artificial cognitive systems with a view to avoiding such artefacts. Finally, we suggest avenues of research towards solutions to potential pitfalls in system design. We conclude that a detailed understanding of multisensory integration and of the mechanisms behind audio-visual illusions can benefit the design of artificial cognitive systems.
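The "statistically optimal" combination referred to here is, in the standard models (not a formula quoted from this paper), maximum-likelihood cue combination: with independent Gaussian noise on the auditory and visual estimates, the fused estimate weights each modality by its reliability (inverse variance), and its variance is never worse than that of the better cue:

\hat{s}_{AV} = w_A \hat{s}_A + w_V \hat{s}_V, \qquad
w_A = \frac{1/\sigma_A^2}{1/\sigma_A^2 + 1/\sigma_V^2}, \quad
w_V = \frac{1/\sigma_V^2}{1/\sigma_A^2 + 1/\sigma_V^2}, \qquad
\sigma_{AV}^2 = \frac{\sigma_A^2 \, \sigma_V^2}{\sigma_A^2 + \sigma_V^2}

The same rule accounts for illusions such as the ventriloquist effect: when vision is far more reliable (\sigma_V \ll \sigma_A), the fused percept of a sound's location is captured by the visual cue.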
Fadi Badra (2008)
The Semantic Web is becoming more and more of a reality, as the required technologies have reached an appropriate level of maturity. At this stage, however, it is important to provide tools that facilitate the use and deployment of these technologies by end users. In this paper, we describe EdHibou, an automatically generated, ontology-based graphical user interface that integrates into a semantic portal. The particularity of EdHibou is that it makes use of OWL reasoning capabilities to provide intelligent features, such as decision support, on top of the underlying ontology. We present an application of EdHibou to medical decision support based on a formalization of clinical guidelines in OWL, and show how it can be customized thanks to an ontology of graphical components.
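EdHibou's own API is not shown in this summary; as an illustration of the general mechanism (deriving a form-style UI from an OWL ontology), the sketch below uses the owlready2 Python library with a hypothetical ontology URL, enumerating classes and data properties from which input widgets could be generated:

from owlready2 import get_ontology, sync_reasoner

# Hypothetical location; EdHibou worked over clinical-guideline ontologies.
onto = get_ontology("http://example.org/guidelines.owl").load()

with onto:
    sync_reasoner()  # classify the ontology with an OWL reasoner

# One input widget per data property whose domain includes the class:
for cls in onto.classes():
    fields = [prop.name for prop in onto.data_properties()
              if cls in prop.domain]
    print(cls.name, "->", fields)

Running the reasoner first is what lets inferred (rather than merely asserted) class memberships drive the interface, which is the "intelligent features" aspect the abstract mentions.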
