Levels 1 to 3 driving automation systems (DAS) are spreading fast. However, as DAS functions become more and more sophisticated, not only will drivers' driving skills decline, but the problem of over-trust will also become serious. A driver who over-trusts the DAS may fail to notice hazards in time. To prevent drivers' over-trust in the DAS, this paper discusses the following: 1) the definition of over-trust in the DAS, 2) a hypothesis about the conditions and process under which over-trust in the DAS arises, and 3) a driving behavior model based on trust in the DAS, the risk homeostasis theory, and an over-trust prevention human-machine interface.
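To make the third item concrete, the sketch below illustrates one way a risk-homeostasis-based behavior model could be expressed: the driver adjusts vigilance to keep perceived risk near a target level, while trust in the DAS discounts perceived risk and so drives over-reliance. The discounting rule and all constants are our assumptions, not the paper's model.

```python
# Hypothetical sketch (not the authors' model): risk homeostasis with
# trust-discounted risk perception.

def perceived_risk(actual_risk: float, trust: float) -> float:
    """Assumption: trust in the DAS discounts the risk the driver perceives."""
    return actual_risk * (1.0 - trust)

def update_vigilance(vigilance: float, actual_risk: float, trust: float,
                     target_risk: float = 0.3, gain: float = 0.5) -> float:
    """Risk homeostasis: raise vigilance when perceived risk exceeds the
    target level, relax it when perceived risk falls below the target."""
    error = perceived_risk(actual_risk, trust) - target_risk
    return min(1.0, max(0.0, vigilance + gain * error))

# Over-trust scenario: high trust masks a genuinely risky situation,
# so vigilance decays even though the actual risk stays constant.
vigilance, trust, actual_risk = 0.8, 0.9, 0.5
for step in range(5):
    vigilance = update_vigilance(vigilance, actual_risk, trust)
    print(f"step {step}: vigilance = {vigilance:.2f}")
```

In this toy loop an over-trust prevention interface would act by restoring the driver's perceived risk (raising the effective `actual_risk * (1 - trust)` term) before vigilance decays too far.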
As driving automation systems (DAS) become common in vehicles, over-trust in the DAS may put the driver at risk. To prevent over-trust while driving, the driver's trust state should be recognized. However, the variables that describe the trust state are not well defined. This paper assumes that a driver's outward expressions can represent his/her trust state, and treats explicit behaviors while driving with the DAS as those outward expressions. In the experiment, a driving simulator with a driver monitoring system was used to simulate a vehicle with adaptive cruise control (ACC) and to observe the driver's motion information. Results show that when drivers completely trusted the ACC, 1) participants tended to keep their feet far from the pedals, and 2) their operational interventions were delayed in dangerous situations. In future work, a machine learning model will be applied to predict the trust state from the driver's motion information.
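As a rough illustration of the proposed future work, the following sketch trains a classifier on synthetic motion features inspired by the reported findings (foot-to-pedal distance, reaction time, gaze behavior); the data, the feature set, and the choice of logistic regression are all assumptions, since the paper leaves the model unspecified.

```python
# Illustrative sketch only: predicting an "over-trusting" state from
# driver motion features. All data below is synthetic.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
n = 200
# Columns: foot-to-pedal distance (m), brake reaction time (s),
# fraction of time gaze is off the road.
X = np.column_stack([
    rng.normal(0.15, 0.05, n),
    rng.normal(1.2, 0.4, n),
    rng.uniform(0.0, 1.0, n),
])
# Synthetic labels: 1 = "over-trusting" when feet are far from the
# pedals AND reactions are slow, mirroring the reported observations.
y = ((X[:, 0] > 0.15) & (X[:, 1] > 1.2)).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
clf = LogisticRegression().fit(X_tr, y_tr)
print("held-out accuracy:", accuracy_score(y_te, clf.predict(X_te)))
```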
The Thinking Wave is an ongoing development of visualization concepts showing the real-time effort and confidence of semi-autonomous vehicle (AV) systems. Offering drivers access to this information can inform their decision making and enable them to handle the situation accordingly and take over when necessary. Two different visualizations have been designed. Concept one, Tidal, demonstrates the AV system's effort through intensified activity of a simple graphic that fluctuates in speed and frequency. Concept two, Tandem, displays the effort of the AV system as well as the handling dynamic and shared responsibility between the driver and the vehicle system. Working collaboratively with mobility research teams at the University of Tokyo, we are prototyping and refining the Thinking Wave and its embodiments as we work towards building a testable version integrated into a driving simulator. The development of the Thinking Wave aims to calibrate trust by increasing the driver's knowledge and understanding of the vehicle's handling capacity. By enabling transparent communication of the AV system's capacity, we hope to empower AV-skeptic drivers and keep over-trusting drivers alert in the case of an emergency takeover situation, in order to create a safer autonomous driving experience.
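A minimal sketch of how the Tidal concept might be rendered, assuming the wave's frequency and amplitude scale with the system's momentary effort; the mapping constants are illustrative, not the authors' design values.

```python
# Assumed rendering of the "Tidal" concept: a frequency-modulated wave
# whose speed and activity intensify with the AV system's effort.
import numpy as np
import matplotlib.pyplot as plt

def thinking_wave(effort: np.ndarray, t: np.ndarray) -> np.ndarray:
    """effort in [0, 1]; higher effort -> faster, larger fluctuations."""
    freq = 0.5 + 4.0 * effort                 # Hz, assumed mapping
    amp = 0.2 + 0.8 * effort                  # amplitude, assumed mapping
    dt = t[1] - t[0]
    phase = 2 * np.pi * np.cumsum(freq) * dt  # integrate frequency over time
    return amp * np.sin(phase)

t = np.linspace(0, 10, 2000)
effort = 0.2 + 0.6 * (t > 5)   # effort jumps when the scene gets harder
plt.plot(t, thinking_wave(effort, t))
plt.xlabel("time (s)")
plt.ylabel("wave displacement")
plt.show()
```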
Recent developments in advanced driving assistance systems (ADAS) that rely on some level of autonomy have led the automobile industry and research community to investigate the impact they might have on driving performance. However, most of the research performed so far is based on simulated environments. In this study, we investigated the behavior of drivers in a vehicle with automated driving system (ADS) capabilities in a real-life driving scenario. We analyzed their response to a takeover request (TOR) at two different driving speeds while they were engaged in non-driving-related tasks (NDRTs). Results from the experiments showed that driver reaction time to a TOR, gaze behavior, and self-reported trust in automation were affected by the type of NDRT being concurrently performed; reaction time and gaze behavior additionally depended on the vehicle speed at the time of the TOR.
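For readers unfamiliar with this kind of factorial design, the sketch below shows the shape of a TOR reaction-time analysis with NDRT type and vehicle speed as factors; the data are synthetic and the effect sizes are assumed, not the study's results.

```python
# Illustrative sketch only: a 2 (NDRT) x 2 (speed) reaction-time summary.
import numpy as np
import pandas as pd

rng = np.random.default_rng(1)
df = pd.DataFrame({
    "ndrt": rng.choice(["visual task", "auditory task"], 120),
    "speed_kmh": rng.choice([40, 60], 120),
})
# Assumed effect structure: visual NDRTs and higher speeds slow the response.
df["tor_reaction_s"] = (1.0
                        + 0.4 * (df["ndrt"] == "visual task")
                        + 0.2 * (df["speed_kmh"] == 60)
                        + rng.normal(0, 0.15, len(df)))
print(df.groupby(["ndrt", "speed_kmh"])["tor_reaction_s"].agg(["mean", "std"]))
```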
The purpose of this paper is to develop a shared control takeover strategy for a smooth and safe control transition from an automated driving system to the human driver, and to demonstrate its positive impacts on drivers' behavior and attitudes. A human-in-the-loop driving simulator experiment was conducted to evaluate the impact of the proposed shared control takeover strategy under different disengagement conditions. Results from thirty-two drivers showed that the shared control takeover strategy could improve safety performance at the aggregate level, especially for non-driving-related disengagements. For more urgent disengagements caused by another vehicle's sudden braking, the shared control strategy enlarged individual differences. The primary reason is that some drivers reported higher mental workloads in response to the shared control takeover strategy. Therefore, shared control between driver and automation should be paired with driver training to avoid mental overload when developing takeover strategies.
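The abstract does not specify the control law, but a common way to realize shared control during a takeover is linear authority blending between the driver's and the automation's commands; the sketch below assumes a simple time-based ramp, which is our illustration rather than the authors' strategy.

```python
# Minimal sketch of linear authority blending during a takeover transition.
def blended_steering(u_driver: float, u_automation: float, t_since_tor: float,
                     handover_duration: float = 3.0) -> float:
    """Ramp control authority from the automation to the driver over
    `handover_duration` seconds after the takeover request (TOR)."""
    alpha = min(1.0, max(0.0, t_since_tor / handover_duration))  # driver share
    return alpha * u_driver + (1.0 - alpha) * u_automation

# During the transition, the command moves smoothly between the two inputs:
for t in [0.0, 1.5, 3.0]:
    print(t, blended_steering(u_driver=0.2, u_automation=-0.1, t_since_tor=t))
```

A smoother ramp or a workload-dependent `handover_duration` would be natural variants; the paper's workload finding suggests the blending schedule itself can add mental load for some drivers.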
Objective: We examine how human operators adjust their trust in automation as a result of their moment-to-moment interaction with automation. Background: Most existing studies measured trust by administering questionnaires at the end of an experiment. Only a limited number of studies viewed trust as a dynamic variable that can strengthen or decay over time. Method: Seventy-five participants took part in an aided memory recognition task. In the task, participants viewed a series of images and later performed 40 trials of the recognition task, identifying a target image presented alongside a distractor. In each trial, participants performed the initial recognition by themselves, received a recommendation from an automated decision aid, and performed the final recognition. After each trial, participants reported their trust on a visual analog scale. Results: Outcome bias and the contrast effect significantly influence human operators' trust adjustments. An automation failure leads to a larger trust decrement if the final outcome is undesirable, and a marginally larger trust decrement if the human operator succeeds at the task by him-/herself. An automation success engenders a greater trust increment if the human operator fails the task. Additionally, automation failures have a larger effect on trust adjustment than automation successes. Conclusion: Human operators adjust their trust in automation as a result of their moment-to-moment interaction with automation. Their trust adjustments are significantly influenced by decision-making heuristics and biases. Application: Understanding the trust adjustment process enables accurate prediction of an operator's moment-to-moment trust in automation and informs the design of trust-aware adaptive automation.
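One way to read the Results is as an asymmetric trust-update rule applied after each trial; the sketch below encodes the reported directional effects, though the functional form and step sizes are assumptions rather than the paper's fitted model.

```python
# Sketch of an asymmetric per-trial trust update consistent with the
# reported findings; the constants are assumed, not fitted values.
def update_trust(trust: float, automation_correct: bool, human_correct: bool,
                 outcome_desirable: bool) -> float:
    if automation_correct:
        step = 0.05                # base increment for a success
        if not human_correct:
            step += 0.03           # contrast effect: aid succeeded where human failed
    else:
        step = -0.10               # failures weigh more than successes
        if not outcome_desirable:
            step -= 0.05           # outcome bias: worse final result, larger decrement
        if human_correct:
            step -= 0.02           # marginally larger decrement when the human was right
    return min(1.0, max(0.0, trust + step))

trust = 0.5
trust = update_trust(trust, automation_correct=False, human_correct=True,
                     outcome_desirable=False)
print(f"trust after a costly automation failure: {trust:.2f}")
```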