
Explicit behaviors affected by a driver's trust in a driving automation system

Added by HaiLong Liu
Publication date: 2019
Language: English





As various driving automation systems (DAS) become common in vehicles, over-trust in the DAS may put the driver at risk. To prevent over-trust while driving, the driver's trust state should be recognized. However, the variables that describe the trust state are not distinct. This paper assumes that a driver's outward expressions can represent his/her trust state, and treats explicit behaviors while driving with the DAS as those outward expressions. In the experiment, a driving simulator with a driver monitoring system was used to simulate a vehicle with adaptive cruise control (ACC) and to observe the driver's motion information. Results show that when the driver completely trusted the ACC: 1) participants were likely to keep their feet far away from the pedals; and 2) the driver's operational intervention was delayed in dangerous situations. In future work, a machine learning model will be tried to predict the trust state from the driver's motion information.
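The two reported cues (feet far from the pedals, delayed intervention) suggest how a recognizer of the trust state could be built from motion information. As a hedged illustration only, not the paper's model, here is a minimal rule-based sketch; the feature names and thresholds are assumptions:

```python
# Hypothetical sketch: inferring a driver's trust state from two motion cues
# reported in the abstract. Thresholds are illustrative assumptions, not
# values measured in the study.

def classify_trust(foot_pedal_distance_cm, intervention_delay_s):
    """Return 'over-trust' when both cues are present: feet kept far from
    the pedals and a delayed operational intervention."""
    FOOT_FAR_CM = 20.0   # assumed threshold for "feet far from the pedals"
    DELAY_SLOW_S = 1.5   # assumed threshold for a delayed intervention
    far = foot_pedal_distance_cm > FOOT_FAR_CM
    slow = intervention_delay_s > DELAY_SLOW_S
    if far and slow:
        return "over-trust"
    if not far and not slow:
        return "calibrated"
    return "uncertain"

print(classify_trust(30.0, 2.0))  # → over-trust
print(classify_trust(10.0, 0.8))  # → calibrated
```

A learned model, as the authors propose, would replace the hand-set thresholds with parameters fitted to the monitored motion data.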



Related research

Levels one to three of driving automation systems (DAS) are spreading fast. However, as DAS functions become more sophisticated, not only will drivers' driving skills decline, but the problem of over-trust will also become serious. A driver who over-trusts the DAS will fail to notice hazards in time. To prevent drivers' over-trust in the DAS, this paper discusses the following: 1) the definition of over-trust in the DAS, 2) a hypothesis about the conditions and process under which over-trust in the DAS arises, and 3) a driving behavior model based on trust in the DAS, the risk homeostasis theory, and an over-trust prevention human-machine interface.
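One way to read the risk homeostasis component of such a behavior model is as a feedback loop: the driver adapts behavior so that perceived risk tracks a target level, and over-trust in the DAS lowers perceived risk, inviting riskier behavior. A minimal sketch of that loop, with purely illustrative parameter values not taken from the paper:

```python
# Hedged sketch of a risk-homeostasis-style behavior rule: when perceived
# risk falls below the driver's target risk level, the driver compensates
# by driving faster (and vice versa). All parameters are assumptions.

def adjust_speed(speed, perceived_risk, target_risk=0.5, gain=10.0):
    """One adaptation step: speed change is proportional to the gap
    between target risk and perceived risk."""
    return speed + gain * (target_risk - perceived_risk)

calibrated = adjust_speed(25.0, perceived_risk=0.5)  # no gap, speed unchanged
over_trust = adjust_speed(25.0, perceived_risk=0.2)  # risk felt too low, speeds up
print(calibrated, over_trust)
```

In this reading, an over-trust prevention interface works by raising the driver's perceived risk back toward a level that matches the actual situation.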
A. Koegel, C. Furet, T. Suzuki (2021)
The Thinking Wave is an ongoing development of visualization concepts showing the real-time effort and confidence of semi-autonomous vehicle (AV) systems. Offering drivers access to this information can inform their decision making and enable them to handle situations accordingly, taking over when necessary. Two visualizations have been designed. Concept one, Tidal, demonstrates the AV system's effort through the intensified activity of a simple graphic that fluctuates in speed and frequency. Concept two, Tandem, displays the effort of the AV system as well as the handling dynamic and shared responsibility between the driver and the vehicle system. Working collaboratively with mobility research teams at the University of Tokyo, we are prototyping and refining the Thinking Wave and its embodiments as we work towards building a testable version integrated into a driving simulator. The development of the Thinking Wave aims to calibrate trust by increasing the driver's knowledge and understanding of the vehicle's handling capacity. By enabling transparent communication of the AV system's capacity, we hope to empower AV-skeptic drivers and keep over-trusting drivers alert to possible emergency takeover situations, in order to create a safer autonomous driving experience.
Objective: We examine how human operators adjust their trust in automation as a result of their moment-to-moment interaction with it. Background: Most existing studies measured trust by administering questionnaires at the end of an experiment; only a limited number viewed trust as a dynamic variable that can strengthen or decay over time. Method: Seventy-five participants took part in an aided memory recognition task. Participants viewed a series of images and later performed 40 trials of a recognition task to identify a target image presented alongside a distractor. In each trial, participants performed the initial recognition by themselves, received a recommendation from an automated decision aid, and then performed the final recognition. After each trial, participants reported their trust on a visual analog scale. Results: Outcome bias and the contrast effect significantly influence human operators' trust adjustments. An automation failure leads to a larger trust decrement if the final outcome is undesirable, and a marginally larger trust decrement if the human operator succeeds at the task by him-/herself. An automation success engenders a greater trust increment if the human operator fails the task. Additionally, automation failures have a larger effect on trust adjustment than automation successes. Conclusion: Human operators adjust their trust in automation as a result of their moment-to-moment interaction with it, and their trust adjustments are significantly influenced by decision-making heuristics and biases. Application: Understanding the trust adjustment process enables accurate prediction of an operator's moment-to-moment trust in automation and informs the design of trust-aware adaptive automation.
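The reported asymmetries (failures weigh more than successes, with an extra decrement for an undesirable final outcome) can be expressed as a trial-by-trial update rule. The following is a minimal sketch under an assumed linear update; the gain values are illustrative, not estimated from the study:

```python
# Illustrative sketch of moment-to-moment trust dynamics on a 0..1 scale.
# The asymmetric gains mirror the abstract's qualitative findings only;
# the numeric values are assumptions.

def update_trust(trust, automation_correct, outcome_desirable):
    GAIN_SUCCESS = 0.05     # trust increment after an automation success
    LOSS_FAILURE = 0.10     # failures weigh more than successes (asymmetry)
    OUTCOME_PENALTY = 0.05  # extra decrement when the final outcome is bad
    if automation_correct:
        trust += GAIN_SUCCESS
    else:
        trust -= LOSS_FAILURE
        if not outcome_desirable:
            trust -= OUTCOME_PENALTY
    return min(1.0, max(0.0, trust))  # clamp to the reporting scale

t = 0.5
t = update_trust(t, automation_correct=False, outcome_desirable=False)  # ~0.35
t = update_trust(t, automation_correct=True, outcome_desirable=True)    # ~0.40
print(round(t, 2))
```

A trust-aware adaptive automation system would fit such gains to the per-trial visual-analog-scale reports rather than fixing them by hand.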
Autonomous driving systems have a pipeline of perception, decision, planning, and control. The decision module processes information from the perception module and directs the execution of the downstream planning and control modules. The recent success of deep learning suggests that this pipeline could be replaced by end-to-end neural control policies; however, safety cannot be well guaranteed for data-driven neural networks. In this work, we propose a hybrid framework that learns neural decisions within the classical modular pipeline through end-to-end imitation learning. This hybrid framework preserves the merits of the classical pipeline, such as the strict enforcement of physical and logical constraints, while learning complex driving decisions from data. To circumvent the ambiguous annotation of human driving decisions, our method learns high-level driving decisions by imitating low-level control behaviors. We show in simulation experiments that our modular driving agent can generalize its driving decisions and control to various complex scenarios where rule-based programs fail, and that it generates smoother and safer driving trajectories than end-to-end neural policies.
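The hybrid idea can be sketched in a few lines: a learned module proposes only the high-level decision, while a classical downstream module executes it and enforces hard constraints. This is a minimal illustration, not the paper's implementation; all names and limits below are assumptions:

```python
# Minimal sketch of a hybrid pipeline: a stand-in "learned" decision module
# picks a maneuver, and a classical control module executes it while strictly
# enforcing a physical constraint. Names and limits are illustrative.

SPEED_LIMIT = 30.0  # m/s, hard constraint kept by the classical module

def neural_decision(obstacle_ahead):
    # Stand-in for a policy trained by imitating low-level control behaviors.
    return "lane_change" if obstacle_ahead else "keep_lane"

def control(decision, current_speed):
    # Classical module: realizes the decision but clamps to the constraint,
    # so no learned output can violate the speed limit.
    target = current_speed + (0.0 if decision == "lane_change" else 2.0)
    return decision, min(target, SPEED_LIMIT)

print(control(neural_decision(obstacle_ahead=True), 29.0))
```

The point of the split is that even an erroneous learned decision cannot bypass the constraints encoded in the classical planning and control modules.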
Recent developments in advanced driving assistance systems (ADAS) that rely on some level of autonomy have led the automobile industry and research community to investigate the impact they might have on driving performance. However, most of the research performed so far is based on simulated environments. In this study, we investigated the behavior of drivers in a vehicle with automated driving system (ADS) capabilities in a real-life driving scenario. We analyzed their response to a take-over request (TOR) at two different driving speeds while they were engaged in non-driving-related tasks (NDRT). Results from the experiments showed that driver reaction time to a TOR, gaze behavior, and self-reported trust in automation were affected by the type of NDRT being performed concurrently, and that driver reaction time and gaze behavior additionally depended on the vehicle speed at the time of the TOR.