
Toward quantifying trust dynamics: How people adjust their trust after moment-to-moment interaction with automation

 Added by X. Jessie Yang
Publication date: 2021
Research language: English





Objective: We examine how human operators adjust their trust in automation as a result of their moment-to-moment interaction with automation. Background: Most existing studies measured trust by administering questionnaires at the end of an experiment. Only a limited number of studies viewed trust as a dynamic variable that can strengthen or decay over time. Method: Seventy-five participants took part in an aided memory recognition task. In the task, participants viewed a series of images and later performed 40 trials of the recognition task to identify a target image when it was presented with a distractor. In each trial, participants performed the initial recognition by themselves, received a recommendation from an automated decision aid, and then performed the final recognition. After each trial, participants reported their trust on a visual analog scale. Results: Outcome bias and the contrast effect significantly influence human operators' trust adjustments. An automation failure leads to a larger trust decrement if the final outcome is undesirable, and a marginally larger trust decrement if the human operator succeeds at the task by him-/herself. An automation success engenders a greater trust increment if the human operator fails the task. Additionally, automation failures have a larger effect on trust adjustment than automation successes. Conclusion: Human operators adjust their trust in automation as a result of their moment-to-moment interaction with automation. Their trust adjustments are significantly influenced by decision-making heuristics/biases. Application: Understanding the trust adjustment process enables accurate prediction of the operator's moment-to-moment trust in automation and informs the design of trust-aware adaptive automation.
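For readers who want the reported pattern in concrete form, the sketch below implements a hypothetical trial-level trust-update rule that mirrors the qualitative effects above (outcome bias, the contrast effect, and the asymmetry between failures and successes). The function name and all numeric gains are illustrative assumptions, not parameters estimated in the study.

```python
# Illustrative sketch (not the paper's fitted model): a trial-level trust-update
# rule that mirrors the qualitative findings reported above. All gain values
# below are hypothetical placeholders chosen only for illustration.

def update_trust(trust, automation_correct, human_initial_correct, final_correct):
    """Return trust (0-1 scale, as on a visual analog scale) after one trial."""
    if automation_correct:
        delta = 0.03                      # baseline increment for an automation success
        if not human_initial_correct:
            delta += 0.02                 # contrast effect: success helps more if the human failed
    else:
        delta = -0.08                     # failures weigh more than successes
        if not final_correct:
            delta -= 0.04                 # outcome bias: larger decrement if the final outcome is bad
        elif human_initial_correct:
            delta -= 0.01                 # marginal extra decrement if the human succeeded alone
    return min(1.0, max(0.0, trust + delta))


# Example: trust drops sharply after an automation failure with a bad final outcome.
trust = 0.7
trust = update_trust(trust, automation_correct=False,
                     human_initial_correct=False, final_correct=False)
print(round(trust, 2))  # 0.58 with the placeholder gains above
```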

Related research

Levels one to three of driving automation systems (DAS) are spreading fast. However, as DAS functions become more and more sophisticated, not only will drivers' driving skills degrade, but the problem of over-trust will also become more serious. If a driver over-trusts the DAS, he/she may fail to notice hazards in time. To prevent drivers' over-trust in the DAS, this paper discusses the following: 1) the definition of over-trust in the DAS, 2) a hypothesis about the occurrence conditions and occurrence process of over-trust in the DAS, and 3) a driving behavior model based on trust in the DAS, the risk homeostasis theory, and an over-trust prevention human-machine interface.
As various driving automation systems (DAS) become commonly used in vehicles, over-trust in the DAS may put the driver at risk. To prevent over-trust while driving, the trust state of the driver should be recognized. However, variables that describe the trust state are not clearly defined. This paper assumes that a driver's outward expressions can represent his/her trust state, and that explicit behaviors when driving with a DAS are such outward expressions. In the experiment, a driving simulator with a driver monitoring system was used to simulate a vehicle with adaptive cruise control (ACC) and to observe the motion information of the driver. Results show that when the driver completely trusted the ACC, 1) participants were likely to put their feet far away from the pedals, and 2) the driver's operational intervention was delayed in dangerous situations. In the future, a machine learning model will be trained to predict the trust state from the motion information of the driver.
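The sketch below illustrates that proposed future direction of predicting trust state from motion information. The feature names (foot-to-pedal distance, intervention delay), the labels, and the logistic-regression classifier are assumptions for illustration only; the paper reports the observed behaviors but no trained model.

```python
# Minimal sketch of the proposed future direction: classify the driver's trust
# state from motion features. Feature names and the classifier choice are
# assumptions for illustration; the paper only reports the observed behaviors.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical features per driving segment:
#   foot_pedal_dist_cm   - average distance between the foot and the pedals
#   intervention_delay_s - reaction delay when a hazard appears
X = np.array([
    [5.0, 0.6],    # foot near pedals, fast intervention  -> not over-trusting (0)
    [8.0, 0.8],
    [30.0, 2.1],   # foot far from pedals, slow intervention -> over-trusting (1)
    [35.0, 2.5],
])
y = np.array([0, 0, 1, 1])

clf = LogisticRegression().fit(X, y)
print(clf.predict([[28.0, 1.9]]))  # likely classified as over-trusting
```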
Trust is a multilayered concept with critical relevance when it comes to introducing new technologies. Understanding how humans will interact with complex vehicle systems and preparing for the functional, societal, and psychological aspects of autonomous vehicles' entry into our cities is a pressing concern. Design tools can help calibrate the adequate and affordable level of trust needed for a safe and positive experience. This study focuses on passenger interactions capable of enhancing system trustworthiness and data accuracy in future shared public transportation.
Trust in robots has been gathering attention from multiple directions, as it has special relevance in theoretical descriptions of human-robot interactions. It is essential for reaching high acceptance and usage rates of robotic technologies in society, as well as for enabling effective human-robot teaming. Researchers have been trying to model the development of trust in robots to improve the overall rapport between humans and robots. Unfortunately, the miscalibration of trust in automation is a common issue that jeopardizes the effectiveness of automation use. It happens when a user's trust levels are not appropriate to the capabilities of the automation being used. Users can be under-trusting the automation -- when they do not use the functionalities that the machine can perform correctly because of a lack of trust -- or over-trusting the automation -- when, due to an excess of trust, they use the machine in situations where its capabilities are not adequate. The main objective of this work is to examine drivers' trust development in the automated driving system (ADS). We aim to model how risk factors (e.g., false alarms and misses from the ADS) and the short-term interactions associated with these risk factors influence the dynamics of drivers' trust in the ADS. The driving context facilitates the instrumentation to measure trusting behaviors, such as drivers' eye movements and usage time of the automated features. Our findings indicate that a reliable characterization of drivers' trusting behaviors and a consequent estimation of trust levels is possible. We expect that these techniques will permit the design of ADSs able to adapt their behaviors to attempt to adjust drivers' trust levels. This capability could avoid under- and over-trusting, which could harm drivers' safety or performance.
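As an illustration of how such trusting behaviors could be quantified from interaction logs, the sketch below computes two simple proxies: usage time of the automated features and the share of off-road glances. The log fields and record format are hypothetical, not the instrumentation used in the study.

```python
# Sketch of turning raw logs into the two trusting-behavior proxies mentioned
# above: usage time of the automated features and eye-movement allocation.
# The log fields and the example log below are assumptions for illustration.

def trust_proxies(log):
    """log: list of per-second records {'ads_on': bool, 'gaze_on_road': bool}."""
    total = len(log)
    ads_usage_ratio = sum(r["ads_on"] for r in log) / total
    gaze_off_road_ratio = sum(not r["gaze_on_road"] for r in log) / total
    return ads_usage_ratio, gaze_off_road_ratio

# Example: high ADS usage combined with frequent off-road glances could indicate
# higher (possibly over-) trust in the ADS.
log = [{"ads_on": True, "gaze_on_road": i % 3 != 0} for i in range(60)]
print(trust_proxies(log))  # (1.0, ~0.33)
```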
Yaohui Guo, X. Jessie Yang (2020)
Trust in automation, or more recently trust in autonomy, has received extensive research attention in the past two decades. The majority of prior literature adopted a snapshot view of trust and typically evaluated trust through questionnaires administered at the end of an experiment. This snapshot view, however, does not acknowledge that trust is a time-variant variable that can strengthen or decay over time. To fill this research gap, the present study aims to model trust dynamics when a human interacts with a robotic agent over time. The underlying premise of the study is that by interacting with a robotic agent and observing its performance over time, a rational human agent will update his/her trust in the robotic agent accordingly. Based on this premise, we develop a personalized trust prediction model based on the Beta distribution and learn its parameters using Bayesian inference. Our proposed model adheres to three major properties of trust dynamics reported in prior empirical studies. We tested the proposed method using an existing dataset involving 39 human participants interacting with four drones in a simulated surveillance mission. The proposed method obtained a Root Mean Square Error (RMSE) of 0.072, significantly outperforming existing prediction methods. Moreover, we identified three distinctive types of trust dynamics: the Bayesian decision maker, the oscillator, and the disbeliever. This prediction model can be used for the design of individualized and adaptive technologies.
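To make the modeling idea concrete, here is a minimal sketch of a Beta-distribution trust model updated after each interaction, in the spirit of the approach described above. The class name, the additive update rule, and the weights w_s and w_f are assumptions for illustration; the study learns personalized parameters via Bayesian inference rather than fixing them by hand.

```python
# Minimal sketch of a Beta-distribution trust model: trust after interaction t
# is treated as Beta(alpha_t, beta_t), and the parameters are nudged by the
# robot's observed successes and failures. The update weights w_s and w_f are
# hypothetical placeholders, not values fitted in the study.

class BetaTrustModel:
    def __init__(self, alpha=1.0, beta=1.0, w_s=1.0, w_f=2.0):
        self.alpha, self.beta = alpha, beta   # prior pseudo-counts
        self.w_s, self.w_f = w_s, w_f         # weights for successes / failures

    def update(self, success: bool):
        if success:
            self.alpha += self.w_s
        else:
            self.beta += self.w_f
        return self.predict()

    def predict(self):
        # Point prediction of trust = mean of the Beta distribution.
        return self.alpha / (self.alpha + self.beta)

# Example: trust rises with successes and drops more sharply after a failure
# (w_f > w_s), consistent with failures having the larger effect on trust.
model = BetaTrustModel()
print([round(model.update(s), 2) for s in [True, True, False, True]])
```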