
An Automated Vehicle (AV) like Me? The Impact of Personality Similarities and Differences between Humans and AVs

Published by: Lionel Robert
Publication date: 2019
Research field: Informatics Engineering
Language: English





To better understand the impacts of similarities and dissimilarities between human and AV personalities, we conducted an experimental study with 443 individuals. Generally, similarity between human and AV personalities led to a higher perception of AV safety only when both were high in specific personality traits. Dissimilarity between human and AV personalities also yielded a higher perception of AV safety, but only when the AV was higher than the human in a particular personality trait.




Read also

Personality has been identified as a vital factor in understanding the quality of human-robot interactions. Despite this, research in this area remains fragmented and lacks a coherent framework, which makes it difficult to understand what we know and to identify what we do not. As a result, our knowledge of personality in human-robot interactions has not kept pace with the deployment of robots in organizations or in our broader society. To address this shortcoming, this paper reviews 83 articles and 84 separate studies to assess the current state of human-robot personality research. This review: (1) highlights major thematic research areas, (2) identifies gaps in the literature, (3) derives and presents major conclusions from the literature, and (4) offers guidance for future research.
Explanations given by automation are often used to promote automation adoption. However, it remains unclear whether explanations promote acceptance of automated vehicles (AVs). In this study, we conducted a within-subject experiment in a driving simulator with 32 participants, using four different conditions: (1) no explanation, (2) an explanation given before the AV acted, (3) an explanation given after the AV acted, and (4) the option for the driver to approve or disapprove the AV's action after hearing the explanation. We examined four AV outcomes: trust, preference for the AV, anxiety, and mental workload. Results suggest that explanations provided before the AV acted were associated with higher trust in and preference for the AV, but there was no difference in anxiety or workload. These results have important implications for the adoption of AVs.
In conditionally automated driving, drivers have difficulty in takeover transitions as they become increasingly decoupled from the operational level of driving. Factors influencing takeover performance, such as takeover lead time and the engagement of non-driving-related tasks, have been studied in the past. However, despite the important role emotions play in human-machine interaction and in manual driving, little is known about how emotions influence drivers' takeover performance. This study therefore examined the effects of emotional valence and arousal on drivers' takeover timeliness and quality in conditionally automated driving. We conducted a driving simulation experiment with 32 participants. Movie clips were played for emotion induction. Participants with different levels of emotional valence and arousal were required to take over control from automated driving, and their takeover time and quality were analyzed. Results indicate that positive valence led to better takeover quality in the form of a smaller maximum resulting acceleration and a smaller maximum resulting jerk. However, high arousal did not yield an advantage in takeover time. This study contributes to the literature by demonstrating how emotional valence and arousal affect takeover performance. The benefits of positive emotions carry over from manual driving to conditionally automated driving, while the benefits of arousal do not.
Machine learning models are increasingly integrated into societally critical applications such as recidivism prediction and medical diagnosis, thanks to their superior predictive power. In these applications, however, full automation is often not desired due to ethical and legal concerns. The research community has thus ventured into developing interpretable methods that explain machine predictions. While these explanations are meant to assist humans in understanding machine predictions and thereby allow humans to make better decisions, this hypothesis is not supported in many recent studies. To improve human decision-making with AI assistance, we propose future directions for closing the gap between the efficacy of explanations and improvement in human performance.
We have revisited the electronic structure of infinite-layer RNiO$_2$ (R = La, Nd) in light of the recent discovery of superconductivity in Sr-doped NdNiO$_2$. From a comparison to their cuprate counterpart CaCuO$_2$, we derive essential facts related to their electronic structures, in particular the values of various hopping parameters and energy splittings, and the influence of the spacer cation. From this detailed comparison, we comment on expectations with regard to superconductivity. In particular, both materials exhibit a large ratio of longer-range hopping to near-neighbor hopping, which should be conducive to superconductivity.
