
Predicting Demographics, Moral Foundations, and Human Values from Digital Behaviors

Added by Kyriaki Kalimeri
Publication date: 2017
Research language: English





Personal electronic devices, including smartphones, give access to behavioural signals that can be used to learn about the characteristics and preferences of individuals. In this study, we explore the connection between demographic and psychological attributes and digital behavioural records for a cohort of 7,633 people, closely representative of the US population with respect to gender, age, geographical distribution, education, and income. Along with the demographic data, we collected self-reported assessments on validated psychometric questionnaires for moral traits and basic human values, and combined this information with passively collected multi-modal digital data from web browsing behaviour and smartphone usage. A machine learning framework was then designed to infer both the demographic and psychological attributes from the behavioural data. In a cross-validated setting, our models predicted demographic attributes with good accuracy as measured by the weighted AUROC score (Area Under the Receiver Operating Characteristic curve), but performed less well for the moral traits and human values. These results call for further investigation, since they are still far from unveiling individuals' psychological fabric. This connection, along with the most predictive features that we provide for each attribute, might prove useful for designing personalised services, communication strategies, and interventions, and can be used to sketch a portrait of people with a similar worldview.
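The evaluation described above can be illustrated with a minimal sketch. This is not the authors' actual pipeline; the synthetic features, the choice of classifier, and the data shapes are all assumptions for demonstration. It only shows the general pattern: obtain out-of-fold predicted probabilities in a cross-validated setting, then score them with the weighted AUROC.

```python
# Illustrative sketch (assumed setup, not the paper's pipeline):
# cross-validated prediction of a binary attribute, scored by weighted AUROC.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import cross_val_predict
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 20))                        # stand-in behavioural features
y = (X[:, 0] + rng.normal(size=500) > 0).astype(int)  # stand-in demographic target

# Out-of-fold predicted probabilities (each sample scored by a model
# that never saw it during training)
proba = cross_val_predict(
    GradientBoostingClassifier(random_state=0), X, y,
    cv=5, method="predict_proba")[:, 1]

# Weighted AUROC: per-class AUROC averaged by class prevalence
# (for a binary target this coincides with the plain AUROC)
auroc = roc_auc_score(y, proba, average="weighted")
print(f"weighted AUROC: {auroc:.2f}")
```

Because the synthetic target here depends on only one feature, the score lands well above the 0.5 chance level; with real behavioural data, performance would vary by attribute as the abstract reports.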


Related research

Music is an essential component in our everyday lives and experiences, as it is a way that we use to express our feelings, emotions, and cultures. In this study, we explore the association between music genre preferences, demographics, and moral values using self-reported data from an online survey administered in Canada. Participants filled in the moral foundations questionnaire and also provided their basic demographic information and music preferences. Here, we predict the moral values of the participants from their musical preferences, employing classification and regression techniques. We also explored the predictive power of features estimated from factor analysis on the music genres, as well as the generalist/specialist (GS) score, which captures the diversity of musical choices for each user. Our results show the importance of music in predicting a person's moral values (.55-.69 AUROC), while knowledge of basic demographic features such as age and gender is enough to increase the performance (.58-.71 AUROC).
How to attribute responsibility for the actions of autonomous artificial intelligence (AI) systems has been widely debated across the humanities and social science disciplines. This work presents two experiments ($N$=200 each) that measure people's perceptions of eight different notions of moral responsibility concerning AI and human agents in the context of bail decision-making. Using vignettes adapted from real life, our experiments show that AI agents are held causally responsible and blamed similarly to human agents for an identical task. However, there was a meaningful difference in how people perceived these agents' moral responsibility: human agents were ascribed a higher degree of present-looking and forward-looking notions of responsibility than AI agents. We also found that people expect both AI and human decision-makers and advisors to justify their decisions regardless of their nature. We discuss policy and HCI implications of these findings, such as the need for explainable AI in high-stakes scenarios.
Autonomous Vehicles (AVs) raise important social and ethical concerns, especially about accountability, dignity, and justice. We focus on the specific concerns arising from how AV technology will affect the lives and livelihoods of professional and semi-professional drivers. Whereas previous studies of such concerns have focused on the opinions of experts, we seek to understand these ethical and societal challenges from the perspectives of the drivers themselves. To this end, we adopted a qualitative research methodology based on semi-structured interviews. This is an established social science methodology that helps understand the core concerns of stakeholders in depth by avoiding the biases of superficial methods such as surveys. We find that whereas drivers agree with the experts that AVs will significantly impact transportation systems, they are apprehensive about the prospects for their livelihoods and dismiss the suggestions that driving jobs are unsatisfying and their profession does not merit protection. By showing how drivers differ from the experts, our study has ramifications beyond AVs to AI and other advanced technologies. Our findings suggest that qualitative research applied to the relevant, especially disempowered, stakeholders is essential to ensuring that new technologies are introduced ethically.
One-shot anonymous unselfishness in economic games is commonly explained by social preferences, which assume that people care about the monetary payoffs of others. However, during the last ten years, research has shown that different types of unselfish behaviour, including cooperation, altruism, truth-telling, altruistic punishment, and trustworthiness, are in fact better explained by preferences for following one's own personal norms - internal standards about what is right or wrong in a given situation. Beyond better organising various forms of unselfish behaviour, this moral preference hypothesis has recently also been used to increase charitable donations, simply by means of interventions that make the morality of an action salient. Here we review experimental and theoretical work dedicated to this rapidly growing field of research, and in doing so we outline mathematical foundations for moral preferences that can be used in future models to better understand selfless human actions and to adjust policies accordingly. These foundations can also be used by artificial intelligence to better navigate the complex landscape of human morality.
Due to their unique persuasive power, language-capable robots must be able both to act in line with human moral norms and to clearly and appropriately communicate those norms. These requirements are complicated by the possibility that humans may ascribe blame differently to humans and robots. In this work, we explore how robots should communicate in moral advising scenarios, in which the norms they are expected to follow (in a moral dilemma scenario) may be different from those their advisees are expected to follow. Our results suggest that, in fact, both humans and robots are judged more positively when they provide the advice that favors the common good over an individual's life. These results raise critical new questions regarding people's moral responses to robots and the design of autonomous moral agents.
