
Human-Data Interaction: The Human Face of the Data-Driven Society

Posted by Hamed Haddadi
Publication date: 2014
Research field: Informatics Engineering
Paper language: English


The increasing generation and collection of personal data has created a complex ecosystem, often collaborative but sometimes combative, around companies and individuals engaging in the use of these data. We propose that the interactions between these agents warrant a new topic of study: Human-Data Interaction (HDI). In this paper we discuss how HDI sits at the intersection of various disciplines, including computer science, statistics, sociology, psychology and behavioural economics. We expose the challenges that HDI raises, organised into three core themes of legibility, agency and negotiability, and we present the HDI agenda to open up a dialogue amongst interested parties in the personal and big data ecosystems.




Read also

In recent years, the amount of information collected about human beings has increased dramatically. This development has been partially driven by individuals posting and storing data about themselves and friends using online social networks or collecting their data for self-tracking purposes (quantified-self movement). Across the sciences, researchers conduct studies collecting data with an unprecedented resolution and scale. Using computational power combined with mathematical models, such rich datasets can be mined to infer underlying patterns, thereby providing insights into human nature. Much of the data collected is sensitive. It is private in the sense that most individuals would feel uncomfortable sharing their collected personal data publicly. For this reason, the need for solutions to ensure the privacy of the individuals generating data has grown alongside the data collection efforts. Out of all the massive data collection efforts, this paper focuses on efforts directly instrumenting human behavior, and notes that -- in many cases -- the privacy of participants is not sufficiently addressed. For example, study purposes are often not explicit, informed consent is ill-defined, and security and sharing protocols are only partially disclosed. This paper provides a survey of the work related to addressing privacy issues in research studies that collect detailed sensor data on human behavior. Reflections on the key problems and recommendations for future work are included. We hope the overview of the privacy-related practices in massive data collection studies can be used as a frame of reference for practitioners in the field. Although focused on data collection in an academic context, we believe that many of the challenges and solutions we identify are also relevant and useful for other domains where massive data collection takes place, including businesses and governments.
High-quality, usable, and effective software is essential for supporting astronomers in the discovery-focused tasks of data analysis and visualisation. As the volume, and perhaps more crucially, the velocity of astronomical data grows, the role of the astronomer is changing. There is now an increased reliance on automated and autonomous discovery and decision-making workflows rather than visual inspection. We assert the need for an improved understanding of how astronomers (humans) currently make visual discoveries from data. This insight is a critical element for the future design, development and effective use of cyber-human discovery systems, where astronomers work in close collaboration with automated systems to gain understanding from continuous, real-time data streams. We discuss how relevant human performance data could be gathered, specifically targeting the domains of expertise and skill at visual discovery, and the identification and management of cognitive factors. By looking to other disciplines where human performance is assessed and measured, we propose four early-stage applications that would: (1) allow astronomers to evaluate, and potentially improve, their own visual discovery skills; (2) support just-in-time coaching; (3) enable talent identification; and (4) result in user interfaces that automatically respond to skill level and cognitive state. Throughout, we advocate for the importance of user studies and the incorporation of participatory design and co-design practices into the planning, implementation and evaluation of alternative user interfaces and visual discovery environments.
Technology is an extremely potent tool that can be leveraged for human development and social good. Owing to the great importance of environment and human psychology in driving human behavior, and the ubiquity of technology in modern life, there is a need to leverage the insights and capabilities of both fields together for nudging people towards a behavior that is optimal in some sense (personal or social). In this regard, the field of persuasive technology, which proposes to infuse technology with appropriate design and incentives using insights from psychology, behavioral economics, and human-computer interaction, holds a lot of promise. Whilst persuasive technology is already being developed and is at play in many commercial applications, it can have great social impact in the field of Information and Communication Technology for Development (ICTD), which uses Information and Communication Technology (ICT) for human developmental ends such as education and health. In this paper we explore what persuasive technology is and how it can be used for the ends of human development. To develop the ideas in a concrete setting, we present a case study outlining how persuasive technology can be used for human development in Pakistan, a developing South Asian country that suffers from many of the problems that plague typical developing countries.
Zeeshan Ahmad, Naimul Khan (2021)
Convolutional Neural Networks (CNNs) are successful deep learning models in the field of computer vision. To get the maximum advantage of a CNN model for Human Action Recognition (HAR) using inertial sensor data, in this paper we use 4 types of spatial domain methods for transforming inertial sensor data to activity images, which are then utilized in a novel fusion framework. These four types of activity images are Signal Images (SI), Gramian Angular Field (GAF) Images, Markov Transition Field (MTF) Images and Recurrence Plot (RP) Images. Furthermore, to create a multimodal fusion framework and to exploit the activity images, we made each type of activity image multimodal by convolving with two spatial domain filters: the Prewitt filter and the High-boost filter. ResNet-18, a CNN model, is used to learn deep features from the multiple modalities. Learned features are extracted from the last pooling layer of each ResNet and then fused by canonical correlation based fusion (CCF) for improving the accuracy of human action recognition. These highly informative features are fed as input to a multiclass Support Vector Machine (SVM). Experimental results on three publicly available inertial datasets show the superiority of the proposed method over the current state-of-the-art.
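The Gramian Angular Field encoding mentioned in the abstract can be sketched as follows. This is a minimal illustration of the standard GAF (summation variant) transform, not the authors' implementation; the window length and min-max normalisation scheme here are assumptions:

```python
import numpy as np

def gramian_angular_field(series):
    """Encode a 1-D sensor signal as a Gramian Angular Summation Field image.

    Steps: rescale the series into [-1, 1], map each sample to a polar
    angle phi = arccos(x), then form the matrix cos(phi_i + phi_j).
    """
    x = np.asarray(series, dtype=float)
    # Min-max rescale into [-1, 1] so arccos is well-defined.
    x = 2.0 * (x - x.min()) / (x.max() - x.min()) - 1.0
    # Clip to guard against floating-point overshoot outside [-1, 1].
    phi = np.arccos(np.clip(x, -1.0, 1.0))
    # GASF: pairwise angular sums produce an n x n image.
    return np.cos(phi[:, None] + phi[None, :])

# Example: a 64-sample inertial window becomes a 64 x 64 activity image.
window = np.sin(np.linspace(0.0, 2.0 * np.pi, 64))
image = gramian_angular_field(window)
print(image.shape)  # (64, 64)
```

In the paper's pipeline such an image would then be filtered (e.g. with a Prewitt kernel) and stacked as a channel before being passed to the CNN.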
Humans and AI are increasingly interacting and collaborating to accomplish various complex tasks in the context of diverse application domains (e.g., healthcare, transportation, and creative design). Two dynamic, learning entities (AI and human) have distinct mental models, expertise, and abilities; such a fundamental difference/mismatch offers opportunities for bringing new perspectives to achieve better results. However, this mismatch can cause unexpected failure and result in serious consequences. While recent research has paid much attention to enhancing interpretability or explainability to allow machines to explain how they make decisions in support of humans, this research argues that there is an urgent need for both human and AI to develop specific, corresponding abilities to interact and collaborate with each other, forming a human-AI team that accomplishes superior results. This research introduces a conceptual framework called Co-Learning, in which people can learn with/from and grow with AI partners over time. We characterize three key concepts of co-learning: mutual understanding, mutual benefits, and mutual growth for facilitating human-AI collaboration on complex problem solving. We present proof-of-concepts to investigate whether and how our approach can help human-AI teams understand and benefit from each other, and ultimately improve productivity and creativity in creative problem domains. The insights will contribute to the design of human-AI collaboration.