With the widespread use of mobile phones, users can share their location anytime and anywhere as check-in data. These data reflect user preferences, and the preference rules vary across users. Discovering a user's preferences from their related information, and validating whether a preference model suits a given user, are important for providing suitable services. This study makes four main contributions. First, multiple preference models are constructed for each user from different views. Second, an algorithm is proposed to validate whether a preference model is applicable to a user by calculating a stability value over the user's long-term check-in data for each model. Third, a unified model, a multi-channel convolutional neural network, is used to characterize this applicability. Finally, three datasets from multiple sources are used to evaluate the method, and the results demonstrate its effectiveness.
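The stability-based validation step can be sketched in a few lines of Python. Note that this is a minimal illustrative sketch, not the paper's actual algorithm: the stability definition used here (one minus the variance of a model's per-window preference scores, normalized by the squared mean) and the threshold are hypothetical assumptions, as are the model names in the example.

```python
from statistics import mean, pvariance

def stability(scores):
    """Hypothetical stability of a model's per-window preference scores:
    1 minus the population variance normalized by the squared mean,
    clamped to [0, 1]. Consistent scores -> value near 1."""
    if len(scores) < 2:
        return 1.0
    m = mean(scores)
    if m == 0:
        return 0.0
    return max(0.0, 1.0 - pvariance(scores) / (m * m))

def applicable_models(model_scores, threshold=0.8):
    """Return the models whose long-term score stability meets the
    (assumed) applicability threshold."""
    return [name for name, scores in model_scores.items()
            if stability(scores) >= threshold]

# Example: per-model scores of one user's check-ins over monthly windows.
scores = {
    "category_view": [0.71, 0.69, 0.72, 0.70],  # consistent -> stable
    "distance_view": [0.10, 0.90, 0.20, 0.80],  # erratic -> unstable
}
```

Under this toy definition, only the model with consistent long-term scores would be judged applicable to the user.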
This paper contributes to the human-machine interface community in two ways: as a critique of the closed-loop augmented cognition (AC) approach, and as an introduction of concepts from complex systems and systems physiology into the field. Of particular relevance is a comparison of the inverted-U (or Gaussian) model of optimal performance with a multidimensional fitness-landscape model. Hypothetical examples will be given from human physiology and from learning and memory. In particular, a four-step model will be introduced that is proposed as a better means of characterizing multivariate systems during behavioral processes with complex dynamics, such as learning. Finally, the alternative approach presented herein is argued to be a preferable design alternative for human-machine systems. It is within this context that future directions are discussed.
We present an approach that uses Topological Data Analysis (TDA) to study the structure of face poses in affective computing, i.e., the machine recognition of human emotion. The approach conditionally compares different emotions, both with and without respect to time, using multiple topological distance metrics, dimension-reduction techniques, and face subsections (e.g., eyes, nose, mouth). The results confirm that our topology-based approach captures known patterns, distinctions between emotions, and distinctions between individuals, an important step toward more robust and explainable emotion recognition by machines.
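The kind of topological summary such a pipeline relies on can be sketched with standard-library Python. The sketch below computes only 0-dimensional persistence of a point cloud (e.g., facial landmarks) under the Vietoris-Rips filtration, where every component dies at the weight of the minimum-spanning-tree edge that merges it; the `summary_distance` comparison is a crude illustrative assumption, not one of the paper's actual distance metrics, which would typically be bottleneck or Wasserstein distances over full persistence diagrams.

```python
import math
from itertools import combinations

def h0_persistence(points):
    """0-dimensional persistence death times of a point cloud under the
    Vietoris-Rips filtration: run Kruskal's algorithm and record the
    weight of each edge that merges two components (births are all 0)."""
    edges = sorted(
        (math.dist(points[i], points[j]), i, j)
        for i, j in combinations(range(len(points)), 2)
    )
    parent = list(range(len(points)))

    def find(x):  # union-find with path halving
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    deaths = []
    for w, i, j in edges:
        ri, rj = find(i), find(j)
        if ri != rj:
            parent[ri] = rj
            deaths.append(w)
    return sorted(deaths)

def summary_distance(d1, d2):
    """Crude (hypothetical) distance between two death-time summaries:
    L1 difference of the sorted lists, zero-padded to equal length."""
    n = max(len(d1), len(d2))
    a = sorted(list(d1) + [0.0] * (n - len(d1)))
    b = sorted(list(d2) + [0.0] * (n - len(d2)))
    return sum(abs(x - y) for x, y in zip(a, b))
```

For example, `h0_persistence([(0, 0), (3, 0), (0, 4)])` yields the MST edge weights `[3.0, 4.0]`; comparing such summaries between two sets of landmarks gives a rough topological distance between face poses.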
A solid methodology for understanding human perception and preferences in human-robot interaction (HRI) is crucial for designing real-world HRI. Social cognition posits that Warmth and Competence are central and universal dimensions by which people characterize other humans. The Robotic Social Attributes Scale (RoSAS) proposes items for those dimensions suitable for HRI and validated them in a visual observation study. In this paper, we complement that validation by showing the usability of these dimensions in a behavior-based, physical HRI study with a fully autonomous robot. We compare the findings with the popular Godspeed dimensions Animacy, Anthropomorphism, Likeability, Perceived Intelligence, and Perceived Safety. We found that, among all RoSAS and Godspeed dimensions, Warmth and Competence are the most important predictors of human preferences between different robot behaviors. This predictive power holds even when there is no clear consensus preference or significant factor difference between conditions.
Attention deficit, anxiety, and sleep disorders affect many people, and because these issues can evolve into severe conditions, more factors should be taken into consideration. This paper proposes a concept that aims to help students enhance their brain performance. An electroencephalogram (EEG) headset records brainwaves, and a web application manages the input data coming from the headset and from the user. Factors such as current activity, mood, focus, stress, relaxation, engagement, excitement, and interest are provided in numerical form by the headset. Users supply information about their activities: relaxing, listening to music, watching a movie, and studying. Based on the analysis, users find the application easy to use; as they become more emotionally balanced, their results improve, which makes them more confident in themselves. For students, neurofeedback can be studied to improve sport and artistic performance, including in cases of attention deficit hyperactivity disorder. Aptitude for a subject can be determined from the relevant brainwaves, and the learning environment is an important factor when analyzing the results. Teachers, professors, students, and parents can collaborate and, based on the gathered data, adopt new teaching methods in the classroom and at home. The proposed solution can guide students while studying, as well as anyone who wishes to be more productive while solving tasks.
People increasingly wear smartwatches that can track a wide variety of data. However, it is currently unknown which of these data people consume and how they are visualized. To better ground research on smartwatch visualization, it is important to understand the current use of representation types on smartwatches and to identify missed visualization opportunities. We present the findings of a survey of 237 smartwatch wearers and assess the types of data and representations commonly displayed on watch faces. We found a predominant display of health & fitness data, with icons accompanied by text as the most frequent representation type. Combining these results with a further analysis of online searches for watch faces and of data tracked on smartwatches but not commonly visualized, we discuss opportunities for visualization research.