Loneliness is a widely experienced mental health symptom that can be mediated by, and co-vary with, patterns of social exposure. Using momentary survey and smartphone sensing data collected from 129 Android-using college student participants over three weeks, we (1) investigate and uncover the relations between momentary loneliness experience and companionship type and (2) propose and validate novel geosocial features of smartphone-based Bluetooth and GPS data for predicting loneliness and companionship type in real time. We base our features on intuitions characterizing the quantity and spatiotemporal predictability of an individual's Bluetooth encounters and GPS location clusters, so as to capture the personal significance of social exposure scenarios conditional on their temporal distribution and geographic patterns. We examine our features' statistical correlation with momentary loneliness through regression analyses and evaluate their predictive power using a sliding-window prediction procedure. Our features achieved significant performance improvements over baseline for predicting both momentary loneliness and companionship type, with the effect stronger for the loneliness prediction task. As such, we recommend incorporating and further evaluating the geosocial features proposed in this study in future mental health sensing and context-aware computing applications.
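To make the sliding-window evaluation concrete, the following Python sketch shows one way such geosocial features (here, Bluetooth encounter counts and the entropy of GPS location-cluster visits) could be computed per survey window and used to predict the next window's loneliness label. The feature set, window length, and logistic-regression classifier are illustrative assumptions, not the study's exact pipeline.

    # Illustrative sketch of sliding-window prediction over smartphone sensing features.
    # Feature definitions, window size, and model are assumptions, not the paper's setup.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    def entropy(counts):
        """Shannon entropy of a count vector (e.g., visits per GPS location cluster)."""
        p = counts / counts.sum()
        p = p[p > 0]
        return float(-(p * np.log2(p)).sum())

    def window_features(bt_encounters, gps_cluster_visits):
        """One feature vector per survey window: encounter volume and location spread."""
        return np.array([
            len(bt_encounters),                        # quantity of Bluetooth encounters
            len(set(bt_encounters)),                   # number of distinct devices seen
            entropy(np.bincount(gps_cluster_visits)),  # spread over GPS location clusters
        ])

    def sliding_window_predict(X, y, train_size=20):
        """Train on the past `train_size` windows, predict the next one, then slide forward.

        Assumes both loneliness labels (0/1) appear in each training window."""
        preds = []
        for t in range(train_size, len(y)):
            model = LogisticRegression(max_iter=1000)
            model.fit(X[t - train_size:t], y[t - train_size:t])
            preds.append(model.predict(X[t:t + 1])[0])
        return np.array(preds)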
Combining low cost wireless EEG sensors with smartphones offers novel opportunities for mobile brain imaging in an everyday context. We present a framework for building multi-platform, portable EEG applications with real-time 3D source reconstruction. The system - Smartphone Brain Scanner - combines an off-the-shelf neuroheadset or EEG cap with a smartphone or tablet, and as such represents the first fully mobile system for real-time 3D EEG imaging. We discuss the benefits and challenges of a fully portable system, including technical limitations as well as real-time reconstruction of 3D images of brain activity. We present examples of the brain activity captured in a simple experiment involving imagined finger tapping, showing that the acquired signal in a relevant brain region is similar to that obtained with standard EEG lab equipment. Although the quality of the signal in a mobile solution using a off-the-shelf consumer neuroheadset is lower compared to that obtained using high density standard EEG equipment, we propose that mobile application development may offset the disadvantages and provide completely new opportunities for neuroimaging in natural settings.
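As an illustration of the kind of signal check described for the imagined finger tapping experiment, the sketch below estimates mu-band (8-12 Hz) power for a single EEG channel, since motor imagery is typically accompanied by reduced mu power over motor cortex. The 128 Hz sampling rate and the choice of channel are assumptions about a consumer headset, not details taken from the paper.

    # Illustrative check of an imagined-finger-tapping signal: event-related
    # desynchronization should lower mu-band (8-12 Hz) power during imagery vs. rest.
    import numpy as np
    from scipy.signal import welch

    def mu_band_power(eeg_segment, fs=128.0, band=(8.0, 12.0)):
        """Average power spectral density in the mu band for one channel segment."""
        freqs, psd = welch(eeg_segment, fs=fs, nperseg=min(256, len(eeg_segment)))
        mask = (freqs >= band[0]) & (freqs <= band[1])
        return float(psd[mask].mean())

    # Usage: compare rest vs. imagery epochs from a sensor over the motor cortex.
    # A ratio (imagery_power / rest_power) below 1 is consistent with desynchronization.
    # rest_power = mu_band_power(rest_epoch)
    # imagery_power = mu_band_power(imagery_epoch)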
Traditionally, mental healthcare has followed an episodic psychotherapy model wherein patients seek care from a provider through a prescribed treatment plan developed over multiple provider visits. Recent advances in wearable and mobile technology have generated increased interest in digital mental healthcare that enables individuals to address episodic mental health symptoms. However, these efforts are typically reactive and symptom-focused and do not provide comprehensive, wrap-around, customized treatments that capture an individual's holistic mental health model as it unfolds over time. Recognizing that each individual is unique, we present the notion of Personalized Mental Health Navigation (MHN): a therapist-in-the-loop, cybernetic goal-based system that deploys a continuous cyclic loop of measurement, estimation, and guidance to steer the individual's mental health state toward a healthy zone. We outline the major components of MHN, which is premised on the development of an individual's personal mental health state, holistically represented by a high-dimensional cover of multiple knowledge layers such as emotion, biological patterns, sociology, behavior, and cognition. We demonstrate the feasibility of the personalized MHN approach via a 12-month pilot case study for holistic stress management in college students and highlight an instance of a therapist-in-the-loop intervention using MHN for monitoring, estimating, and proactively addressing moderately severe depression over a sustained period of time. We believe MHN paves the way to transform mental healthcare from the current passive, episodic, reactive process (where individuals seek help to address symptoms that have already manifested) to a continuous and navigational paradigm that leverages a personalized model of the individual, promising to deliver timely interventions to individuals in a holistic manner.
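A minimal sketch of the measure-estimate-guide loop described above is given below; the state representation, the fusion rule, the 0.6 threshold, and the intervention hook are hypothetical placeholders rather than components of the deployed MHN system.

    # Hypothetical sketch of one cycle of the MHN measure -> estimate -> guide loop.
    from dataclasses import dataclass

    @dataclass
    class MentalHealthState:
        stress: float          # estimated stress level, 0 (calm) to 1 (high); placeholder scale
        in_healthy_zone: bool

    def estimate_state(measurement: dict) -> MentalHealthState:
        """Fuse multimodal measurements (e.g., sleep, self-reports) into one state estimate."""
        stress = 0.5 * (1 - measurement["sleep_hours"] / 8) + 0.5 * measurement["self_reported_stress"]
        return MentalHealthState(stress=stress, in_healthy_zone=stress < 0.6)

    def guide(state: MentalHealthState) -> str:
        """Choose guidance; in the full system a therapist-in-the-loop reviews escalations."""
        if state.in_healthy_zone:
            return "maintain current routine"
        return "suggest intervention and flag for therapist review"

    # One cycle of the loop with made-up measurements.
    measurement = {"sleep_hours": 4.0, "self_reported_stress": 0.8}
    print(guide(estimate_state(measurement)))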
The trouble with data is that it often provides only an imperfect representation of the phenomenon of interest. When reading and interpreting data, personal knowledge about the data plays an important role. Data visualization, however, has neither a concept defining personal knowledge about datasets, nor the methods or tools to robustly integrate it into an analysis process, thus hampering analysts' ability to express their personal knowledge about datasets and others' ability to learn from such knowledge. In this work, we define such personal knowledge about datasets as data hunches and elevate this knowledge to another form of data that can be externalized, visualized, and used for collaboration. We establish the implications of data hunches and provide a design space for externalizing and communicating data hunches through visualization techniques. We envision that such a design space will empower users to externalize their personal knowledge and support the ability to learn from others' data hunches.
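As a loose illustration of treating a data hunch as another form of data, the sketch below records one hunch as a structured, shareable annotation; the fields and the example values are assumptions made for illustration, not a schema proposed in the work.

    # Hypothetical structure for externalizing a data hunch as shareable data.
    from dataclasses import dataclass
    from typing import Optional

    @dataclass
    class DataHunch:
        target: str                       # data item or region the hunch refers to
        kind: str                         # e.g., "value adjustment", "exclusion", "uncertainty"
        rationale: str                    # the personal knowledge behind the hunch
        suggested_value: Optional[float] = None
        author: str = "anonymous"

    hunch = DataHunch(
        target="station_42.rainfall.2021-07",
        kind="value adjustment",
        rationale="Sensor was clogged that week; readings are likely underestimates.",
        suggested_value=118.5,
        author="field technician",
    )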
In this paper we present the first population-level, city-scale analysis of application usage on smartphones. Using deep packet inspection at the network operator level, we obtained a geo-tagged dataset with more than 6 million unique devices that launched more than 10,000 unique applications across the city of Shanghai over one week. We develop a technique that leverages transfer learning to predict which applications are most popular at a location and to estimate the whole usage distribution based on the Point of Interest (POI) information of that location. We demonstrate that our technique achieves an 83.0% hit rate in identifying the top five most popular applications, and a 0.15 RMSE when estimating usage with just 10% sampled sparse data, outperforming existing state-of-the-art approaches by about 25.7%. Our findings pave the way for predicting which apps are relevant to a user given their current location, and which applications are popular where. The implications of our findings are broad: they enable a range of systems to benefit from such timely predictions, including operating systems, network operators, app stores, advertisers, and service providers.
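The two reported evaluation metrics can be made concrete with a short sketch: a top-five hit rate for identifying popular applications at a location, and RMSE for the estimated usage distribution. The input formats and example app names below are assumptions made for illustration.

    # Sketch of the two reported metrics; rankings and usage vectors are illustrative.
    import numpy as np

    def top_k_hit_rate(predicted_ranking, actual_ranking, k=5):
        """Fraction of the actual top-k apps that also appear in the predicted top-k."""
        hits = len(set(predicted_ranking[:k]) & set(actual_ranking[:k]))
        return hits / k

    def rmse(estimated_usage, true_usage):
        """Root-mean-square error between estimated and observed usage distributions."""
        diff = np.asarray(estimated_usage) - np.asarray(true_usage)
        return float(np.sqrt((diff ** 2).mean()))

    # Example for one location (app identifiers are made up):
    print(top_k_hit_rate(["wechat", "weibo", "taobao", "qq", "maps"],
                         ["wechat", "qq", "taobao", "douyin", "maps"]))  # 0.8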
Due to a lack of medical resources or oral health awareness, oral diseases are often left unexamined and untreated, affecting a large population worldwide. With the advent of low-cost, sensor-equipped smartphones, mobile apps offer a promising possibility for promoting oral health. However, to the best of our knowledge, no mobile health (mHealth) solution can directly support users in self-examining their oral health condition. This paper presents OralCam, the first interactive app that enables end users to self-examine five common oral conditions (diseases or early disease signals) by taking smartphone photos of one's oral cavity. OralCam allows a user to annotate additional information (e.g., living habits, pain, and bleeding) to augment the input image, and presents the output hierarchically, probabilistically, and with visual explanations to help a lay user understand examination results. Developed on our in-house dataset of 3,182 oral photos annotated by dental experts, our deep learning-based framework achieved an average detection sensitivity of 0.787 over the five conditions with high localization accuracy. In a week-long in-the-wild user study (N=18), most participants had no trouble using OralCam and interpreting the examination results. Two expert interviews further validate the feasibility of OralCam for promoting users' awareness of oral health.
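For readers unfamiliar with the reported metric, the sketch below shows how an average per-condition detection sensitivity (recall) can be computed; the condition labels and counts are made-up placeholders, not values from the OralCam dataset.

    # Sketch of a macro-averaged detection sensitivity over five oral conditions.
    def sensitivity(true_positives, false_negatives):
        """Sensitivity (recall) = TP / (TP + FN) for one condition."""
        return true_positives / (true_positives + false_negatives)

    # Hypothetical (TP, FN) counts per condition, for illustration only.
    per_condition = {
        "condition_1": (40, 10),
        "condition_2": (30, 12),
        "condition_3": (55, 15),
        "condition_4": (25, 5),
        "condition_5": (60, 18),
    }

    scores = [sensitivity(tp, fn) for tp, fn in per_condition.values()]
    print(sum(scores) / len(scores))  # macro-average over the five conditions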