
The Smartphone Brain Scanner: A Mobile Real-time Neuroimaging System

Publication date: 2013
Language: English





Combining low-cost wireless EEG sensors with smartphones offers novel opportunities for mobile brain imaging in an everyday context. We present a framework for building multi-platform, portable EEG applications with real-time 3D source reconstruction. The system - Smartphone Brain Scanner - combines an off-the-shelf neuroheadset or EEG cap with a smartphone or tablet, and as such represents the first fully mobile system for real-time 3D EEG imaging. We discuss the benefits and challenges of a fully portable system, including technical limitations as well as real-time reconstruction of 3D images of brain activity. We present examples of the brain activity captured in a simple experiment involving imagined finger tapping, showing that the acquired signal in a relevant brain region is similar to that obtained with standard EEG lab equipment. Although the signal quality of a mobile solution using an off-the-shelf consumer neuroheadset is lower than that obtained with high-density standard EEG equipment, we propose that mobile application development may offset the disadvantages and provide completely new opportunities for neuroimaging in natural settings.
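The real-time 3D reconstruction described above is, at its core, a linear inverse problem: scalp potentials are mapped back to cortical source amplitudes through a precomputed lead-field (forward) matrix. One common choice for real-time use is a Tikhonov-regularized L2 minimum-norm estimate; the sketch below shows that idea in a few lines. The matrix sizes and the regularization weight `lam` are illustrative assumptions, not parameters taken from the Smartphone Brain Scanner itself.

```python
import numpy as np

def minimum_norm_operator(leadfield: np.ndarray, lam: float = 0.1) -> np.ndarray:
    """Precompute a regularized minimum-norm inverse operator.

    leadfield: (n_channels, n_sources) forward matrix from a head model.
    lam: Tikhonov regularization weight (illustrative value only).
    """
    n_channels = leadfield.shape[0]
    gram = leadfield @ leadfield.T + lam * np.eye(n_channels)
    return leadfield.T @ np.linalg.inv(gram)       # (n_sources, n_channels)

def reconstruct(inverse_op: np.ndarray, eeg_sample: np.ndarray) -> np.ndarray:
    """Map one vector of scalp potentials to source amplitudes in real time."""
    return inverse_op @ eeg_sample

# Hypothetical setup: 14-channel consumer headset, 1000 cortical sources.
rng = np.random.default_rng(0)
L = rng.standard_normal((14, 1000))
W = minimum_norm_operator(L)                       # computed once, offline
sources = reconstruct(W, rng.standard_normal(14))  # applied per incoming sample
```

Because the inverse operator is precomputed, each incoming EEG sample only costs one matrix-vector product, which is what makes per-sample 3D reconstruction feasible on a phone or tablet.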




Related research

Loneliness is a widespread mental health symptom that can be mediated by, and co-vary with, patterns of social exposure. Using momentary survey and smartphone sensing data collected from 129 college student participants using Android phones over three weeks, we (1) investigate and uncover the relations between momentary loneliness experience and companionship type and (2) propose and validate novel geosocial features of smartphone-based Bluetooth and GPS data for predicting loneliness and companionship type in real time. We base our features on intuitions characterizing the quantity and spatiotemporal predictability of an individual's Bluetooth encounters and GPS location clusters, to capture the personal significance of social exposure scenarios conditional on their temporal distribution and geographic patterns. We examine our features' statistical correlation with momentary loneliness through regression analyses and evaluate their predictive power using a sliding window prediction procedure. Our features achieved significant performance improvements over the baseline for predicting both momentary loneliness and companionship type, with the effect stronger for the loneliness prediction task. As such, we recommend incorporating and further evaluating the geosocial features proposed in this study in future mental health sensing and context-aware computing applications.
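The geosocial features above are described only at a high level, so the snippet below is a hedged sketch of two of the simpler intuitions: how many distinct Bluetooth devices are encountered in a momentary sensing window, and how predictable (low-entropy) the accompanying GPS location clusters are. The feature names and window contents are illustrative assumptions, not the study's exact definitions.

```python
import math
from collections import Counter

def bluetooth_encounter_count(device_ids: list[str]) -> int:
    """Number of distinct Bluetooth devices seen in one sensing window."""
    return len(set(device_ids))

def location_cluster_entropy(cluster_labels: list[int]) -> float:
    """Shannon entropy of GPS cluster visits: low values mean the person
    stays in predictable places, high values mean scattered movement."""
    counts = Counter(cluster_labels)
    total = sum(counts.values())
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

# Hypothetical 15-minute window of sensed data.
window_devices = ["aa:01", "aa:01", "bb:02", "cc:03"]
window_clusters = [0, 0, 0, 1]   # cluster IDs from any GPS clustering step
features = {
    "bt_encounters": bluetooth_encounter_count(window_devices),
    "gps_entropy": location_cluster_entropy(window_clusters),
}
print(features)
```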
The increasing need to run Convolutional Neural Network (CNN) models on mobile devices with limited computing power and memory resources encourages studies on efficient model design. A number of efficient architectures have been proposed in recent years, for example, MobileNet, ShuffleNet, and MobileNetV2. However, all these models are heavily dependent on depthwise separable convolution, which lacks an efficient implementation in most deep learning frameworks. In this study, we propose an efficient architecture named PeleeNet, which is built with conventional convolution instead. On the ImageNet ILSVRC 2012 dataset, our proposed PeleeNet achieves higher accuracy and runs over 1.8 times faster than MobileNet and MobileNetV2 on an NVIDIA TX2. Meanwhile, PeleeNet is only 66% of the model size of MobileNet. We then propose a real-time object detection system by combining PeleeNet with the Single Shot MultiBox Detector (SSD) method and optimizing the architecture for fast speed. Our proposed detection system, named Pelee, achieves 76.4% mAP (mean average precision) on PASCAL VOC2007 and 22.4 mAP on the MS COCO dataset at a speed of 23.6 FPS on an iPhone 8 and 125 FPS on an NVIDIA TX2. The result on COCO outperforms YOLOv2 with higher precision, 13.6 times lower computational cost, and an 11.3 times smaller model size.
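PeleeNet's central design choice is to use conventional convolution rather than depthwise separable convolution. The short calculation below illustrates the trade-off behind that choice: depthwise separable layers need far fewer parameters and multiply-adds on paper, but the theoretical saving only pays off when the framework implements them efficiently. The layer sizes are illustrative, not taken from the PeleeNet architecture.

```python
def conventional_conv_cost(c_in, c_out, k, h, w):
    """Parameters and multiply-adds of a standard k x k convolution (stride 1)."""
    params = k * k * c_in * c_out
    macs = params * h * w
    return params, macs

def depthwise_separable_cost(c_in, c_out, k, h, w):
    """Parameters and multiply-adds of depthwise k x k + pointwise 1 x 1."""
    params = k * k * c_in + c_in * c_out
    macs = params * h * w
    return params, macs

# Illustrative layer: 128 -> 128 channels, 3x3 kernel, 28x28 feature map.
conv_p, conv_m = conventional_conv_cost(128, 128, 3, 28, 28)
sep_p, sep_m = depthwise_separable_cost(128, 128, 3, 28, 28)
print(f"conventional: {conv_p:,} params, {conv_m:,} MACs")
print(f"separable:    {sep_p:,} params, {sep_m:,} MACs "
      f"(~{conv_m / sep_m:.1f}x fewer on paper)")
```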
Gregoire Cattan, 2020
A brain-computer interface (BCI) based on electroencephalography (EEG) is a promising technology for enhancing virtual reality (VR) applications, in particular for gaming. We focus on the so-called P300-BCI, a stable and accurate BCI paradigm relying on the recognition of a positive event-related potential (ERP) occurring in the EEG about 300 ms post-stimulation. We implemented a basic version of such a BCI displayed on an ordinary and affordable smartphone-based head-mounted VR device: that is, a mobile and passive VR system (with no electronic components beyond the smartphone). The mobile phone performed the stimulus presentation, EEG synchronization (tagging), and feedback display. We compared the ERPs and the accuracy of the BCI on the VR device with a traditional BCI running on a personal computer (PC). We also evaluated the impact of subjective factors on the accuracy. The study was within-subjects, with 21 participants and one session in each modality. No significant difference in BCI accuracy was found between the PC and VR systems, although the P200 ERP was significantly wider and larger in the VR system compared to the PC system.
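A P300-BCI hinges on cutting out short EEG epochs time-locked to each stimulus and looking for the positive deflection around 300 ms. The sketch below shows that epoching-and-averaging step in its simplest form; the sampling rate, epoch window, and array shapes are assumptions for illustration and not the configuration used in the study.

```python
import numpy as np

FS = 128                 # assumed sampling rate in Hz
EPOCH_S = int(0.6 * FS)  # keep 600 ms of signal after each stimulus onset

def extract_epochs(eeg: np.ndarray, onsets: list[int]) -> np.ndarray:
    """Cut fixed-length windows (channels x samples) after each stimulus onset."""
    return np.stack([eeg[:, t:t + EPOCH_S] for t in onsets
                     if t + EPOCH_S <= eeg.shape[1]])

def average_erp(epochs: np.ndarray) -> np.ndarray:
    """Average over trials to reveal the event-related potential."""
    return epochs.mean(axis=0)

# Hypothetical recording: 8 channels, 30 s of data, a few stimulus onsets.
rng = np.random.default_rng(1)
eeg = rng.standard_normal((8, 30 * FS))
onsets = [FS, 5 * FS, 9 * FS, 14 * FS]
erp = average_erp(extract_epochs(eeg, onsets))
p300_window = erp[:, int(0.25 * FS):int(0.45 * FS)]  # roughly 250-450 ms
print(p300_window.mean(axis=1))  # mean amplitude per channel in that window
```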
The accuracy of smartphone-based positioning methods using WiFi usually suffers from ranging errors caused by non-line-of-sight (NLOS) conditions. Previous research usually exploits several statistical features from a long time series (hundreds of samples) of WiFi received signal strength (RSS) or WiFi round-trip time (RTT) to achieve a high identification accuracy. However, such a long time series or large sample size leads to high power and time consumption during data collection for both training and testing. This is also detrimental to the user experience, as the wait to collect enough samples is long. Therefore, this paper proposes a new real-time NLOS/LOS identification method for smartphone-based indoor positioning systems using WiFi RTT and RSS. Based on our extensive analysis of RSS and RTT features, a machine learning method based on a random forest was chosen and developed to separate NLOS and LOS samples. Experiments in different environments show that our method achieves a discrimination accuracy of about 94% with a sample size of 10. Given the theoretical minimum WiFi ranging interval of 100 ms on RTT-enabled smartphones, our algorithm provides the shortest latency, 1 s, to obtain a test result among the state-of-the-art methods.
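Since the method boils down to a random-forest classifier over short windows of RTT and RSS statistics, the following is a minimal sketch of that pipeline with scikit-learn. The feature choice (mean and standard deviation over a 10-sample window) and the synthetic training data are assumptions for illustration only, not the study's feature set.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def window_features(rtt: np.ndarray, rss: np.ndarray) -> np.ndarray:
    """Summarize a 10-sample window of RTT (ns) and RSS (dBm) measurements."""
    return np.array([rtt.mean(), rtt.std(), rss.mean(), rss.std()])

# Synthetic, illustrative training data: LOS windows tend to have lower and
# steadier RTT/RSS spread than NLOS windows.
rng = np.random.default_rng(2)
X, y = [], []
for _ in range(200):
    los = rng.integers(0, 2)                      # 1 = LOS, 0 = NLOS
    rtt = rng.normal(5000 if los else 9000, 200 if los else 900, size=10)
    rss = rng.normal(-55 if los else -75, 2 if los else 6, size=10)
    X.append(window_features(rtt, rss))
    y.append(los)

clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

# Classify one new 10-sample window in real time.
new_window = window_features(rng.normal(5100, 210, 10), rng.normal(-56, 2, 10))
print("LOS" if clf.predict([new_window])[0] == 1 else "NLOS")
```

Because only 10 samples are needed per decision, the classification itself adds negligible latency; the 1 s bound quoted above comes from the ranging interval of the hardware, not the model.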
Currently, mobile robots are developing rapidly and are finding numerous applications in industry. However, a number of problems remain in their practical use, such as the need for expensive hardware and high power consumption. In this study, we propose a navigation system that runs on a low-end computer with an RGB-D camera, together with a mobile robot platform, to form an integrated autonomous driving system. The proposed system does not require LiDARs or a GPU. Our raw depth image ground segmentation approach extracts a traversability map for the safe driving of low-body mobile robots. It is designed to guarantee real-time performance on a low-cost commercial single-board computer with integrated SLAM, global path planning, and motion planning. While running sensor data processing and other autonomous driving functions simultaneously, our navigation method issues control commands at a refresh rate of 18 Hz, faster than comparable systems. Our method outperforms current state-of-the-art navigation approaches, as shown in 3D simulation tests. In addition, we demonstrate the applicability of our mobile robot system through successful autonomous driving in a residential lobby.
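Ground segmentation from a raw depth image, as used above for the traversability map, can be illustrated with a very small geometric sketch: back-project each depth pixel through a pinhole camera model and mark points whose height above the floor falls within a tolerance as traversable ground. The camera intrinsics, mounting height, no-tilt assumption, and threshold below are illustrative assumptions, not the parameters of the proposed system.

```python
import numpy as np

# Assumed pinhole intrinsics and camera mounting height (illustrative values).
FX, FY, CX, CY = 525.0, 525.0, 319.5, 239.5
CAMERA_HEIGHT_M = 0.30      # camera mounted 30 cm above the floor
GROUND_TOL_M = 0.05         # points within 5 cm of the floor count as ground

def ground_mask(depth_m: np.ndarray) -> np.ndarray:
    """Boolean mask of pixels lying near the floor plane.

    Assumes the camera looks straight ahead with no tilt, so a point's height
    above the floor is CAMERA_HEIGHT_M minus its downward offset from the
    optical axis.
    """
    h, w = depth_m.shape
    v = np.arange(h)[:, None]                 # pixel row indices
    y_down = (v - CY) / FY * depth_m          # metres below the optical axis
    height_above_floor = CAMERA_HEIGHT_M - y_down
    valid = depth_m > 0                       # ignore missing depth readings
    return valid & (np.abs(height_above_floor) < GROUND_TOL_M)

# Hypothetical 480x640 depth frame with a uniform depth of 2 m.
depth = np.full((480, 640), 2.0)
print(ground_mask(depth).sum(), "pixels flagged as ground")
```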