
Human Activity Analysis and Recognition from Smartphones using Machine Learning Techniques

Added by Jakaria Rabbi
Publication date: 2021
Language: English





Human Activity Recognition (HAR) has been a valuable research topic for the last few decades. Different types of machine learning models are used for this purpose as part of analyzing human behavior through machines. Analyzing data from wearable sensors is not a trivial task, because the data are complex and high dimensional. Nowadays, researchers mostly use smartphone or smart-home sensors to capture these data. In our paper, we analyze these data using machine learning models to recognize human activities, which is now widely done for many purposes such as physical and mental health monitoring. We apply different machine learning models and compare their performance. We use Logistic Regression (LR) as the benchmark model for its simplicity and excellent performance on the dataset, and for comparison we take Decision Tree (DT), Support Vector Machine (SVM), Random Forest (RF), and Artificial Neural Network (ANN). Additionally, we select the best set of hyperparameters for each model by grid search. We use the HAR dataset from the UCI Machine Learning Repository as a standard dataset to train and test the models. Throughout the analysis, the Support Vector Machine performs far better than the other methods (average accuracy 96.33%). We also show that the results are statistically significant by employing statistical significance tests.
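As a rough illustration of the grid-search step described above, the sketch below tunes an SVM on the UCI HAR dataset with scikit-learn. The file layout follows the UCI distribution, but the parameter grid and cross-validation settings are assumptions, not the exact configuration used in the paper.

```python
# Minimal sketch: grid search over SVM hyperparameters on the UCI HAR dataset.
# Assumes the dataset has been downloaded and unpacked from the UCI repository;
# the parameter grid here is illustrative, not the one reported in the paper.
import numpy as np
from sklearn.model_selection import GridSearchCV
from sklearn.svm import SVC

# UCI HAR ships features and labels as whitespace-separated text files.
X_train = np.loadtxt("UCI HAR Dataset/train/X_train.txt")
y_train = np.loadtxt("UCI HAR Dataset/train/y_train.txt")
X_test = np.loadtxt("UCI HAR Dataset/test/X_test.txt")
y_test = np.loadtxt("UCI HAR Dataset/test/y_test.txt")

param_grid = {"C": [1, 10, 100], "kernel": ["linear", "rbf"], "gamma": ["scale", 0.01]}
search = GridSearchCV(SVC(), param_grid, cv=5, n_jobs=-1)
search.fit(X_train, y_train)

print("best params:", search.best_params_)
print("test accuracy:", search.score(X_test, y_test))
```

The same GridSearchCV pattern applies to the other four models by swapping in a different estimator and parameter grid.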




Read More

H.D. Nguyen, K.P. Tran, X. Zeng (2019)
Recent years have witnessed the rapid development of human activity recognition (HAR) based on wearable sensor data. One can find many practical applications in this area, especially in the field of health care. Many machine learning algorithms such as Decision Trees, Support Vector Machine, Naive Bayes, K-Nearest Neighbor, and Multilayer Perceptron are successfully used in HAR. Although these methods are fast and easy to implement, they still have some limitations due to poor performance in a number of situations. In this paper, we propose a novel method based on ensemble learning to boost the performance of these machine learning methods for HAR.
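The abstract above does not spell out the ensemble scheme, so the following is only a generic sketch of how the named base learners could be combined by soft voting in scikit-learn; it is not the authors' exact method.

```python
# Generic illustration of ensembling the base learners named in the abstract
# with soft voting; the actual method proposed by Nguyen et al. may differ.
from sklearn.ensemble import VotingClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.neighbors import KNeighborsClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier

ensemble = VotingClassifier(
    estimators=[
        ("dt", DecisionTreeClassifier(max_depth=10)),
        ("svm", SVC(probability=True)),  # probability=True enables soft voting
        ("nb", GaussianNB()),
        ("knn", KNeighborsClassifier(n_neighbors=5)),
        ("mlp", MLPClassifier(max_iter=500)),
    ],
    voting="soft",  # average the predicted class probabilities
)
# Usage: ensemble.fit(X_train, y_train); ensemble.score(X_test, y_test)
```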
What makes a task relatively more or less difficult for a machine compared to a human? Much AI/ML research has focused on expanding the range of tasks that machines can do, with a focus on whether machines can beat humans. Allowing for differences in scale, we can seek interesting (anomalous) pairs of tasks T, T′. We define interesting in this way: the harder-to-learn relation is reversed when comparing human intelligence (HI) to AI. While humans seem to be able to understand problems by formulating rules, ML using neural networks does not rely on constructing rules. We discuss a novel approach where the challenge is to perform well under rules that have been created by human beings. We suggest that this provides a rigorous and precise pathway for understanding the difference between the two kinds of learning. Specifically, we suggest a large and extensible class of learning tasks, formulated as learning under rules. With these tasks, both AI and HI can be studied with rigor and precision. The immediate goal is to find interesting ground-truth rule pairs. In the long term, the goal is to understand, in a generalizable way, what distinguishes interesting pairs from ordinary pairs, and to define the saliency behind interesting pairs. This may open new ways of thinking about AI and provide unexpected insights into human learning.
Human Activity Recognition (HAR) based on IMU sensors is a crucial area in ubiquitous computing. Because of the trend of deploying AI on IoT devices and smartphones, more researchers are designing HAR models for embedded devices, since deploying models on embedded devices can enhance the efficiency of HAR. We propose a multi-level HAR modeling pipeline called Stage-Logits-Memory Distillation (SMLDist) for constructing deep convolutional HAR models with embedded hardware support. SMLDist comprises stage distillation, memory distillation, and logits distillation. Stage distillation constrains the learning direction of the intermediate features. In memory distillation, the teacher model teaches the student models how to explain and store the inner relationships among high-dimensional features based on Hopfield networks. Logits distillation builds distilled logits through a smoothed conditional rule to preserve the probability distribution and improve accuracy on softer targets. We compare the accuracy, macro F1 score, and energy cost on embedded platforms of a MobileNet V3 model built by SMLDist with various state-of-the-art HAR frameworks. The resulting model strikes a good balance between robustness and efficiency. SMLDist can also compress models with only minor performance loss at an equal compression ratio compared with other advanced knowledge distillation methods on seven public datasets.
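Stage and memory distillation are specific to SMLDist, but logits distillation follows the well-known knowledge distillation pattern. Below is a minimal PyTorch sketch of temperature-scaled logits distillation; the temperature T, the weight alpha, and the plain softened-KL form are assumptions, simpler than the smoothed conditional rule described above.

```python
# Minimal sketch of Hinton-style logits distillation, as one component of a
# pipeline like SMLDist; the temperature T and weight alpha are assumptions.
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels, T=4.0, alpha=0.7):
    # Soft targets: KL divergence between temperature-softened distributions,
    # scaled by T^2 to keep gradient magnitudes comparable across temperatures.
    soft = F.kl_div(
        F.log_softmax(student_logits / T, dim=1),
        F.softmax(teacher_logits / T, dim=1),
        reduction="batchmean",
    ) * (T * T)
    # Hard targets: ordinary cross-entropy against the ground-truth labels.
    hard = F.cross_entropy(student_logits, labels)
    return alpha * soft + (1.0 - alpha) * hard
```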
Automated machine learning (AutoML) aims to find optimal machine learning solutions automatically for a given machine learning problem. It can relieve data scientists of the laborious manual tuning process and give domain experts access to off-the-shelf machine learning solutions without requiring extensive experience. In this paper, we review current developments in AutoML in terms of three categories: automated feature engineering (AutoFE), automated model and hyperparameter learning (AutoMHL), and automated deep learning (AutoDL). State-of-the-art techniques adopted in the three categories are presented, including Bayesian optimization, reinforcement learning, evolutionary algorithms, and gradient-based approaches. We summarize popular AutoML frameworks and conclude with the current open challenges of AutoML.
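As a concrete taste of the AutoMHL category, here is a minimal hyperparameter-search sketch using Optuna, whose default TPE sampler is a Bayesian-optimization-style method; Optuna, the search space, and the random-forest objective are chosen purely for illustration and are not singled out by the survey.

```python
# Illustration of automated hyperparameter learning (AutoMHL) with one popular
# framework, Optuna; the search space below is an arbitrary example.
import optuna
from sklearn.datasets import load_digits
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

X, y = load_digits(return_X_y=True)

def objective(trial):
    n_estimators = trial.suggest_int("n_estimators", 50, 300)
    max_depth = trial.suggest_int("max_depth", 3, 20)
    clf = RandomForestClassifier(n_estimators=n_estimators, max_depth=max_depth)
    return cross_val_score(clf, X, y, cv=3).mean()

study = optuna.create_study(direction="maximize")  # TPE sampler by default
study.optimize(objective, n_trials=25)
print(study.best_params)
```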
Among the most impressive recent applications of neural decoding is visual representation decoding, where the category of an object that a subject either sees or imagines is inferred from his/her brain activity. Even though there is increasing interest in this visual representation decoding task, there is no extensive study of the effect of using different machine learning models on decoding accuracy. In this paper, we provide an extensive evaluation of several machine learning models, along with different similarity metrics, for this task, drawing many interesting conclusions. In this way, the paper a) paves the way for developing more advanced and accurate methods and b) provides an extensive and easily reproducible baseline for the decoding task.
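As a hypothetical illustration of similarity-based decoding, the sketch below assigns each decoded feature vector to the nearest category prototype under several metrics; all names, dimensions, and metrics here are assumptions for illustration, not the paper's setup.

```python
# Sketch of category decoding by similarity matching: a decoded feature vector
# is assigned to the candidate category whose prototype it resembles most.
import numpy as np
from scipy.spatial.distance import cdist

rng = np.random.default_rng(0)
decoded = rng.normal(size=(10, 128))    # stand-in for features decoded from brain activity
prototypes = rng.normal(size=(5, 128))  # one feature vector per candidate category

for metric in ("cosine", "correlation", "euclidean"):
    dist = cdist(decoded, prototypes, metric=metric)
    pred = dist.argmin(axis=1)          # nearest prototype under this metric
    print(metric, pred)
```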
