
Towards Automated Fatigue Assessment using Wearable Sensing and Mixed-Effects Models

Posted by: Yang Bai
Publication date: 2021
Research field: Informatics Engineering
Paper language: English





Fatigue is a broad, multifactorial concept that includes the subjective perception of reduced physical and mental energy levels. It is also one of the key factors that strongly affect patients' health-related quality of life. To date, most fatigue assessment methods have been based on self-reporting, which can suffer from issues such as recall bias. To address this, we recorded multi-modal physiological data (ECG, accelerometry, skin temperature and respiratory rate, together with demographic information such as age and BMI) in free-living environments and developed automated fatigue assessment models. Specifically, we extracted features from each modality and employed random-forest-based mixed-effects models, which can exploit the demographic information for improved performance. We conducted experiments on our collected dataset and achieved very promising preliminary results. Our results suggest that ECG plays an important role in the fatigue assessment task.
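The paper itself does not include code; the following is a minimal, hypothetical sketch of the random-forest-based mixed-effects idea (a MERF-style alternating fit with a per-participant random intercept), using invented feature names such as hrv_rmssd and skin_temp_mean and fully synthetic data. It illustrates the general technique only and is not the authors' implementation.

# Minimal sketch of a random-forest-based mixed-effects model: a random forest
# captures the population-level (fixed) effect while a per-subject intercept
# absorbs between-participant differences. All feature names and data below
# are hypothetical; this is NOT the paper's implementation.
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
n, subjects = 600, [f"p{i}" for i in range(20)]

# Hypothetical per-window features derived from ECG, skin temperature, etc.
df = pd.DataFrame({
    "subject": rng.choice(subjects, n),
    "hrv_rmssd": rng.normal(40, 10, n),        # ECG-derived feature
    "skin_temp_mean": rng.normal(33, 0.8, n),  # skin temperature feature
    "resp_rate": rng.normal(16, 2, n),         # respiratory rate feature
    "age": rng.integers(20, 70, n),
    "bmi": rng.normal(25, 4, n),
})
# Synthetic fatigue score with a subject-specific offset (the random effect).
offsets = {s: rng.normal(0, 1.0) for s in subjects}
df["fatigue"] = (0.05 * (50 - df["hrv_rmssd"]) + 0.1 * df["resp_rate"]
                 + df["subject"].map(offsets) + rng.normal(0, 0.5, n))

X = df[["hrv_rmssd", "skin_temp_mean", "resp_rate", "age", "bmi"]].values
y = df["fatigue"].values
groups = df["subject"].values

forest = RandomForestRegressor(n_estimators=200, random_state=0)
b = {s: 0.0 for s in subjects}          # random intercepts, start at zero

for _ in range(10):                      # simple EM-style alternation
    # Fit the fixed effect on the target with current random effects removed.
    y_fixed = y - np.array([b[g] for g in groups])
    forest.fit(X, y_fixed)
    # Update each subject's intercept from its residuals, with mild shrinkage
    # toward zero (a crude stand-in for the full variance-component update).
    resid = y - forest.predict(X)
    for s in subjects:
        b[s] = 0.9 * resid[groups == s].mean()

print("Per-subject intercepts (first 3):", {s: round(b[s], 2) for s in subjects[:3]})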


Read also

Obesity and being overweight add to the risk of some major life-threatening diseases. According to the W.H.O., a considerable share of the population suffers from these diseases, and poor nutrition plays an important role in this context. Traditional food activity monitoring systems such as food diaries allow manual record keeping of eating activities over time and support nutrition analysis; however, these systems are prone to the problems of manual record keeping and biased reporting. Over the last decade, the research community has therefore focused on designing automatic food monitoring systems consisting of one or more wearable sensors. These systems aim to provide macro- and micro-activity detection, such as chewing, swallowing, eating episodes and food types, as well as estimates such as food mass and eating duration. Researchers have emphasized high detection accuracy, low estimation errors, unobtrusiveness, low cost and real-life deployment when designing these systems, yet a comprehensive automatic food monitoring system has not yet been developed. Moreover, to the best of our knowledge, there is no comprehensive survey in this field that delineates the automatic food monitoring paradigm, covers a sizeable number of research studies, analyzes these studies against food intake monitoring tasks using various parameters, lists their limitations and sets out future directions. In this work, we delineate the automatic food intake monitoring paradigm and present a survey of research studies. With a special focus on studies using wearable sensors, we analyze these studies against food activity monitoring tasks, provide a brief comparison of them along with shortcomings based on the experimental results they report, and conclude with future directions to facilitate researchers working in this domain.
Nowadays many software development frameworks implement Behavior-Driven Development (BDD) as a means of automating the testing of interactive systems under construction. Automated testing helps to simulate the user's actions on the user interface and therefore to check whether the system behaves properly and in accordance with the Scenarios that describe the functional requirements. However, most tools supporting BDD require tests to be written using low-level events and components that only exist once the system is already implemented. As a consequence of such a low level of abstraction, BDD tests can hardly be reused with diverse artifacts and wi
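To make the coupling problem concrete, here is a hypothetical behave/Selenium step definition; the widget identifiers (txtUser, btnLogin) and the URL are invented. It shows how such tests end up depending on low-level components that only exist once the interface is implemented, which is the reuse limitation the abstract describes.

# Hypothetical BDD step definitions written with behave and Selenium.
# The step implementations refer to concrete widget ids and browser events,
# so they cannot be written or reused before the UI exists.
from behave import given, when, then
from selenium import webdriver
from selenium.webdriver.common.by import By

@given("the user is on the login page")
def step_open_login(context):
    context.browser = webdriver.Firefox()
    context.browser.get("http://localhost:8080/login")   # assumed test URL

@when("the user submits valid credentials")
def step_submit_credentials(context):
    # Implementation-level identifiers, invented for illustration.
    context.browser.find_element(By.ID, "txtUser").send_keys("alice")
    context.browser.find_element(By.ID, "txtPassword").send_keys("secret")
    context.browser.find_element(By.ID, "btnLogin").click()

@then("the dashboard is displayed")
def step_check_dashboard(context):
    assert "Dashboard" in context.browser.title
    context.browser.quit()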
Computational notebooks allow data scientists to express their ideas through a combination of code and documentation. However, data scientists often pay attention only to the code and neglect creating or updating their documentation during quick iterations, which leads to challenges in sharing their notebooks with others and with their future selves. Inspired by human documentation practices identified by analyzing 80 highly voted Kaggle notebooks, we design and implement Themisto, an automated documentation generation system, to explore the human-AI collaboration opportunity in the code documentation scenario. Themisto facilitates the creation of different types of documentation via three approaches: a deep-learning-based approach that generates documentation for source code (fully automated), a query-based approach that retrieves the online API documentation for source code (fully automated), and a user-prompt approach that motivates users to write more documentation (semi-automated). We evaluated Themisto in a within-subjects experiment with 24 data science practitioners and found that the automated documentation generation techniques reduced the time spent writing documentation, reminded participants to document code they would otherwise have ignored, and improved participants' satisfaction with their computational notebooks.
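As a rough, hypothetical illustration of the query-based idea only (not Themisto's actual pipeline), the sketch below parses a notebook cell's source, collects the functions it calls, and looks up whatever docstrings the standard library can provide.

# Toy version of "retrieve documentation for source code": parse a cell,
# find directly called functions, and print the first line of any docstring
# available via the standard library. Not Themisto's implementation.
import ast
import inspect
import builtins

cell_source = """
values = sorted([3, 1, 2])
total = sum(values)
print(total)
"""

def called_names(source: str):
    """Yield the names of functions called directly in the source."""
    tree = ast.parse(source)
    for node in ast.walk(tree):
        if isinstance(node, ast.Call) and isinstance(node.func, ast.Name):
            yield node.func.id

for name in sorted(set(called_names(cell_source))):
    obj = getattr(builtins, name, None)
    doc = inspect.getdoc(obj) if obj is not None else None
    first_line = doc.splitlines()[0] if doc else "(no documentation found)"
    print(f"{name}: {first_line}")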
Participatory design is a popular design technique that involves end users in the early stages of the design process in order to obtain user-friendly gestural interfaces. Guessability studies followed by agreement analyses are often used to elicit and comprehend the preferences (or gestures/proposals) of the participants. Previous approaches to assessing agreement grouped the gestures into equivalence classes and ignored the integral properties shared between them. In this work, we represent the gestures using binary description vectors to allow them to be partially similar. In this context, we introduce a new metric, referred to as the soft agreement rate (SAR), to quantify the level of consensus between the participants. In addition, we performed computational experiments to study the behavior of our partial agreement formula and show mathematically that existing agreement metrics are a special case of our approach. Our methodology was evaluated through a gesture elicitation study conducted with a group of neurosurgeons; nevertheless, our formulation can be applied to any other user-elicitation study. Results show that the level of agreement obtained by the SAR metric is 2.64 times higher than that of the existing metrics. In addition to the most agreed-upon gesture, the SAR formulation also provides the most agreed-upon descriptors, which can help designers arrive at a final gesture set.
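The abstract does not give the SAR formula, so the snippet below is only a loosely analogous sketch: it scores partial agreement between binary descriptor vectors using mean pairwise Jaccard similarity rather than all-or-nothing equivalence classes. The paper's actual SAR definition may differ.

# Illustrative soft-agreement computation over binary gesture descriptors.
# Rows are participants' proposals for one referent, columns are binary
# descriptors (e.g. one-handed, swipe, pinch, ...). Data are made up.
import numpy as np
from itertools import combinations

proposals = np.array([
    [1, 0, 1, 0, 1],
    [1, 0, 1, 1, 0],
    [1, 1, 1, 0, 1],
    [0, 0, 1, 0, 1],
])

def jaccard(a, b):
    # Overlap of shared descriptors relative to all descriptors used by either.
    inter = np.logical_and(a, b).sum()
    union = np.logical_or(a, b).sum()
    return inter / union if union else 1.0

pairs = combinations(range(len(proposals)), 2)
soft_agreement = np.mean([jaccard(proposals[i], proposals[j]) for i, j in pairs])
print(f"soft agreement for this referent: {soft_agreement:.3f}")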
The increasing use of social media sites in countries like India has given rise to large volumes of code-mixed data. Sentiment analysis of this data can provide integral insights into people's perspectives and opinions. Developing robust explainability techniques that explain why models make their predictions therefore becomes essential. In this paper, we propose a suitable methodology for integrating explainable approaches into code-mixed sentiment analysis.