
Effects of Voice-Based Synthetic Assistant on Performance of Emergency Care Provider in Training

Added by Praveen Damacharla
Publication date: 2020
Language: English





As part of an ongoing project, our team is actively developing new synthetic assistant (SA) technologies to assist in training combat medics and medical first responders. It is critical that medical first responders are well trained to deal with emergencies effectively, which requires real-time monitoring of, and feedback for, each trainee. We therefore introduced a voice-based SA to augment the training of medical first responders and enhance their performance in the field. The potential benefits of SAs include reduced training costs and enhanced monitoring mechanisms. Despite the growing use of voice-based personal assistants (PAs) in day-to-day life, their human-factors effects remain largely unstudied. This paper therefore focuses on a performance analysis of the developed voice-based SA in emergency care provider training for a selected emergency treatment scenario. The research follows a design-science methodology; we discuss the architecture and development of the voice-based SA at length and present working results. Empirical testing was conducted as a user study on two groups, one trained with conventional methods and the other with the help of the SA, and the results were analyzed with statistical tools. The statistical results demonstrate improved training efficacy and performance of medical responders supported by the SA. The paper also discusses accuracy and time of task execution (t) and concludes with guidelines for resolving the identified problems.
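The two-group comparison described above (conventional training vs. SA-assisted training, measured by time of task execution t) can be sketched as a two-sample Welch t-test. The completion times below are invented for illustration and are not the paper's data:

```python
# Illustrative sketch (not the paper's actual data): Welch's t-test
# comparing task-execution times t for a conventionally trained group
# vs. a group trained with the voice-based SA.
from statistics import mean, variance

def welch_t(a, b):
    """Welch's t statistic for two independent samples."""
    va, vb = variance(a), variance(b)  # sample variances
    na, nb = len(a), len(b)
    return (mean(a) - mean(b)) / ((va / na + vb / nb) ** 0.5)

# Hypothetical task-completion times in seconds.
conventional = [182, 190, 175, 201, 188, 195]
sa_assisted  = [160, 171, 158, 169, 165, 172]

t_stat = welch_t(conventional, sa_assisted)
print(f"Welch t = {t_stat:.2f}")  # positive t: SA group finished faster
```

A large positive t here would support the claim that SA-assisted trainees complete the task faster; the paper's actual analysis may use a different test.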




Related research

Yuanbang Li (2021)
With the widespread use of mobile phones, users can share their location and activity anytime, anywhere, in the form of check-in data. These data reflect user features that are stable over the long term, and a set of shared user features can be abstracted as a user role. The role is closely related to the user's social background, occupation, and living habits. This study makes four main contributions. First, user feature models from different views are constructed for each user from an analysis of check-in data. Second, the K-Means algorithm is used to discover user roles from the user features. Third, a reinforcement-learning algorithm is proposed to strengthen the clustering of user roles and improve the stability of the clustering result. Finally, experiments verify the validity of the method, and the results show its effectiveness.
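The role-discovery step above (clustering per-user check-in features with K-Means) can be sketched with a minimal stdlib implementation. The feature vectors, feature choice, and k are illustrative assumptions, not values from the paper:

```python
# Minimal K-Means sketch of role discovery: cluster per-user check-in
# feature vectors into k "roles". Data and k are invented for illustration.
import random

def kmeans(points, k, iters=20, seed=0):
    rng = random.Random(seed)
    centers = rng.sample(points, k)          # initial centers from the data
    clusters = [[] for _ in range(k)]
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:                     # assign each point to nearest center
            i = min(range(k),
                    key=lambda c: sum((a - b) ** 2 for a, b in zip(p, centers[c])))
            clusters[i].append(p)
        centers = [                          # recompute centers (keep old if empty)
            tuple(sum(vals) / len(vals) for vals in zip(*cl)) if cl else centers[i]
            for i, cl in enumerate(clusters)
        ]
    return centers, clusters

# Each vector: (weekday check-ins, weekend check-ins, night check-ins).
users = [(9, 1, 0), (8, 2, 1), (1, 8, 6), (2, 9, 7), (9, 2, 1), (1, 7, 5)]
centers, roles = kmeans(users, k=2)
print([len(r) for r in roles])
```

Each resulting cluster center can then be read as an abstracted "role" (e.g. weekday-commuter vs. weekend-active); the paper additionally stabilizes this clustering with reinforcement learning, which is not sketched here.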
Trauma mortality results from a multitude of non-linear, dependent risk factors, including patient demographics, injury characteristics, the medical care provided, and characteristics of medical facilities; yet traditional approaches have attempted to capture these relationships using rigid regression models. We hypothesized that a transfer-learning-based machine learning algorithm could deeply understand a trauma patient's condition and accurately identify individuals at high risk of mortality without relying on restrictive regression-model criteria. Anonymous patient-visit data were obtained from the years 2007-2014 of the National Trauma Data Bank. Patients with incomplete vitals, unknown outcome, or missing demographic data were excluded. All patient visits occurred in U.S. hospitals, and of the 2,007,485 encounters retrospectively examined, 8,198 resulted in mortality (0.4%). The machine-intelligence model was evaluated on its sensitivity, specificity, positive and negative predictive value, and Matthews Correlation Coefficient. Our model achieved similar performance in age-specific comparison models and generalized well when applied to all ages simultaneously. While testing for confounding factors, we discovered that excluding fall-related injuries boosted performance for adult trauma patients but reduced it for children. The machine-intelligence model described here performs comparably to contemporary machine-intelligence models without requiring restrictive regression-model criteria or extensive medical expertise.
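The Matthews Correlation Coefficient named among the evaluation metrics above is a useful choice for such a heavily imbalanced outcome (0.4% mortality). A sketch of its computation, with made-up confusion-matrix counts rather than the study's results:

```python
# Matthews Correlation Coefficient from confusion-matrix counts.
# The counts below are invented for illustration, not the study's results.
import math

def mcc(tp, tn, fp, fn):
    """MCC = (TP*TN - FP*FN) / sqrt((TP+FP)(TP+FN)(TN+FP)(TN+FN))."""
    denom = math.sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    return (tp * tn - fp * fn) / denom if denom else 0.0

print(round(mcc(tp=90, tn=950, fp=50, fn=10), 3))  # → 0.733
```

Unlike raw accuracy, MCC stays near zero for a classifier that simply predicts the majority class, which is why it is informative when positives are rare.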
Dysfluencies and variations in speech pronunciation can severely degrade speech-recognition performance, and for many individuals with moderate-to-severe speech disorders, voice-operated systems do not work. Current speech-recognition systems are trained primarily on data from fluent speakers and consequently do not generalize well to speech with dysfluencies such as sound or word repetitions, sound prolongations, or audible blocks. This work focuses on quantitative analysis of a consumer speech-recognition system on individuals who stutter and on production-oriented approaches to improving performance for common voice-assistant tasks (e.g., "what is the weather?"). At baseline, this system introduces a significant number of insertion and substitution errors, resulting in intended-speech Word Error Rates (isWER) that are 13.64% worse (absolute) for individuals with fluency disorders. We show that simply tuning the decoding parameters of an existing hybrid speech-recognition system improves isWER by 24% (relative) for individuals with fluency disorders. Tuning these parameters also translates to 3.6% better domain recognition and 1.7% better intent recognition relative to the default setup for the 18 study participants across all stuttering severities.
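The Word Error Rate underlying the isWER numbers above counts substitutions, insertions, and deletions against the reference transcript via edit distance. A minimal sketch, with invented sentences (the repeated fragment mimics a dysfluency showing up as an insertion error):

```python
# Word Error Rate via the standard edit-distance dynamic program:
# WER = (substitutions + insertions + deletions) / reference word count.
def wer(reference, hypothesis):
    r, h = reference.split(), hypothesis.split()
    d = [[0] * (len(h) + 1) for _ in range(len(r) + 1)]
    for i in range(len(r) + 1):
        d[i][0] = i                       # delete all reference words
    for j in range(len(h) + 1):
        d[0][j] = j                       # insert all hypothesis words
    for i in range(1, len(r) + 1):
        for j in range(1, len(h) + 1):
            cost = 0 if r[i - 1] == h[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,         # deletion
                          d[i][j - 1] + 1,         # insertion
                          d[i - 1][j - 1] + cost)  # match / substitution
    return d[len(r)][len(h)] / len(r)

# A repetition ("wha what") surfaces as one insertion error.
print(wer("what is the weather", "wha what is the weather"))  # → 0.25
```

isWER differs from plain WER in scoring against the speaker's *intended* words, so a faithfully transcribed repetition would not be penalized; the edit-distance core is the same.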
The need to precisely estimate a student's academic performance has been emphasized with the increasing attention paid to Intelligent Tutoring Systems (ITS). However, since labels for academic performance, such as test scores, are collected outside the ITS, obtaining them is costly, leading to a label-scarcity problem that makes machine-learning approaches to academic performance prediction challenging. To this end, inspired by recent advances in pre-training methods in the natural-language-processing community, we propose DPA, a transfer-learning framework with Discriminative Pre-training tasks for Academic performance prediction. DPA pre-trains two models, a generator and a discriminator, and fine-tunes the discriminator on academic performance prediction. In DPA's pre-training phase, a sequence of interactions in which some tokens are masked is provided to the generator, which is trained to reconstruct the original sequence. The discriminator then takes an interaction sequence in which the masked tokens are replaced by the generator's outputs, and is trained to predict, for every token in the sequence, whether it is original or replaced. Compared to the previous state-of-the-art generative pre-training method, DPA is more sample-efficient, converging faster to a lower academic-performance-prediction error. We conduct extensive experiments on a real-world dataset obtained from a multi-platform ITS application and show that DPA outperforms the previous state-of-the-art generative pre-training method, with a 4.05% reduction in mean absolute error, and is more robust to increased label scarcity.
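The mask-replace-detect setup above can be sketched at the data level: mask some interaction tokens, fill them with a generator's guesses, and label each position original (1) or replaced (0) as the discriminator's target. The toy "generator" here just emits the most frequent token; DPA's generator and discriminator are learned models, and the token names are invented:

```python
# Sketch of building one discriminative pre-training example:
# corrupt masked positions with a generator's output, then label every
# token as original (1) or replaced (0). Toy generator: most frequent token.
from collections import Counter

def make_discriminator_example(tokens, mask_positions):
    most_common = Counter(tokens).most_common(1)[0][0]   # stand-in generator
    corrupted = list(tokens)
    for p in mask_positions:
        corrupted[p] = most_common        # generator's reconstruction
    labels = [1 if corrupted[i] == tokens[i] else 0      # 1 = original kept
              for i in range(len(tokens))]
    return corrupted, labels

seq = ["q1_correct", "q2_wrong", "q3_correct", "q4_correct"]
corrupted, labels = make_discriminator_example(seq, mask_positions=[1])
print(corrupted, labels)
```

Note that when the generator happens to reproduce the original token, the label stays 1, which is exactly why the discriminator's task is "original vs. replaced" rather than "masked vs. unmasked".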
Voice assistants have become quite popular lately and are an important part of smart-home systems. Through their voice assistants, users can perform various tasks, control other devices, and enjoy third-party services. The assistants are part of a wider ecosystem. Their function relies on the user's voice commands, received through original voice-assistant devices or companion applications for smartphones and tablets, which are then sent over the internet to the vendor's cloud services and translated into commands. These commands are then transferred to other applications and services. As this huge volume of data, mainly the user's personal data, moves around the voice-assistant ecosystem, there are several places where personal data is temporarily or permanently stored, making it easy for a cyber attacker to tamper with it and raising major privacy issues. In our work we present the types and locations of such personal-data artifacts within the ecosystems of three popular voice assistants, after setting up our own testbed and using IoT forensic procedures. Our privacy evaluation includes the assistants' companion apps, as we also compare the permissions they require before installation on an Android device.
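The permission comparison mentioned above can be sketched as simple set arithmetic over each companion app's declared Android permissions. The app names and permission lists below are invented placeholders, not the study's findings:

```python
# Sketch: compare permission sets declared in each companion app's
# AndroidManifest.xml. Apps and permissions here are illustrative only.
perms = {
    "assistant_a_app": {"RECORD_AUDIO", "INTERNET", "ACCESS_FINE_LOCATION"},
    "assistant_b_app": {"RECORD_AUDIO", "INTERNET", "READ_CONTACTS"},
}

common = set.intersection(*perms.values())   # permissions every app requests
for app, p in sorted(perms.items()):
    print(app, "extra:", sorted(p - common)) # permissions unique to this app
```

The permissions unique to one app are the interesting ones for a privacy review, since they indicate data access the assistant's function does not obviously require.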
