
Comparison of pharmacist evaluation of medication orders with predictions of a machine learning model

Posted by Maxime Thibault
Publication date: 2020
Research field: Informatics Engineering
Paper language: English





The objective of this work was to assess the clinical performance of an unsupervised machine learning model aimed at identifying unusual medication orders and pharmacological profiles. We conducted a prospective study between April 2020 and August 2020 in which 25 clinical pharmacists dichotomously rated (typical or atypical) 12,471 medication orders and 1,356 pharmacological profiles. Based on the area under the precision-recall curve (AUPR), performance was poor for orders but satisfactory for profiles. Pharmacists considered the model a useful screening tool.
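As a rough illustration of the evaluation metric, the sketch below computes AUPR from dichotomous ratings and model anomaly scores with scikit-learn; the data, variable names, and score scale are placeholders, not values from the study.

```python
# Minimal sketch: AUPR from dichotomous pharmacist ratings and model
# anomaly scores. The data here are illustrative, not from the study.
import numpy as np
from sklearn.metrics import precision_recall_curve, auc

# 1 = rated atypical by the pharmacist, 0 = rated typical
ratings = np.array([0, 0, 1, 0, 1, 0, 0, 1, 0, 0])
# Higher score = the model flags the order/profile as more unusual
scores = np.array([0.1, 0.3, 0.8, 0.2, 0.6, 0.4, 0.1, 0.9, 0.2, 0.3])

precision, recall, _ = precision_recall_curve(ratings, scores)
print(f"AUPR: {auc(recall, precision):.3f}")
```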



Read also

Medication errors continue to be the leading cause of avoidable patient harm in hospitals. This paper sets out a framework to assure medication safety that combines machine learning and safety engineering methods. It uses safety analysis to proactively identify potential causes of medication error, based on expert opinion. As healthcare is now data rich, it is possible to augment safety analysis with machine learning to discover actual causes of medication error from the data, and to identify where they deviate from what was predicted in the safety analysis. Combining these two views has the potential to enable the risk of medication errors to be managed proactively and dynamically. We apply the framework to a case study involving thoracic surgery, e.g. oesophagectomy, where errors in giving beta-blockers can be critical to control atrial fibrillation. This case study combines a HAZOP-based safety analysis method known as SHARD with Bayesian network structure learning and process mining to produce the analysis results, showing the potential of the framework for ensuring patient safety, and for transforming the way that safety is managed in complex healthcare environments.
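The paper's data-driven side relies on Bayesian network structure learning and process mining; as a loose, much simpler stand-in for that idea, the sketch below screens pairs of event flags in hypothetical medication-administration records for statistical dependencies with chi-squared tests. The column names and data are invented for illustration and this is not the SHARD-based pipeline described above.

```python
# Simplified, hypothetical sketch: screen pairs of event flags in
# medication-administration records for statistical dependencies, as a
# loose stand-in for learning dependency structure from data. Column
# names and values are illustrative only.
from itertools import combinations

import pandas as pd
from scipy.stats import chi2_contingency

# Each row is one surgical episode; 1 = event occurred, 0 = it did not.
records = pd.DataFrame({
    "beta_blocker_omitted": [0, 1, 0, 0, 1, 0, 1, 0, 0, 1],
    "atrial_fibrillation":  [0, 1, 0, 0, 1, 0, 1, 0, 0, 1],
    "icu_readmission":      [0, 1, 0, 0, 0, 0, 0, 0, 0, 1],
})

# Flag event pairs whose co-occurrence is unlikely under independence.
for a, b in combinations(records.columns, 2):
    table = pd.crosstab(records[a], records[b])
    _, p_value, _, _ = chi2_contingency(table)
    flag = "  (possible dependency)" if p_value < 0.05 else ""
    print(f"{a} <-> {b}: p = {p_value:.3f}{flag}")
```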
With current and upcoming experiments such as WFIRST, Euclid and LSST, we can observe up to billions of galaxies. While such surveys cannot obtain spectra for all observed galaxies, they produce galaxy magnitudes in color filters. This data set behaves like a high-dimensional nonlinear surface, an excellent target for machine learning. In this work, we use a lightcone of semianalytic galaxies tuned to match CANDELS observations from Lu et al. (2014) to train a set of neural networks on a set of galaxy physical properties. We add realistic photometric noise and use trained neural networks to predict stellar masses and average star formation rates on real CANDELS galaxies, comparing our predictions to SED fitting results. On semianalytic galaxies, we are nearly competitive with template-fitting methods, with biases of $0.01$ dex for stellar mass, $0.09$ dex for star formation rate, and $0.04$ dex for metallicity. For the observed CANDELS data, our results are consistent with template fits on the same data at $0.15$ dex bias in $M_{\rm star}$ and $0.61$ dex bias in star formation rate. Some of the bias is driven by SED-fitting limitations, rather than limitations on the training set, and some is intrinsic to the neural network method. Further errors are likely caused by differences in noise properties between the semianalytic catalogs and data. Our results show that galaxy physical properties can in principle be measured with neural networks at a competitive degree of accuracy and precision to template-fitting methods.
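As a minimal sketch of the kind of mapping trained in this work, the snippet below regresses log stellar mass from noisy broadband magnitudes with a small neural network (scikit-learn's MLPRegressor). The synthetic photometry, number of filters, and network size are placeholder assumptions and do not reproduce the paper's setup.

```python
# Minimal sketch: regress log stellar mass from noisy broadband magnitudes
# with a small neural network. Synthetic data and network size are
# placeholders, not the configuration used in the paper.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_galaxies, n_filters = 5000, 8

# Fake photometry: magnitudes in n_filters bands, with log10(M*/Msun)
# loosely tied to overall brightness, plus Gaussian photometric noise.
mags = rng.uniform(20.0, 27.0, size=(n_galaxies, n_filters))
log_mstar = 11.5 - 0.3 * mags.mean(axis=1) + rng.normal(0.0, 0.1, n_galaxies)
mags_noisy = mags + rng.normal(0.0, 0.05, size=mags.shape)

X_train, X_test, y_train, y_test = train_test_split(
    mags_noisy, log_mstar, test_size=0.2, random_state=0
)
model = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=500, random_state=0)
model.fit(X_train, y_train)

bias = np.mean(model.predict(X_test) - y_test)  # bias in dex, as in the abstract
print(f"Bias: {bias:+.3f} dex")
```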
This graduate textbook on machine learning tells a story of how patterns in data support predictions and consequential actions. Starting with the foundations of decision making, we cover representation, optimization, and generalization as the constituents of supervised learning. A chapter on datasets as benchmarks examines their histories and scientific bases. Self-contained introductions to causality, the practice of causal inference, sequential decision making, and reinforcement learning equip the reader with concepts and tools to reason about actions and their consequences. Throughout, the text discusses historical context and societal impact. We invite readers from all backgrounds; some experience with probability, calculus, and linear algebra suffices.
Federated learning (FL) and split learning (SL) are state-of-the-art distributed machine learning techniques to enable machine learning training without accessing raw data on clients or end devices. However, their comparative training performance under real-world resource-restricted Internet of Things (IoT) device settings, e.g., Raspberry Pi, remains barely studied and, to our knowledge, has not yet been evaluated and compared, leaving practitioners without a convenient reference. This work firstly provides empirical comparisons of FL and SL in real-world IoT settings regarding (i) learning performance with heterogeneous data distributions and (ii) on-device execution overhead. Our analyses in this work demonstrate that the learning performance of SL is better than FL under an imbalanced data distribution but worse than FL under an extreme non-IID data distribution. Recently, FL and SL have been combined to form splitfed learning (SFL) to leverage each of their benefits (e.g., parallel training of FL and lightweight on-device computation requirement of SL). This work then considers FL, SL, and SFL, and mounts them on Raspberry Pi devices to evaluate their performance, including training time, communication overhead, power consumption, and memory usage. Besides evaluations, we apply two optimizations. Firstly, we generalize SFL by carefully examining the possibility of a hybrid type of model training at the server side. The generalized SFL merges sequential (dependent) and parallel (independent) processes of model training and is thus beneficial for a system with large-scale IoT devices, specifically at the server-side operations. Secondly, we propose pragmatic techniques to substantially reduce the communication overhead by up to four times for the SL and (generalized) SFL.
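For context on the FL side of the comparison, here is a minimal sketch of the federated-averaging aggregation step a server performs over client model updates (and that the parallel portion of SFL reuses). The layer shapes, client count, and weights are placeholders rather than any model from the paper.

```python
# Minimal sketch of federated averaging (FedAvg): the server combines
# client model weights, weighted by each client's local sample count.
# The client weights here are random placeholders, not a trained model.
import numpy as np

def fedavg(client_weights, client_sizes):
    """Weighted average of per-client parameter lists (one array per layer)."""
    total = sum(client_sizes)
    n_layers = len(client_weights[0])
    return [
        sum(w[layer] * (n / total) for w, n in zip(client_weights, client_sizes))
        for layer in range(n_layers)
    ]

# Three clients, each with a two-layer model and different local dataset sizes.
clients = [[np.random.randn(4, 2), np.random.randn(2)] for _ in range(3)]
sizes = [120, 300, 80]
global_model = fedavg(clients, sizes)
print([layer.shape for layer in global_model])
```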
Models based on neural networks and machine learning are seeing a rise in popularity in space physics. In particular, the forecasting of geomagnetic indices with neural network models is becoming a popular field of study. These models are evaluated with metrics such as the root-mean-square error (RMSE) and Pearson correlation coefficient. However, these classical metrics sometimes fail to capture crucial behavior. To show where the classical metrics are lacking, we trained a neural network, using a long short-term memory network, to make a forecast of the disturbance storm time index at origin time $t$ with a forecasting horizon of 1 up to 6 hours, trained on OMNIWeb data. Inspection of the model's results with the correlation coefficient and RMSE indicated a performance comparable to the latest publications. However, visual inspection showed that the predictions made by the neural network were behaving similarly to the persistence model. In this work, a new method is proposed to measure whether two time series are shifted in time with respect to each other, such as the persistence model output versus the observation. The new measure, based on Dynamic Time Warping, is capable of identifying results made by the persistence model and shows promising results in confirming the visual observations of the neural network's output. Finally, different methodologies for training the neural network are explored in order to remove the persistence behavior from the results.
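A from-scratch sketch of the idea behind such a measure: align the forecast with the observation using Dynamic Time Warping and check whether the optimal path sits at a consistent nonzero offset, the signature of a persistence-like forecast. The implementation and synthetic series below are illustrative assumptions, not the paper's exact metric.

```python
# Illustrative sketch: use a DTW alignment to detect whether a forecast is
# merely a time-shifted copy of the observation (persistence behaviour).
import numpy as np

def dtw_path(x, y):
    """Return the optimal DTW alignment path between two 1-D series."""
    n, m = len(x), len(y)
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = abs(x[i - 1] - y[j - 1])
            cost[i, j] = d + min(cost[i - 1, j - 1], cost[i - 1, j], cost[i, j - 1])
    # Backtrack from the end to recover the alignment path.
    path, i, j = [], n, m
    while i > 0 and j > 0:
        path.append((i - 1, j - 1))
        step = np.argmin([cost[i - 1, j - 1], cost[i - 1, j], cost[i, j - 1]])
        if step == 0:
            i, j = i - 1, j - 1
        elif step == 1:
            i -= 1
        else:
            j -= 1
    return path[::-1]

# Synthetic "storm index" observation and a persistence-like forecast that
# simply repeats the observation with a 3-step delay.
rng = np.random.default_rng(0)
t = np.arange(200)
obs = np.sin(t / 10.0) + 0.05 * rng.normal(size=t.size)
lag = 3
pred = np.empty_like(obs)
pred[:lag] = obs[0]
pred[lag:] = obs[:-lag]

# Mean lag of the forecast behind the observation along the optimal path:
# roughly `lag` for a persistence-like forecast, near 0 for a genuine one.
path = dtw_path(pred, obs)
mean_lag = np.mean([i - j for i, j in path])
print(f"Mean alignment lag: {mean_lag:+.2f} steps")
```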
