
ricu: R's Interface to Intensive Care Data

Published by Nicolas Bennett
Publication date: 2021
Research language: English





Providing computational infrastructure for handling diverse intensive care unit (ICU) datasets, the R package ricu enables writing dataset-agnostic analysis code, thereby facilitating multi-center training and validation of machine learning models. The package is designed with an emphasis on extensibility, both to new datasets and to new clinical data concepts, and currently supports loading around 100 patient variables corresponding to a total of 319,402 ICU admissions from 4 data sources collected in Europe and the United States. By allowing for the addition of user-specified medical concepts and data sources, ricu aims to foster robust, data-based intensive care research, enabling users to externally validate their methods or conclusions with relative ease and in turn facilitating reproducible, and therefore transparent, work in this field.
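As an illustration of the dataset-agnostic workflow described above, the minimal sketch below loads the same clinical concept from two of the supported demo data sources. It assumes the demo dataset packages (mimic.demo and eicu.demo) have been installed as described in the package documentation; the concept name "hr" (heart rate) comes from ricu's default concept dictionary.

```r
# Minimal sketch: load one concept from two data sources with identical code.
# Assumes the demo dataset packages mimic.demo and eicu.demo are installed.
library(ricu)

# The same concept name resolves against different data sources, so the
# downstream analysis code does not change when the dataset does.
hr_mimic <- load_concepts("hr", "mimic_demo", verbose = FALSE)
hr_eicu  <- load_concepts("hr", "eicu_demo", verbose = FALSE)

head(hr_mimic)
```

Because both calls return data in the same concept-aligned format, a model trained on one source can be validated on another without source-specific glue code.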


Read also

Waveform physiological data is important in the treatment of critically ill patients in the intensive care unit. Such recordings are susceptible to artefacts, which must be removed before the data can be re-used for alerting or reprocessed for other clinical or research purposes. Accurate removal of artefacts reduces bias and uncertainty in clinical assessment, as well as the false positive rate of intensive care unit alarms, and is therefore a key component in providing optimal clinical care. In this work, we present DeepClean, a prototype self-supervised artefact detection system using a convolutional variational autoencoder deep neural network that avoids costly and painstaking manual annotation, requiring only easily obtained good data for training. For a test case with invasive arterial blood pressure, we demonstrate that our algorithm can detect the presence of an artefact within a 10-second sample of data with sensitivity and specificity around 90%. Furthermore, DeepClean was able to identify regions of artefact within such samples with high accuracy, and we show that it significantly outperforms a baseline principal component analysis approach in both signal reconstruction and artefact detection. DeepClean learns a generative model and may therefore also be used for imputation of missing data.
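While DeepClean itself is a deep network, the principal component analysis baseline it is compared against can be sketched in a few lines of R: fit PCA on artefact-free windows only, then flag new windows whose reconstruction error is unusually high. The window length, component count, and threshold below are illustrative assumptions, not values from the paper.

```r
# Hedged sketch of a PCA reconstruction-error artefact detector (the paper's
# baseline). All sizes and thresholds here are illustrative assumptions.
set.seed(1)

# Toy data: each row is one 10-second window of a signal.
clean <- matrix(rnorm(200 * 125), nrow = 200)              # good data only
test  <- rbind(matrix(rnorm(10 * 125), nrow = 10),         # clean windows
               matrix(rnorm(10 * 125, sd = 4), nrow = 10)) # artefactual ones

# Fit PCA on the clean windows and keep a few leading components.
pca <- prcomp(clean, center = TRUE, scale. = FALSE)
k   <- 10
reconstruct <- function(x) {
  proj <- scale(x, center = pca$center, scale = FALSE) %*% pca$rotation[, 1:k]
  sweep(proj %*% t(pca$rotation[, 1:k]), 2, pca$center, `+`)
}

# Calibrate an error threshold on the clean training windows, then flag any
# test window whose reconstruction error exceeds it.
train_err   <- rowMeans((clean - reconstruct(clean))^2)
thresh      <- quantile(train_err, 0.99)
is_artefact <- rowMeans((test - reconstruct(test))^2) > thresh
```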
In this paper, we describe a Graphical User Interface (GUI) designed to manage large quantities of image data of a biological system. After setting the design requirements for the system, we developed an ecology quantification GUI that assists biologists in analysing data. We focus on the main features of the interface, and we present the results and an evaluation of the system. Finally, we provide some directions for future work.
With the development of Internet of Things (IoT) and Artificial Intelligence (AI) technologies, human activity recognition has enabled various applications, such as smart homes and assisted living. In this paper, we target a new healthcare application of human activity recognition: early mobility recognition for Intensive Care Unit (ICU) patients. Early mobility is essential for ICU patients, who suffer from prolonged immobilization. Our system includes accelerometer-based data collection from ICU patients and an AI model to recognize patients' early mobility. To improve the model's accuracy and stability, we identify features that are insensitive to sensor orientation and propose a segment voting process that leverages a majority voting strategy to recognize each segment's activity. Our results show that our system improves model accuracy from 77.78% to 81.86% and reduces model instability (standard deviation) from 16.69% to 6.92%, compared to the same AI model without our feature engineering and segment voting process.
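The abstract does not spell out its feature set here, but one standard orientation-insensitive quantity is the magnitude of the acceleration vector, and the segment voting step amounts to a majority vote over per-window predictions. The sketch below combines the two; the window size and the stand-in classifier classify_window() are hypothetical choices, not the paper's model.

```r
# Illustrative sketch: orientation-insensitive feature plus segment voting.
# The window size and classify_window() are hypothetical stand-ins.

# Toy tri-axial accelerometer segment: one row per sample.
acc <- matrix(rnorm(3 * 500), ncol = 3,
              dimnames = list(NULL, c("x", "y", "z")))

# The vector magnitude is invariant to how the sensor is oriented.
mag <- sqrt(rowSums(acc^2))

# Split the segment into fixed-size windows and classify each window.
windows <- split(mag, ceiling(seq_along(mag) / 100))
classify_window <- function(w) if (mean(w) > 1.5) "active" else "resting"
votes <- vapply(windows, classify_window, character(1))

# A majority vote over the window predictions labels the whole segment.
segment_label <- names(which.max(table(votes)))
```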
We consider the problem of evaluating the quality of startup companies. This can be quite challenging due to the rarity of successful startup companies and the complexity of the factors which impact such success. In this work we collect data on tens of thousands of startup companies, their performance, the backgrounds of their founders, and their investors. We develop a novel model for the success of a startup company based on the first passage time of a Brownian motion. The drift and diffusion of the Brownian motion associated with a startup company are a function of features based on its sector, founders, and initial investors. All features are calculated using our massive dataset. Using a Bayesian approach, we are able to obtain quantitative insights about the features of successful startup companies from our model. To test the performance of our model, we use it to build a portfolio of companies where the goal is to maximize the probability of having at least one company achieve an exit (IPO or acquisition), which we refer to as winning. This $\textit{picking winners}$ framework is very general and can be used to model many problems with low-probability, high-reward outcomes, such as pharmaceutical companies choosing drugs to develop or studios selecting movies to produce. We frame the construction of a picking winners portfolio as a combinatorial optimization problem and show that a greedy solution has strong performance guarantees. We apply the picking winners framework to the problem of choosing a portfolio of startup companies. Using our model for the exit probabilities, we are able to construct out-of-sample portfolios which achieve exit rates as high as 60%, nearly double that of top venture capital firms.
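For context, the first-passage machinery this model builds on is standard; in the notation below (illustrative, not necessarily the paper's), a company is a Brownian motion $X_t = \mu t + \sigma W_t$ with feature-dependent drift and diffusion, and success means hitting a barrier $b > 0$. The independence assumption in the portfolio objective is a simplification for the sketch.

```latex
% First-passage time density (inverse Gaussian) and hitting probability for
% X_t = \mu t + \sigma W_t started at 0, barrier b > 0:
f_{\tau_b}(t) = \frac{b}{\sigma\sqrt{2\pi t^3}}
  \exp\!\left(-\frac{(b - \mu t)^2}{2\sigma^2 t}\right),
\qquad
\Pr(\tau_b < \infty) =
\begin{cases}
  1, & \mu \ge 0,\\[2pt]
  e^{2\mu b/\sigma^2}, & \mu < 0.
\end{cases}

% Assuming independent exits with probabilities p_i, the picking-winners
% objective for a portfolio S is
\Pr(\text{win}) = 1 - \prod_{i \in S} (1 - p_i),
% which is monotone submodular, so greedy selection under a cardinality
% constraint attains at least a (1 - 1/e) fraction of the optimum, the
% standard form of the guarantee the abstract alludes to.
```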
Due to recent technological advances, large brain imaging data sets can now be collected. Such data are highly complex, so extraction of meaningful information from them remains challenging. Thus, there is an urgent need for statistical procedures that are computationally scalable and can provide accurate estimates that capture the neuronal structures and their functionalities. We propose a fast method for estimating the fiber orientation distribution (FOD) based on diffusion MRI data. This method models the observed dMRI signal at any voxel as a convolved and noisy version of the underlying FOD, and utilizes the spherical harmonics basis for representing the FOD, where the spherical harmonic coefficients are adaptively and nonlinearly shrunk by using a James-Stein type estimator. To further improve the estimation accuracy by enhancing the localized peaks of the FOD, a super-resolution sharpening process is then applied as a second step. The resulting estimated FODs can be fed to a fiber tracking algorithm to reconstruct the white matter fiber tracts. We illustrate the overall methodology using both synthetic data and data from the Human Connectome Project.
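In generic notation (illustrative, not necessarily the paper's exact formulation), the construction described above can be written as follows; the shrinkage shown is the classical positive-part James-Stein form, whereas the paper's estimator is adaptive and nonlinear.

```latex
% Observed dMRI signal as a convolved, noisy version of the FOD f, with f
% expanded in real spherical harmonics Y_{lm}:
y(\theta) = (R * f)(\theta) + \varepsilon(\theta),
\qquad
f(\theta) = \sum_{l,m} c_{lm} \, Y_{lm}(\theta).

% Classical positive-part James-Stein shrinkage of the empirical coefficient
% vector \hat{c} (p coefficients, estimated noise level \hat\sigma^2):
\hat{c}^{\mathrm{JS}} =
  \left(1 - \frac{(p - 2)\,\hat\sigma^2}{\lVert \hat{c} \rVert^2}\right)_{\!+}
  \hat{c},
\qquad (x)_+ = \max(x, 0).

% A super-resolution sharpening step is then applied to the estimated FOD
% before fiber tracking.
```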
