
Learning Insulin-Glucose Dynamics in the Wild

Submitted by Andrew Miller
Publication date: 2020
Paper language: English





We develop a new model of insulin-glucose dynamics for forecasting blood glucose in type 1 diabetics. We augment an existing biomedical model by introducing time-varying dynamics driven by a machine learning sequence model. Our model maintains a physiologically plausible inductive bias and clinically interpretable parameters, such as insulin sensitivity, while inheriting the flexibility of modern pattern recognition algorithms. Critical to modeling success is the flexible yet structured representation of subject variability provided by the sequence model. In contrast, less constrained models such as the LSTM fail to provide reliable or physiologically plausible forecasts. We conduct an extensive empirical study and show that allowing the biomedical model dynamics to vary in time improves forecasting at long time horizons, up to six hours, and produces forecasts consistent with the physiological effects of insulin and carbohydrates.
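A minimal sketch of the hybrid idea described in the abstract, assuming a simplified one-state glucose model whose insulin-sensitivity parameter is modulated over time by a GRU sequence model; the class name, dynamics, and constants below are illustrative assumptions, not the paper's actual equations.

```python
# Hybrid mechanistic + sequence model sketch (illustrative only).
import torch
import torch.nn as nn

class HybridGlucoseModel(nn.Module):
    def __init__(self, hidden_size=32):
        super().__init__()
        # Sequence model reads recent context (insulin dose, carbs, glucose)
        # and emits a time-varying multiplier on insulin sensitivity.
        self.gru = nn.GRU(input_size=3, hidden_size=hidden_size, batch_first=True)
        self.to_sensitivity = nn.Sequential(nn.Linear(hidden_size, 1), nn.Softplus())
        # Clinically interpretable baseline parameters (learned, kept positive).
        self.log_si = nn.Parameter(torch.log(torch.tensor(0.01)))  # insulin sensitivity
        self.log_kg = nn.Parameter(torch.log(torch.tensor(0.05)))  # glucose clearance

    def forward(self, context, g0, insulin, carbs, dt=5.0):
        """Roll the simplified dynamics forward with Euler steps; context is (B, T, 3)."""
        h, _ = self.gru(context)                                        # (B, T, H)
        si_t = self.log_si.exp() * self.to_sensitivity(h).squeeze(-1)   # time-varying sensitivity
        kg = self.log_kg.exp()
        g, preds = g0, []
        for t in range(context.shape[1]):
            dg = -kg * (g - 100.0) - si_t[:, t] * insulin[:, t] + 0.1 * carbs[:, t]
            g = g + dt * dg                                             # Euler integration
            preds.append(g)
        return torch.stack(preds, dim=1)                                # forecast (B, T)

# Usage sketch: forecast 6 hours at 5-minute resolution (72 steps).
model = HybridGlucoseModel()
B, T = 4, 72
ctx = torch.randn(B, T, 3)
g_hat = model(ctx, g0=torch.full((B,), 120.0),
              insulin=torch.rand(B, T), carbs=torch.rand(B, T))
```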




Read also

In this paper, we build a new, simple, and interpretable mathematical model to describe the human glucose-insulin system. Our ultimate goal is the robust control of the blood glucose (BG) level of individuals to a desired healthy range, by means of adjusting the amount of nutrition and/or external insulin appropriately. By constructing a simple yet flexible model class, with interpretable parameters, this general model can be specialized to work in different settings, such as type 2 diabetes mellitus (T2DM) and intensive care unit (ICU); different choices of appropriate model functions describing uptake of nutrition and removal of glucose differentiate between the models. In both cases, the available data is sparse and collected in clinical settings, major factors that have constrained our model choice to the simple form adopted. The model has the form of a linear stochastic differential equation (SDE) to describe the evolution of the BG level. The model includes a term quantifying glucose removal from the bloodstream through the regulation system of the human body, and another two terms representing the effect of nutrition and externally delivered insulin. The parameters entering the equation must be learned in a patient-specific fashion, leading to personalized models. We present numerical results on patient-specific parameter estimation and future BG level forecasting in T2DM and ICU settings. The resulting model leads to the prediction of the BG level as an expected value accompanied by a band around this value which accounts for uncertainties in the prediction. Such predictions, then, have the potential for use as part of control systems which are robust to model imperfections and noisy data. Finally, a comparison of the predictive capability of the model with two different models specifically built for T2DM and ICU contexts is also performed.
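A minimal sketch of how a linear SDE of this general shape can be simulated with Euler-Maruyama; the drift terms, parameter names, and values below are illustrative stand-ins for the paper's patient-specific model, not its exact form.

```python
# Euler-Maruyama simulation of a linear blood-glucose SDE (illustrative only).
import numpy as np

def simulate_bg(g0, gamma, g_basal, beta_n, beta_i, sigma, nutrition, insulin, dt=5.0, seed=0):
    """Simulate dG = [-gamma*(G - g_basal) + beta_n*N(t) - beta_i*I(t)] dt + sigma dW."""
    rng = np.random.default_rng(seed)
    g = np.empty(len(nutrition) + 1)
    g[0] = g0
    for t in range(len(nutrition)):
        drift = -gamma * (g[t] - g_basal) + beta_n * nutrition[t] - beta_i * insulin[t]
        g[t + 1] = g[t] + drift * dt + sigma * np.sqrt(dt) * rng.standard_normal()
    return g

# Usage: 24 hours at 5-minute resolution with one meal and one insulin dose;
# repeating the simulation with different seeds gives an uncertainty band.
steps = 288
nutrition = np.zeros(steps); nutrition[60:72] = 1.0   # meal
insulin = np.zeros(steps);   insulin[66:78] = 1.0     # insulin shortly after
traj = simulate_bg(g0=140.0, gamma=0.01, g_basal=100.0,
                   beta_n=0.5, beta_i=0.4, sigma=0.3,
                   nutrition=nutrition, insulin=insulin)
```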
Jean Feng, 2020
Machine learning algorithms in healthcare have the potential to continually learn from real-world data generated during healthcare delivery and adapt to dataset shifts. As such, the FDA is looking to design policies that can autonomously approve modifications to machine learning algorithms while maintaining or improving the safety and effectiveness of the deployed models. However, selecting a fixed approval strategy, a priori, can be difficult because its performance depends on the stationarity of the data and the quality of the proposed modifications. To this end, we investigate a learning-to-approve approach (L2A) that uses accumulating monitoring data to learn how to approve modifications. L2A defines a family of strategies that vary in their optimism, where more optimistic policies have faster approval rates, and searches over this family using an exponentially weighted average forecaster. To control the cumulative risk of the deployed model, we give L2A the option to abstain from making a prediction and incur some fixed abstention cost instead. We derive bounds on the average risk of the model deployed by L2A, assuming the distributional shifts are smooth. In simulation studies and empirical analyses, L2A tailors the level of optimism for each problem setting: it learns to abstain when performance drops are common and to approve beneficial modifications quickly when the distribution is stable.
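A minimal sketch of the exponentially weighted average (Hedge) forecaster that such an approach builds on, with a fixed-cost abstain option added as an extra expert; the toy losses and parameter choices are assumptions, not the paper's approval strategies or bounds.

```python
# Exponentially weighted average forecaster with an abstain expert (illustrative only).
import numpy as np

def exp_weighted_forecaster(expert_losses, abstain_cost=0.3, eta=1.0):
    """expert_losses: (T, K) array of per-round losses in [0, 1] for K approval strategies."""
    T, K = expert_losses.shape
    losses = np.column_stack([expert_losses, np.full(T, abstain_cost)])  # add abstain "expert"
    weights = np.ones(K + 1)
    forecaster_loss = 0.0
    for t in range(T):
        p = weights / weights.sum()                 # distribution over strategies
        forecaster_loss += float(p @ losses[t])     # expected loss of the randomized choice
        weights *= np.exp(-eta * losses[t])         # multiplicative (Hedge) update
    return forecaster_loss, weights / weights.sum()

# Usage: three strategies of increasing optimism under a stable distribution.
rng = np.random.default_rng(0)
toy_losses = rng.uniform(size=(200, 3)) * np.array([0.9, 0.6, 0.4])
total_loss, final_weights = exp_weighted_forecaster(toy_losses)
```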
The infrared spectroscopy and dynamics of -CO labels in wild type and mutant insulin monomer and dimer are characterized from molecular dynamics simulations using validated force fields. It is found that the spectroscopy of monomeric and dimeric forms in the region of the amide-I vibration differs for residues B24-B26 and D24-D26, which are involved in dimerization of the hormone. Also, the spectroscopic signatures change for mutations at position B24 from phenylalanine, which is conserved in many organisms and known to play a central role in insulin aggregation, to alanine or glycine. Three different methods to determine the frequency trajectories, namely solving the nuclear Schrodinger equation on an effective 1-dimensional potential energy curve, instantaneous normal modes, and parametrized frequency maps, lead to the same overall conclusions. The spectroscopic response of monomeric WT and mutant insulin differs from that of their respective dimers, and the spectroscopy of the two monomers in the dimer is also not identical. For the WT, F24A, and F24G monomers, spectroscopic shifts of $\sim 20$ cm$^{-1}$ are found for residues (B24 to B26) located at the dimerization interface. Although the crystal structure of the dimer is that of a symmetric homodimer, dynamically the two monomers are not equivalent on the nanosecond time scale. Together with earlier work on the thermodynamic stability of the WT and the same mutants, it is concluded that combining computational and experimental infrared spectroscopy provides a potentially powerful way to characterize the aggregation state and dimerization energy of modified insulins.
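A minimal sketch of the first of the three methods mentioned above: solving a 1-D nuclear Schrodinger equation on an effective potential energy curve by finite differences and reading off the fundamental transition energy. The Morse potential, mass, and dimensionless units are toy assumptions, not the CO amide-I parametrization used in the study.

```python
# Finite-difference solution of a 1-D Schrodinger equation (illustrative only).
import numpy as np
from scipy.linalg import eigh_tridiagonal

def fundamental_gap(potential, x, mass=1.0, hbar=1.0):
    """Return E1 - E0 for -hbar^2/(2m) d^2/dx^2 + V(x) discretized on a uniform grid."""
    dx = x[1] - x[0]
    v = potential(x)
    diag = hbar**2 / (mass * dx**2) + v                        # main diagonal of FD Hamiltonian
    off = -hbar**2 / (2.0 * mass * dx**2) * np.ones(len(x) - 1)
    energies = eigh_tridiagonal(diag, off, select='i', select_range=(0, 1))[0]
    return energies[1] - energies[0]                           # 0 -> 1 transition energy

# Usage: Morse potential as a stand-in for the effective stretch curve.
morse = lambda x: 5.0 * (1.0 - np.exp(-1.2 * (x - 1.0)))**2
grid = np.linspace(0.2, 4.0, 2000)
print(fundamental_gap(morse, grid, mass=10.0))
```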
In model-based reinforcement learning, the agent interleaves between model learning and planning. These two components are inextricably intertwined. If the model is not able to provide sensible long-term prediction, the executed planner would exploit model flaws, which can yield catastrophic failures. This paper focuses on building a model that reasons about the long-term future and demonstrates how to use this for efficient planning and exploration. To this end, we build a latent-variable autoregressive model by leveraging recent ideas in variational inference. We argue that forcing latent variables to carry future information through an auxiliary task substantially improves long-term predictions. Moreover, by planning in the latent space, the planner's solution is ensured to be within regions where the model is valid. An exploration strategy can be devised by searching for unlikely trajectories under the model. Our method achieves higher reward faster compared to baselines on a variety of tasks and environments in both the imitation learning and model-based reinforcement learning settings.
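A minimal sketch, under assumed architecture sizes, of a latent-variable autoregressive model trained with variational inference plus an auxiliary loss that forces each latent to predict a summary of future observations; it illustrates the idea rather than reproducing the paper's model.

```python
# Latent-variable sequence model with an auxiliary future-prediction loss (illustrative only).
import torch
import torch.nn as nn
import torch.nn.functional as F

class LatentSeqModel(nn.Module):
    def __init__(self, x_dim=8, z_dim=16, h_dim=64):
        super().__init__()
        self.rnn = nn.GRU(x_dim, h_dim, batch_first=True)
        self.enc = nn.Linear(h_dim, 2 * z_dim)   # q(z_t | x_{<=t}): mean and log-variance
        self.dec = nn.Linear(z_dim, x_dim)       # one-step prediction p(x_{t+1} | z_t)
        self.aux = nn.Linear(z_dim, x_dim)       # auxiliary head: predict a future summary

    def forward(self, x):
        h, _ = self.rnn(x)
        mu, logvar = self.enc(h).chunk(2, dim=-1)
        z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()   # reparameterization trick
        return self.dec(z), self.aux(z), mu, logvar

def loss_fn(model, x, horizon=5, beta=1.0, aux_weight=1.0):
    recon, aux_pred, mu, logvar = model(x[:, :-1])
    nll = F.mse_loss(recon, x[:, 1:])                          # one-step reconstruction
    kl = -0.5 * torch.mean(1 + logvar - mu.pow(2) - logvar.exp())
    # Auxiliary target: mean of the next `horizon` observations at each step.
    future = torch.stack([x[:, t + 1:t + 1 + horizon].mean(1)
                          for t in range(x.shape[1] - 1)], dim=1)
    aux = F.mse_loss(aux_pred, future)
    return nll + beta * kl + aux_weight * aux

# Usage on toy data.
model = LatentSeqModel()
x = torch.randn(4, 20, 8)
loss = loss_fn(model, x)
loss.backward()
```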
We propose a new approach for constructing synthetic pseudo-panel data from cross-sectional data. The pseudo panel, and the preferences it is intended to describe, is constructed at the individual level and is not affected by aggregation bias across cohorts. This is accomplished by creating a high-dimensional probabilistic model representation of the entire data set, which allows sampling from the probabilistic model in such a way that all of the intrinsic correlation properties of the original data are preserved. The key to this is the use of deep learning algorithms based on the Conditional Variational Autoencoder (CVAE) framework. From a modelling perspective, the concept of model-based resampling creates a number of opportunities in that data can be organized and constructed to serve very specific needs, of which the forming of heterogeneous pseudo panels represents one. The advantage, in that respect, is the ability to trade a serious aggregation bias (when aggregating into cohorts) for an unsystematic noise disturbance. Moreover, the approach makes it possible to explore high-dimensional sparse preference distributions and their linkage to individual-specific characteristics, which is not possible with traditional pseudo-panel methods. We use the presented approach to reveal the dynamics of transport preferences for a fixed pseudo panel of individuals based on a large Danish cross-sectional data set covering the period from 2006 to 2016. The model is also utilized to classify individuals into slow and fast movers with respect to the speed at which their preferences change over time. It is found that the prototypical fast mover is a young woman who lives alone in a large city, whereas the typical slow mover is a middle-aged man from a nuclear family with a high income who lives in a detached house outside a city.
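A minimal sketch of a Conditional VAE of the kind described above, where preferences are modeled conditional on individual characteristics and a fixed pseudo panel is resampled from the decoder; the dimensions and single-hidden-layer networks are illustrative assumptions, not the paper's architecture.

```python
# Conditional VAE for model-based resampling of a pseudo panel (illustrative only).
import torch
import torch.nn as nn
import torch.nn.functional as F

class CVAE(nn.Module):
    def __init__(self, x_dim=10, c_dim=6, z_dim=8, h_dim=64):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(x_dim + c_dim, h_dim), nn.ReLU(),
                                 nn.Linear(h_dim, 2 * z_dim))
        self.dec = nn.Sequential(nn.Linear(z_dim + c_dim, h_dim), nn.ReLU(),
                                 nn.Linear(h_dim, x_dim))
        self.z_dim = z_dim

    def forward(self, x, c):
        mu, logvar = self.enc(torch.cat([x, c], dim=-1)).chunk(2, dim=-1)
        z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()
        return self.dec(torch.cat([z, c], dim=-1)), mu, logvar

    def sample(self, c):
        """Draw synthetic preferences for fixed individuals c (the pseudo panel)."""
        z = torch.randn(c.shape[0], self.z_dim)
        return self.dec(torch.cat([z, c], dim=-1))

def elbo(model, x, c):
    recon, mu, logvar = model(x, c)
    nll = F.mse_loss(recon, x)
    kl = -0.5 * torch.mean(1 + logvar - mu.pow(2) - logvar.exp())
    return nll + kl

# Usage: fit on cross-sectional data, then resample the same individuals each wave.
model = CVAE()
x, c = torch.randn(256, 10), torch.randn(256, 6)
loss = elbo(model, x, c); loss.backward()
panel_wave = model.sample(c[:50])   # synthetic observations for a fixed pseudo panel
```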
