In biopharmaceutical manufacturing, fermentation processes play a critical role in productivity and profitability. A fermentation process uses living cells with complex biological mechanisms, which leads to high variability in the process outputs. Building on the biological mechanisms of protein and impurity growth, we introduce a stochastic model that characterizes the accumulation of protein and impurity levels during fermentation. However, a common challenge in industry is that only a very limited amount of data is available, especially during development and the early stages of production. This adds an additional layer of uncertainty, referred to as model risk, because the model parameters are difficult to estimate from limited data. In this paper, we study the harvesting decision for a fermentation process under model risk. In particular, we adopt a Bayesian approach to update the unknown parameters of the growth-rate distributions, and use the resulting posterior distributions to characterize the impact of model risk on fermentation output variability. The harvesting problem is formulated as a Markov decision process with knowledge states that summarize the posterior distributions and hence incorporate model risk into decision-making. The resulting model is solved with a reinforcement learning algorithm based on Bayesian sparse sampling. We provide analytical results on the structure of the optimal policy and its objective function, and explicitly study the impact of model risk on harvesting decisions. Our case studies at MSD Animal Health demonstrate that the proposed model and solution approach improve harvesting decisions in practice, achieving substantially higher average output per fermentation batch along with lower batch-to-batch variability.
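To make the pipeline concrete, the sketch below illustrates the two ingredients the abstract names: a conjugate Bayesian update of the posterior over an unknown growth-rate parameter (the "knowledge state") and a Bayesian sparse-sampling lookahead over the harvest-versus-continue decision. The additive growth dynamics, Normal prior, impurity model, reward function, and all parameter values are simplifying assumptions for illustration only; they are not the model or algorithm developed in the paper.

```python
# Illustrative sketch only: conjugate Bayesian updating of an unknown
# growth-rate mean plus a sparse-sampling lookahead for harvest/continue.
# Dynamics, priors, and rewards are assumed for illustration.
import numpy as np

rng = np.random.default_rng(0)

# Assumed known observation noise variance of a growth increment.
OBS_VAR = 0.05

def posterior_update(mean, var, observed_growth):
    """Conjugate Normal-Normal update after observing one growth increment."""
    precision = 1.0 / var + 1.0 / OBS_VAR
    new_var = 1.0 / precision
    new_mean = new_var * (mean / var + observed_growth / OBS_VAR)
    return new_mean, new_var

def harvest_reward(protein, impurity, impurity_penalty=2.0):
    """Assumed terminal reward: protein yield minus an impurity penalty."""
    return protein - impurity_penalty * impurity

def sparse_sampling_value(protein, impurity, kstate, depth, width=3, gamma=1.0):
    """Compare harvesting now against continuing, estimating the continuation
    value from a few posterior-predictive samples (Bayesian sparse sampling)."""
    harvest_now = harvest_reward(protein, impurity)
    if depth == 0:
        return harvest_now, "harvest"
    mean, var = kstate
    cont_values = []
    for _ in range(width):
        # Posterior-predictive draw of the next protein growth increment.
        growth = rng.normal(mean, np.sqrt(var + OBS_VAR))
        impurity_growth = abs(rng.normal(0.1, 0.05))  # assumed impurity dynamics
        next_kstate = posterior_update(mean, var, growth)
        v, _ = sparse_sampling_value(protein + growth,
                                     impurity + impurity_growth,
                                     next_kstate, depth - 1, width, gamma)
        cont_values.append(v)
    continue_value = gamma * float(np.mean(cont_values))
    if harvest_now >= continue_value:
        return harvest_now, "harvest"
    return continue_value, "continue"

# Example: current batch state with a diffuse prior on the growth rate.
value, action = sparse_sampling_value(protein=5.0, impurity=0.8,
                                      kstate=(0.5, 0.2), depth=3)
print(f"estimated value {value:.2f}, recommended action: {action}")
```

In this toy version, the knowledge state is just the posterior mean and variance of the growth rate; as more increments are observed, the posterior tightens and the lookahead weighs the expected extra protein against the accumulating impurity penalty, mirroring the trade-off the harvesting decision faces under model risk.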