
Towards Scalable Modeling of Biology in Event-B

Posted by: Muhammad Usman Sanwal
Publication date: 2021
Paper language: English





Biology offers many examples of large-scale, complex, concurrent systems: many processes take place in parallel, compete for resources, and influence each other's behavior. The scalable modeling of biological systems continues to be a very active field of research. In this paper we introduce a new approach based on Event-B, a state-based formal method with refinement as its central ingredient, which allows us to check model consistency step by step in an automated way. Our function-based approach leads to an elegant and concise modeling method. We demonstrate this approach by constructing what is, to our knowledge, the largest Event-B model built to date, describing the ErbB signaling pathway, a key evolutionary pathway with a significant role in development and in many types of cancer. The Event-B model for the ErbB pathway describes 1320 molecular reactions through 242 events.
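To give a rough feel for the modeling idea described above (a molecular reaction treated as a guarded event that updates a state function from species to quantities), here is a minimal Python sketch. It is an illustrative analogy only: the paper works in Event-B with refinement and machine-checked proof obligations, and the species names and update rule below are invented for illustration, not taken from the ErbB model.

```python
# Illustrative analogy of a reaction modeled as a guarded event:
# the state is a function (here a dict) from species to copy numbers,
# and an event may fire only when its guard holds, then updates the state.
# Species names and stoichiometry are hypothetical; the actual model in
# the paper is written in Event-B, not Python.

state = {"EGF": 5, "ErbB1": 3, "EGF_ErbB1": 0}   # hypothetical species

def binding_event(s):
    """Guarded event: EGF + ErbB1 -> EGF_ErbB1 (hypothetical reaction)."""
    if s["EGF"] > 0 and s["ErbB1"] > 0:          # guard
        s["EGF"] -= 1                             # action: consume reactants
        s["ErbB1"] -= 1
        s["EGF_ErbB1"] += 1                       # action: produce the complex
        return True
    return False

while binding_event(state):                       # fire while the guard holds
    print(state)
```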

قيم البحث

Read also

We present a new experimental-computational technology of inferring network models that predict the response of cells to perturbations and that may be useful in the design of combinatorial therapy against cancer. The experiments are systematic series of perturbations of cancer cell lines by targeted drugs, singly or in combination. The response to perturbation is measured in terms of levels of proteins and phospho-proteins and of cellular phenotype such as viability. Computational network models are derived de novo, i.e., without prior knowledge of signaling pathways, and are based on simple non-linear differential equations. The prohibitively large solution space of all possible network models is explored efficiently using a probabilistic algorithm, belief propagation, which is three orders of magnitude more efficient than Monte Carlo methods. Explicit executable models are derived for a set of perturbation experiments in Skmel-133 melanoma cell lines, which are resistant to the therapeutically important inhibition of Raf kinase. The resulting network models reproduce and extend known pathway biology. They can be applied to discover new molecular interactions and to predict the effect of novel drug perturbations, one of which is verified experimentally. The technology is suitable for application to larger systems in diverse areas of molecular biology.
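As a rough illustration of the model class mentioned above (coupled non-linear differential equations describing protein levels under drug perturbation), here is a small sketch. The two-node network, the tanh non-linearity, the parameter values, and the perturbation term are assumptions made for illustration; they are not the inferred Skmel-133 model, and the paper's models are inferred from data by belief propagation rather than written by hand.

```python
# Toy two-node perturbation-response network of one plausible form:
# dx_i/dt = tanh(sum_j W_ij x_j + u_i) - alpha_i * x_i.
# Weights, perturbation u, and node meanings are illustrative assumptions.
import numpy as np
from scipy.integrate import solve_ivp

W = np.array([[0.0, -1.2],    # node 2 inhibits node 1
              [0.8,  0.0]])   # node 1 activates node 2
alpha = np.array([1.0, 1.0])  # relaxation rates
u = np.array([-0.5, 0.0])     # constant drug perturbation applied to node 1

def rhs(t, x):
    return np.tanh(W @ x + u) - alpha * x

sol = solve_ivp(rhs, (0.0, 20.0), y0=[0.1, 0.1], t_eval=np.linspace(0, 20, 5))
print(sol.y[:, -1])  # near-steady-state levels under the perturbation
```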
Randomness is an unavoidable feature of the intracellular environment, because chemical reactants are present in low copy numbers. That phenomenon, predicted by Delbruck long ago [delbruck40], has been detected in both prokaryotic [elowitz02, cai06] and eukaryotic [blake03] cells after the development of fluorescence techniques. On the other hand, developing organisms, e.g. D. melanogaster, exhibit strikingly precise spatio-temporal patterns of protein/mRNA concentrations [gregor07b, manu09a, manu09b, boettiger09]. These two characteristics of living organisms are in apparent contradiction: the precise patterns of protein concentrations are the result of multiple mutually interacting random chemical reactions. The main question is to establish biochemical mechanisms for coupling random reactions so that canalization, i.e. fluctuation reduction instead of amplification, takes place. Here we explore a model for coupling two stochastic processes in which the noise of the combined process can be smaller than that of the isolated ones. Such canalization occurs if, and only if, there is negative covariance between the random variables of the model. Our results are obtained in the framework of a master equation for a negatively self-regulated -- or externally regulated -- binary gene, and show that the precise control due to negative self-regulation [becskei00] arises because it can generate negative covariance. Our results suggest that negative covariance, in the coupling of random chemical reactions, is a theoretical mechanism underlying the precision of developmental processes.
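The role of negative covariance can be made concrete with the elementary variance identity for the combined (summed) process; this is a standard identity, stated here only to spell out the condition named in the abstract above.

```latex
% Variance of the combined process X + Y:
\mathrm{Var}(X+Y) \;=\; \mathrm{Var}(X) + \mathrm{Var}(Y) + 2\,\mathrm{Cov}(X,Y)
% so the combined noise falls below the sum of the isolated noises
% exactly when \mathrm{Cov}(X,Y) < 0, i.e. under negative covariance.
```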
Innovation in synthetic biology often still depends on large-scale experimental trial and error, domain expertise, and ingenuity. The application of rational design engineering methods promises to make this more efficient, faster, cheaper, and safer. But this requires mathematical models of cellular systems, and for these models we then have to determine whether they can meet our intended target behaviour. Here we develop two complementary approaches that allow us to determine whether a given molecular circuit, represented by a mathematical model, is capable of fulfilling our design objectives. We discuss algebraic methods that can identify general principles guaranteeing desired behaviour, and we provide an overview of Bayesian design approaches that allow us to choose, from a set of candidate models, the one with the highest probability of fulfilling our design objectives. We discuss their use in the context of biochemical adaptation, and then consider how robustness can and should affect our design approach.
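Schematically, the Bayesian design step described above amounts to ranking candidate circuit models by their posterior probability of meeting the objective; the formula below is just Bayes' rule with notation introduced here for illustration, not the specific scheme used in the paper.

```latex
% Posterior probability of candidate model M_i given objective/data D,
% and the model chosen by the Bayesian design step:
P(M_i \mid D) \;=\; \frac{P(D \mid M_i)\, P(M_i)}{\sum_j P(D \mid M_j)\, P(M_j)},
\qquad
M^{*} \;=\; \arg\max_i \, P(M_i \mid D).
```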
Synthetic biology aims at designing modular genetic circuits that can be assembled according to the desired function. When embedded in a cell, a circuit module becomes a small subnetwork within a larger environmental network, and its dynamics is therefore affected by potentially unknown interactions with the environment. It is well-known that the presence of the environment not only causes extrinsic noise but also memory effects, which means that the dynamics of the subnetwork is affected by its past states via a memory function that is characteristic of the environment. We study several generic scenarios for the coupling between a small module and a larger environment, with the environment consisting of a chain of mono-molecular reactions. By mapping the dynamics of this coupled system onto random walks, we are able to give exact analytical expressions for the arising memory functions. Hence, our results give insights into the possible types of memory functions and thereby help to better predict subnetwork dynamics.
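In generic terms, integrating out such an environment turns the subnetwork dynamics into an integro-differential equation with a memory kernel. The schematic form below (with notation introduced here) is only meant to indicate what a "memory function" refers to; it is an assumed generic form, not the paper's exact result for the mono-molecular chain.

```latex
% Schematic reduced dynamics of a subnetwork variable x(t) after the
% environmental chain has been integrated out (illustrative form):
\frac{dx}{dt} \;=\; f\bigl(x(t)\bigr) \;+\; \int_{0}^{t} M(t-t')\, x(t')\, dt' \;+\; \eta(t)
% where M is the memory function imposed by the environment and
% \eta(t) is the extrinsic noise the environment generates.
```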
Synthetic biology brings together concepts and techniques from engineering and biology. In this field, computer-aided design (CAD) is necessary in order to bridge the gap between computational modeling and biological data. An application named TinkerCell has been created to serve as a CAD tool for synthetic biology. TinkerCell is a visual modeling tool that supports a hierarchy of biological parts. Each part in this hierarchy consists of a set of attributes that define the part, such as sequence or rate constants. Models constructed from these parts can be analyzed using various C and Python programs that are hosted by TinkerCell via an extensive C and Python API. TinkerCell supports the notion of modules, which are networks with interfaces. Such modules can be connected to each other, forming larger modular networks. Because TinkerCell associates the parameters and equations in a model with their respective parts, parts can be loaded from databases along with their parameters and rate equations. The modular network design can be used to exchange modules as well as to test the concept of modularity in biological systems. The flexible modeling framework, together with the C and Python API, allows TinkerCell to serve as a host for numerous third-party algorithms. TinkerCell is a free and open-source project under the Berkeley Software Distribution license. Downloads, documentation, and tutorials are available at www.tinkercell.com.