Detailed modeling and simulation of biochemical systems is complicated by the problem of combinatorial complexity, an explosion in the number of species and reactions due to myriad protein-protein interactions and post-translational modifications. Rule-based modeling overcomes this problem by representing molecules as structured objects and encoding their interactions as pattern-based rules. This greatly simplifies the process of model specification, avoiding the tedious and error-prone task of manually enumerating all species and reactions that can potentially exist in a system. From a simulation perspective, rule-based models can be expanded algorithmically into fully enumerated reaction networks and simulated using a variety of network-based simulation methods, provided that the network is not exceedingly large. Alternatively, rule-based models can be simulated directly using particle-based kinetic Monte Carlo methods. This network-free approach produces exact stochastic trajectories with a computational cost that is independent of network size. However, memory and run-time costs increase with the number of particles, limiting the size of system that can feasibly be simulated. Here, we present a hybrid particle/population simulation method that combines the best attributes of both the network-based and network-free approaches. The method takes as input a rule-based model and a user-specified subset of species to treat as population variables rather than as particles. The model is then transformed by a process of partial network expansion into a dynamically equivalent form that can be simulated using a population-adapted network-free simulator. The transformation method has been implemented within the open-source rule-based modeling platform BioNetGen, and the resulting hybrid models can be simulated using the particle-based simulator NFsim.
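The sketch below illustrates the hybrid idea in miniature: an unstructured, high-copy ligand species is tracked as a population count while structured receptors remain individual particles, and one stochastic event is fired over the mixed state. It is a minimal Python sketch under assumed species names and rate constants, not the BioNetGen/NFsim implementation.

    # Minimal hybrid particle/population state (illustrative assumptions:
    # one population species "L" and one structured particle type Receptor).
    import random

    class Receptor:
        """Particle with a single ligand-binding site."""
        def __init__(self):
            self.bound = False

    K_ON, K_OFF = 1e-4, 0.1  # assumed binding/unbinding rate constants

    state = {
        "populations": {"L": 10000},                     # species kept as copy numbers
        "particles": [Receptor() for _ in range(100)],   # species kept as particles
    }

    def hybrid_step(state):
        """Fire one stochastic event (direct method) over the mixed state."""
        free = [r for r in state["particles"] if not r.bound]
        bound = [r for r in state["particles"] if r.bound]
        a_bind = K_ON * state["populations"]["L"] * len(free)   # L + R -> L.R
        a_unbind = K_OFF * len(bound)                            # L.R -> L + R
        a_total = a_bind + a_unbind
        if a_total == 0.0:
            return float("inf")
        tau = random.expovariate(a_total)     # exponentially distributed waiting time
        if random.random() * a_total < a_bind:
            random.choice(free).bound = True
            state["populations"]["L"] -= 1    # population variable updated by count
        else:
            random.choice(bound).bound = False
            state["populations"]["L"] += 1
        return tau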
We present a kinetic Monte Carlo method for simulating chemical transformations specified by reaction rules, which can be viewed as generators of chemical reactions, or equivalently, definitions of reaction classes. A rule identifies the molecular components involved in a transformation, how these components change, conditions that affect whether a transformation occurs, and a rate law. The computational cost of the method, unlike conventional simulation approaches, is independent of the number of possible reactions, which need not be specified in advance or explicitly generated in a simulation. To demonstrate the method, we apply it to study the kinetics of multivalent ligand-receptor interactions. We expect the method will be useful for studying cellular signaling systems and other physical systems involving aggregation phenomena.
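The loop below is a minimal Python sketch of one such rule-based kinetic Monte Carlo step under an assumed Rule interface (a rate constant, a function that enumerates matches in the current mixture, and a transformation); propensities are computed per rule, so no list of possible reactions is ever built. It is illustrative only, not the authors' implementation.

    # Illustrative rule interface and one event of a direct-method loop.
    import random
    from dataclasses import dataclass
    from typing import Callable, List

    @dataclass
    class Rule:
        rate: float                 # rate constant of the reaction class
        find_matches: Callable      # particles -> list of matching reactant sets
        apply: Callable             # (match, particles) -> transform the mixture in place

    def kmc_step(particles: list, rules: List[Rule], t: float):
        """Advance the particle mixture by one rule firing; returns (t, fired?)."""
        matches = [rule.find_matches(particles) for rule in rules]
        propensities = [rule.rate * len(m) for rule, m in zip(rules, matches)]
        a_total = sum(propensities)
        if a_total == 0.0:
            return t, False
        t += random.expovariate(a_total)          # exponential waiting time
        r = random.random() * a_total             # choose a rule proportional to propensity
        for rule, a, m in zip(rules, propensities, matches):
            if a == 0.0:
                continue
            r -= a
            if r <= 0.0:
                rule.apply(random.choice(m), particles)   # fire one embedding at random
                break
        return t, True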
Computer simulations have become an important tool across the biomedical sciences and beyond. For many important problems, several different models or hypotheses exist, and choosing which one best describes reality or observed data is not straightforward. We therefore require suitable statistical tools that allow us to choose rationally between different mechanistic models of, e.g., signal transduction or gene regulation networks. This is particularly challenging in systems biology, where only a small number of molecular species can be assayed at any given time and all measurements are subject to measurement uncertainty. Here we develop such a model selection framework based on approximate Bayesian computation and employing sequential Monte Carlo sampling. We show that our approach can be applied across a wide range of biological scenarios, and we illustrate its use on real data describing influenza dynamics and the JAK-STAT signalling pathway. Bayesian model selection strikes a balance between the complexity of the simulation models and their ability to describe observed data. The present approach enables us to employ the whole formal apparatus of Bayesian model selection to any system that can be (efficiently) simulated, even when exact likelihoods are computationally intractable.
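A minimal sketch of the underlying idea is ABC rejection over a joint space of models and parameters, shown below in Python with hypothetical function names; the paper's ABC SMC scheme refines this by propagating weighted particle populations through a decreasing sequence of tolerances rather than sampling at a single tolerance.

    # ABC rejection sampler for model selection (illustrative sketch only).
    import random

    def abc_model_selection(models, priors, simulate, distance, data, eps, n_accept):
        """Estimate posterior model probabilities P(model | data) without likelihoods.

        models   : list of model identifiers
        priors   : dict model -> function drawing a parameter vector from its prior
        simulate : function (model, theta) -> simulated dataset
        distance : function (simulated, observed) -> nonnegative float
        eps      : acceptance tolerance
        n_accept : number of accepted particles to collect
        """
        counts = {m: 0 for m in models}
        accepted = 0
        while accepted < n_accept:
            m = random.choice(models)          # uniform prior over models
            theta = priors[m]()                # parameters from the chosen model's prior
            if distance(simulate(m, theta), data) <= eps:
                counts[m] += 1                 # accepted particle counts toward P(m | data)
                accepted += 1
        return {m: counts[m] / n_accept for m in counts}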
Rule-based modeling is a powerful way to model kinetic interactions in biochemical systems. Rules enable a precise encoding of biochemical interactions at the resolution of sites within molecules, but obtaining an integrated global view from sets of rules remains challenging. Current automated approaches to rule visualization fail to address the complexity of interactions between rules, limiting either the types of rules that are allowed or the set of interactions that can be visualized simultaneously. There is a need for scalable visualization approaches that present the information encoded in rules in an intuitive and useful manner at different levels of detail. We have developed new automated approaches for visualizing both individual rules and complete rule-based models. We find that a more compact representation of an individual rule promotes understanding of the model assumptions underlying each rule. For global visualization of rule interactions, we have developed a method to synthesize a network of interactions between sites and processes from a rule-based model and then use a combination of user-defined and automated approaches to compress this network into a readable form. The resulting diagrams enable modelers to identify signaling motifs such as cascades, feedback loops, and feed-forward loops in complex models, as we demonstrate using several large-scale models. These capabilities are implemented within the BioNetGen framework, but the approach is equally applicable to rule-based models specified in other formats.
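As a rough illustration of the first step, synthesizing a network of interactions between sites and rules, the Python sketch below builds a directed bipartite graph from a hypothetical, simplified rule description (the site and rule names are invented); compression of such a graph into a compact diagram is not shown.

    # Build a directed site/rule interaction graph from simplified rule specs.
    def build_rule_graph(rules):
        """rules: dict rule_name -> {"reads": [site, ...], "writes": [site, ...]}.
        Returns directed edges between site nodes and rule (process) nodes."""
        edges = set()
        for name, spec in rules.items():
            for site in spec["reads"]:
                edges.add((site, name))    # site provides context or a reactant
            for site in spec["writes"]:
                edges.add((name, site))    # the rule modifies this site
        return edges

    # Hypothetical fragment of a receptor signaling model:
    example_rules = {
        "bind_ligand":   {"reads": ["L.rec", "R.lig"],   "writes": ["R.lig"]},
        "phosphorylate": {"reads": ["R.lig", "R.Y1"],    "writes": ["R.Y1~P"]},
        "recruit_SH2":   {"reads": ["R.Y1~P", "A.SH2"],  "writes": ["A.SH2"]},
    }
    print(sorted(build_rule_graph(example_rules)))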
COnstraint-Based Reconstruction and Analysis (COBRA) provides a molecular mechanistic framework for integrative analysis of experimental data and quantitative prediction of physicochemically and biochemically feasible phenotypic states. The COBRA Toolbox is a comprehensive software suite of interoperable COBRA methods. It has found widespread applications in biology, biomedicine, and biotechnology because its functions can be flexibly combined to implement tailored COBRA protocols for any biochemical network. Version 3.0 includes new methods for quality-controlled reconstruction, modelling, topological analysis, strain and experimental design, and network visualisation, as well as network integration of chemoinformatic, metabolomic, transcriptomic, proteomic, and thermochemical data. New multi-lingual code integration also enables an expansion in COBRA application scope via high-precision, high-performance, and nonlinear numerical optimisation solvers for multi-scale, multi-cellular and reaction kinetic modelling, respectively. This protocol can be adapted for the generation and analysis of a constraint-based model in a wide variety of molecular systems biology scenarios. This protocol is an update to the COBRA Toolbox 1.0 and 2.0. The COBRA Toolbox 3.0 provides an unparalleled depth of constraint-based reconstruction and analysis methods.
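At the core of COBRA methods is flux balance analysis, a linear program that maximizes an objective c^T v subject to the steady-state constraint S v = 0 and flux bounds. The toy Python/scipy sketch below (an invented three-reaction network) shows that problem in miniature; the COBRA Toolbox itself is a MATLAB suite with far richer functionality.

    # Toy flux balance analysis: maximize flux through R3 subject to S v = 0.
    import numpy as np
    from scipy.optimize import linprog

    # Reactions: R1 (uptake -> A), R2 (A -> B), R3 (B -> export)
    S = np.array([[1, -1,  0],     # metabolite A
                  [0,  1, -1]])    # metabolite B
    bounds = [(0, 10), (0, 1000), (0, 1000)]   # uptake limited to 10 flux units
    c = np.array([0, 0, 1])                    # objective: flux through R3

    # linprog minimizes, so negate the objective to maximize c^T v.
    res = linprog(-c, A_eq=S, b_eq=np.zeros(2), bounds=bounds)
    print("optimal fluxes:", res.x)            # expected: [10, 10, 10]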
Comprehensive simulations of the entire biochemistry of cells have great potential to help physicians treat disease and help engineers design biological machines. But such simulations must model networks of millions of molecular species and reactions. The Stochastic Simulation Algorithm (SSA) is widely used for simulating biochemistry, especially systems with species populations small enough that discreteness and stochasticity play important roles. However, existing serial SSA methods are prohibitively slow for comprehensive networks, and existing parallel SSA methods, which use periodic synchronization, sacrifice accuracy. To enable fast, accurate, and scalable simulations of biochemistry, we present an exact parallel algorithm for SSA that partitions a biochemical network into many SSA processes that simulate in parallel. Our parallel SSA algorithm exactly coordinates the interactions among these SSA processes and the species state they share by structuring the algorithm as a parallel discrete event simulation (DES) application and using an optimistic parallel DES simulator to synchronize the interactions. We anticipate that our method will enable unprecedented biochemical simulations.
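The Python sketch below illustrates only the partitioning step, with an invented round-robin assignment: reactions are assigned to SSA processes, and species touched by reactions in more than one partition are flagged as shared state that would require coordinated event exchange. The exact optimistic parallel-DES synchronization that makes the method work is not reproduced here.

    # Partition reactions across SSA processes and find shared species (sketch).
    def partition_network(reactions, n_parts):
        """reactions: list of (reactant_species, product_species) tuples.
        Returns reaction indices per partition plus the species shared by
        several partitions, whose state must be coordinated between processes."""
        parts = [[] for _ in range(n_parts)]
        species_to_parts = {}
        for i, (reactants, products) in enumerate(reactions):
            p = i % n_parts                    # naive round-robin assignment
            parts[p].append(i)
            for s in set(reactants) | set(products):
                species_to_parts.setdefault(s, set()).add(p)
        shared = {s for s, ps in species_to_parts.items() if len(ps) > 1}
        return parts, shared

    # Example: A -> B and B -> C on two processes share species B.
    parts, shared = partition_network([(["A"], ["B"]), (["B"], ["C"])], 2)
    print(parts, shared)                       # [[0], [1]] {'B'}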