We discuss the problem of proteasomal degradation of proteins. Although proteasomes are important for all aspects of cellular metabolism, some details of the physical mechanism of the process remain unknown. We introduce a stochastic model of the proteasomal degradation of proteins that accounts, from first principles, for protein translocation and for the topology of the positioning of the cleavage centres of a proteasome. For this model we develop a mathematical description based on a master equation, together with techniques for reconstructing, from mass-spectrometry data on digestion patterns, the cleavage specificity inherent to the protein and the proteasomal translocation rates, which are a property of the proteasome species. Once these properties are determined, digestion patterns for new experimental set-ups can be predicted quantitatively. Additionally, we design an experimental set-up based on a synthetic polypeptide with a periodic amino-acid sequence, which enables an especially reliable determination of the translocation rates.
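A minimal sketch of the cleavage side of such a model, reduced to a single-pass, discrete-residue simplification (the substrate length and the uniform cleavage probability are hypothetical illustrations, not fitted values from the paper):

```python
import random

def digest(cleavage_probs, rng):
    """Toy single-pass digestion: the chain is translocated one residue
    at a time past a cleavage centre, and is cut after residue i with
    position-dependent probability cleavage_probs[i-1].
    Returns the list of fragment lengths (the digestion pattern)."""
    fragments, last_cut = [], 0
    for i, p in enumerate(cleavage_probs, start=1):
        if rng.random() < p:
            fragments.append(i - last_cut)
            last_cut = i
    if last_cut < len(cleavage_probs):          # C-terminal remainder
        fragments.append(len(cleavage_probs) - last_cut)
    return fragments

rng = random.Random(1)
L = 60
pattern = digest([0.1] * L, rng)                # uniform specificity
assert sum(pattern) == L and all(f > 0 for f in pattern)
```

In the inverse direction, repeating such simulations while varying the per-position probabilities is the kind of forward model against which measured digestion patterns can be compared.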
In cells, as well as in in vitro assays, the number of motor proteins involved in biological transport processes is far from unlimited. The cytoskeletal binding sites are in contact with the same finite reservoir of motors (either the cytosol or the flow chamber) and hence compete for recruiting the available motors, potentially depleting the reservoir and affecting cytoskeletal transport. In this work we provide a theoretical framework to study, analytically and numerically, how motor density profiles and crowding along cytoskeletal filaments depend on the competition of motors for their binding sites. We propose two models in which finitely processive motor proteins actively advance along cytoskeletal filaments and are continuously exchanged with the motor pool. We first consider homogeneous reservoirs and then examine the effects of free motor diffusion in the surrounding medium. As a reference situation we take recent in vitro experimental set-ups in which kinesin-8 motors bind to and move along microtubule filaments in a flow chamber. We investigate how the crowding of linear motor proteins moving on a filament can be regulated by the balance between supply (the concentration of motor proteins in the flow chamber) and demand (the total number of polymerised tubulin heterodimers). We present analytical results for the density profiles of bound motors and for the reservoir depletion, and propose novel phase diagrams that capture the formation of motor-protein jams on the filament as a function of two tunable experimental parameters: the motor-protein concentration and the concentration of tubulin polymerised into cytoskeletal filaments. Extensive numerical simulations corroborate the analytical results for parameters in the experimental range and also address the effects of diffusion of motor proteins in the reservoir.
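To make the lattice-gas picture concrete, here is a random-sequential-update sketch of a TASEP with Langmuir kinetics coupled to a finite, well-mixed motor pool (all rates, the lattice size and the pool size are hypothetical; the models in the paper are more detailed):

```python
import random

def simulate_tasep_lk(N=200, pool=100, omega_a=0.002, omega_d=0.02,
                      steps=200_000, seed=0):
    """Random-sequential update of a TASEP with Langmuir kinetics and a
    finite motor pool. Simplification: a motor with a free site ahead
    hops with priority; otherwise it may detach into the pool."""
    rng = random.Random(seed)
    lattice = [0] * N
    free = pool
    for _ in range(steps):
        i = rng.randrange(N)
        if lattice[i]:
            if i == N - 1:                       # leave at the plus end
                lattice[i], free = 0, free + 1
            elif not lattice[i + 1]:             # forward hop
                lattice[i], lattice[i + 1] = 0, 1
            elif rng.random() < omega_d:         # detach into the pool
                lattice[i], free = 0, free + 1
        elif rng.random() < omega_a * free:      # attach from the pool
            lattice[i], free = 1, free - 1
    return lattice, free

lattice, free = simulate_tasep_lk()
assert sum(lattice) + free == 100                # motors are conserved
```

The attachment rate is proportional to the number of free motors, which is the mechanism by which the filament can deplete the reservoir and, in turn, limit its own crowding.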
Information processing at the molecular scale is limited by thermal fluctuations. This can have undesired consequences for copying information, since thermal noise can lead to errors that compromise the functionality of the copy. For example, a high error rate during DNA duplication can lead to cell death. Given the importance of accurate copying at the molecular scale, it is fundamental to understand its thermodynamic features. In this paper, we derive a universal expression for the copy error as a function of the entropy production and of the work dissipated by the system during wrong incorporations. Its derivation is based on the second law of thermodynamics, so its validity is independent of the details of the molecular machinery, be it a polymerase or an artificial copying device. Using this expression, we find that information can be copied in three different regimes. In two of them, work is dissipated to either increase or decrease the error. In the third regime, the protocol extracts work while correcting errors, reminiscent of a Maxwell demon. As a case study, we apply our framework to a copy protocol assisted by kinetic proofreading and show that it can operate in any of the three regimes. Finally, we show that, for any effective proofreading scheme, the error reduction is limited by the chemical driving of the proofreading reaction.
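As a numerical point of reference, the following toy compares the equilibrium (energy-only) discrimination error with the classic Hopfield limit in which one ideal, strongly driven proofreading stage squares the discrimination factor. This is a standard textbook sketch, not the paper's universal bound, and the free-energy gap is an arbitrary illustrative value in units of kT:

```python
import math

def error_equilibrium(delta):
    """Error of energy-only discrimination with a free-energy gap
    `delta` (in units of kT) between wrong and right incorporation."""
    f = math.exp(-delta)
    return f / (1.0 + f)

def error_one_stage_proofreading(delta):
    """Hopfield limit: one ideal, strongly driven proofreading stage
    can square the discrimination factor exp(-delta)."""
    f = math.exp(-delta) ** 2
    return f / (1.0 + f)

d = 5.0   # hypothetical gap, ~5 kT
assert error_one_stage_proofreading(d) < error_equilibrium(d) < 0.5
```

Reaching the squared-factor limit requires chemical driving of the proofreading cycle, consistent with the abstract's statement that error reduction is limited by that driving.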
Several independent observations have suggested that the catastrophe transition in microtubules is not a first-order process, as is usually assumed. Recent in vitro observations by Gardner et al. [M. K. Gardner et al., Cell 147, 1092 (2011)] showed that microtubule catastrophe takes place via multiple steps and that its frequency increases with the age of the filament. Here we investigate, via numerical simulations and mathematical calculations, some of the consequences of the age dependence of catastrophe for the dynamics of microtubules as a function of the aging rate, for two different models of aging: exponential growth of the catastrophe rate that saturates asymptotically, and purely linear growth. The boundary demarcating the steady-state and non-steady-state regimes of the dynamics is derived analytically in both cases. Numerical simulations, supported by analytical calculations in the linear model, show that aging leads to non-exponential length distributions in the steady state. More importantly, oscillations ensue in microtubule length and velocity. The regularity of the oscillations, as characterized by the negative dip in the autocorrelation function, is reduced by increasing the frequency of rescue events. Our study shows that the age dependence of catastrophe could act as an intrinsic mechanism for generating oscillatory dynamics in a microtubule population, distinct from those identified hitherto.
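A discrete-time sketch of a single filament whose catastrophe rate grows linearly with its age illustrates the linear aging model in its simplest form (the rates, velocities and the age-reset rule are illustrative choices, not the paper's calibrated parameters):

```python
import random

def simulate_mt(k0=0.001, a=0.0005, v_g=1.0, v_s=3.0, k_res=0.01,
                t_max=5000.0, dt=0.1, seed=2):
    """Single microtubule with a linearly age-dependent catastrophe
    rate k_cat(tau) = k0 + a*tau, where tau is the time spent in the
    current growing phase. Rescue (or full depolymerisation followed
    by regrowth) resets the age. Returns the length trace."""
    rng = random.Random(seed)
    length, age, growing = 0.0, 0.0, True
    trace, t = [], 0.0
    while t < t_max:
        if growing:
            length += v_g * dt
            age += dt
            if rng.random() < (k0 + a * age) * dt:   # catastrophe
                growing = False
        else:
            length = max(0.0, length - v_s * dt)
            if rng.random() < k_res * dt or length == 0.0:
                growing, age = True, 0.0             # age reset
        trace.append(length)
        t += dt
    return trace

trace = simulate_mt()
assert min(trace) >= 0.0 and max(trace) > 0.0
```

Because the catastrophe propensity keeps increasing during each growth phase, long uninterrupted excursions become self-limiting, which is the ingredient behind the quasi-oscillatory length dynamics discussed above.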
Long cell protrusions, which are effectively one-dimensional, are highly dynamic subcellular structures. The lengths of many such protrusions keep fluctuating about their mean values even in the steady state. We develop here a stochastic model motivated by the length fluctuations of a type of eukaryotic cell appendage called the flagellum (also called the cilium). Exploiting techniques developed for the level-crossing statistics of random excursions of a stochastic process, we derive analytical expressions for the passage times for hitting various thresholds, the sojourn times of random excursions beyond a threshold, and the extreme lengths attained during the lifetime of these model flagella. We identify different parameter regimes of this model flagellum that mimic those of the wild type and mutants of a well-known flagellated cell. By analysing our model in these different parameter regimes, we demonstrate how a mutation can alter the level-crossing statistics even when the steady-state length remains unaffected by that mutation. Comparison of the theoretically predicted level-crossing statistics, together with the mean and variance of the length in the steady state, with the corresponding experimental data can serve in the near future as a stringent test of the validity of models of flagellar length control. The experimental data required for this purpose, although never reported so far, can in principle be collected using a method developed very recently for measuring flagellar length fluctuations.
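The quantities involved can be illustrated on a toy birth-death length model with a balance point, from whose trajectory sojourn times above a threshold are read off directly (the rates and the threshold choice are hypothetical, and the model is far simpler than the paper's):

```python
import random

def length_trace(n=20_000, k_on=0.5, k_off_per_unit=0.01, seed=3):
    """Birth-death sketch of flagellar length: assembly at a constant
    rate competing with length-dependent disassembly, giving a
    balance point near L* = k_on / k_off_per_unit."""
    rng = random.Random(seed)
    L, out = 50, []
    for _ in range(n):
        if rng.random() < k_on / (k_on + k_off_per_unit * L):
            L += 1          # assembly step wins
        elif L > 0:
            L -= 1          # disassembly step wins
        out.append(L)
    return out

def sojourn_times(trace, threshold):
    """Durations (in steps) of excursions of `trace` above `threshold`,
    i.e. the level-crossing sojourn statistic."""
    times, run = [], 0
    for x in trace:
        if x > threshold:
            run += 1
        elif run:
            times.append(run)
            run = 0
    if run:
        times.append(run)
    return times

trace = length_trace()
ts = sojourn_times(trace, threshold=sum(trace) / len(trace))
assert min(trace) >= 0 and all(t > 0 for t in ts)
```

Two parameter sets with the same balance point but different absolute rates would produce the same mean length yet different sojourn-time distributions, which is the kind of discrimination between "wild-type" and "mutant" regimes described above.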
Identifying protein-protein interactions is crucial for a systems-level understanding of the cell. Recently, algorithms based on inverse statistical physics, e.g. Direct Coupling Analysis (DCA), have made it possible to use evolutionarily related sequences to address two conceptually related inference tasks: finding pairs of interacting proteins, and identifying pairs of residues that form contacts between interacting proteins. Here we address two underlying questions: How are the performances of the two inference tasks related? How does performance depend on dataset size and quality? To this end, we formalize both tasks using Ising models defined over stochastic block models, with individual blocks representing single proteins and inter-block couplings representing protein-protein interactions; controlled synthetic sequence data are generated by Monte Carlo simulations. We show that DCA is able to address both inference tasks accurately when sufficiently large training sets are available, and that an iterative pairing algorithm (IPA) makes predictions possible even without a training set. Noise in the training data degrades performance. In both tasks we find a quadratic scaling relating dataset quality and size, consistent with noise adding in square-root fashion and signal adding linearly as the dataset grows. This implies that it is generally beneficial to incorporate more data even if their quality is imperfect, shedding light on the empirically observed performance of DCA applied to natural protein sequences.
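To make the synthetic-data construction concrete, here is a minimal Gibbs-sampling sketch of an Ising model with two fully connected "protein" blocks and a single inter-block coupling playing the role of a contact (block sizes, couplings, temperature and sample count are arbitrary illustrations; the paper's models and sampling protocol are richer):

```python
import itertools
import math
import random

def gibbs_sample(J, n_spins, n_sweeps, beta, rng):
    """Gibbs sampling of +/-1 spins with pairwise couplings J[(i, j)]."""
    s = [rng.choice((-1, 1)) for _ in range(n_spins)]
    neigh = {i: [] for i in range(n_spins)}
    for (i, j), c in J.items():
        neigh[i].append((j, c))
        neigh[j].append((i, c))
    for _ in range(n_sweeps):
        for i in range(n_spins):
            h = sum(c * s[j] for j, c in neigh[i])
            p_up = 1.0 / (1.0 + math.exp(-2.0 * beta * h))
            s[i] = 1 if rng.random() < p_up else -1
    return s

# Two fully connected intra-"protein" blocks of 4 spins each ...
J = {p: 1.0 for p in itertools.combinations(range(4), 2)}
J.update({p: 1.0 for p in itertools.combinations(range(4, 8), 2)})
J[(0, 4)] = 1.5      # ... plus one inter-block "contact" to be inferred

rng = random.Random(4)
samples = [gibbs_sample(J, 8, 20, beta=0.3, rng=rng) for _ in range(200)]
corr_contact = sum(s[0] * s[4] for s in samples) / len(samples)
assert -1.0 <= corr_contact <= 1.0
```

Feeding such synthetic samples to DCA, and checking whether the inferred coupling between sites 0 and 4 stands out from the intra-block background, is the controlled version of the contact-prediction task described above.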