The ability to consistently distinguish real protein structures from computationally generated model decoys is not yet a solved problem. One route to distinguishing real protein structures from decoys is to delineate the important physical features that specify a real protein. For example, it has long been appreciated that the hydrophobic cores of proteins contribute significantly to their stability. As a dataset of decoys to compare with real protein structures, we studied submissions to the biennial CASP competition (specifically CASP11, 12, and 13), in which researchers attempt to predict the structure of a protein knowing only its amino acid sequence. Our analysis reveals that many of the submissions possess cores that do not recapitulate the features that define real proteins. In particular, the model structures appear more densely packed (because of energetically unfavorable atomic overlaps), contain too few residues in the core, and have improper distributions of hydrophobic residues throughout the structure. Based on these observations, we developed a deep learning method, which incorporates key physical features of protein cores, to predict how well a computational model recapitulates the real protein structure without knowledge of the structure of the target sequence. By identifying the important features of protein structure, our method is able to rank decoys from the CASP competitions as well as, if not better than, state-of-the-art methods that incorporate many additional features.
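To make the notion of "residues in the core" concrete, here is a minimal toy sketch that flags core residues by a coordination-number proxy: the number of Cα neighbors within a distance cutoff. The cutoff and neighbor threshold below are illustrative assumptions of this sketch, not values from the abstract, which characterizes cores via packing and hydrophobicity.

```python
# Toy sketch: flag "core" residues by counting Calpha neighbors within a
# cutoff. A buried residue has many close neighbors; a surface residue has
# few. Cutoff (10 A) and threshold (12 neighbors) are illustrative only.
from math import dist

def core_residues(ca_coords, cutoff=10.0, min_neighbors=12):
    """Return indices of residues whose Calpha has many close neighbors."""
    core = []
    for i, xyz in enumerate(ca_coords):
        n = sum(1 for j, other in enumerate(ca_coords)
                if j != i and dist(xyz, other) < cutoff)
        if n >= min_neighbors:
            core.append(i)
    return core
```

In practice, core residues are more often defined by relative solvent accessibility; the neighbor-count proxy above is just the simplest self-contained illustration of burial.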
Experiments indicate that unbinding rates of proteins from DNA can depend on the concentration of proteins in the nearby solution. Here we present a theory of multi-step replacement of DNA-bound proteins by solution-phase proteins. For four different kinetic scenarios we calculate the dependence of protein unbinding and replacement rates on solution protein concentration. We find (1) strong effects of progressive rezipping of the solution-phase protein onto DNA sites liberated by unzipping of the originally bound protein; (2) that a model in which solution-phase proteins bind non-specifically to DNA can describe experiments on exchange between the non-specific DNA-binding protein pairs Fis-Fis and Fis-HU; and (3) that a specific-binding model describes experiments on the exchange of CueR proteins at their specific binding sites.
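The concentration dependence described above is often summarized by the standard facilitated-dissociation form (a generic sketch; the symbols are generic labels, not this paper's notation):

```latex
k_{\mathrm{off}}^{\mathrm{obs}}(c) \;=\; k_{0} \;+\; k_{\mathrm{ex}}\, c ,
```

where $k_{0}$ is the spontaneous unbinding rate of the bound protein, $c$ is the concentration of competitor protein in solution, and $k_{\mathrm{ex}}$ is an exchange rate constant set by the multi-step invasion pathway (partial unzipping of the bound protein followed by rezipping of the invader onto the liberated sites). At high $c$ the observed rate can saturate when an internal unzipping step becomes rate-limiting.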
Protein molecules can be approximated by discrete polygonal chains of amino acids. Standard topological tools can be applied to smoothings of these polygons to introduce a topological classification of proteins, for example using the self-linking number of the corresponding framed curves. In this paper we add new details to the standard classification. Known definitions of the self-linking number apply only to non-singular framings: for example, the Frenet framing cannot be used if the curve has inflection points. In discrete proteins, however, these special points are naturally resolved. Consequently, a separate integer topological characteristic can be introduced, which takes into account the intrinsic features of the special points. For a large number of proteins we compute integer topological indices associated with the singularities of the Frenet framing. We show how a version of the Calugareanu theorem is satisfied by the associated self-linking number of a discrete curve. Since the singularities of the Frenet framing correspond to structural motifs of proteins, we propose these topological indices as a technical tool for describing the folding dynamics of proteins.
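For reference, the Calugareanu theorem invoked above relates the self-linking number of a framed closed curve to its writhe and total twist (standard form, with generic notation):

```latex
SL(\gamma) \;=\; Wr(\gamma) \;+\; Tw(\gamma),
\qquad
Wr(\gamma) \;=\; \frac{1}{4\pi}
\oint\!\!\oint
\frac{\bigl(\dot{\mathbf r}(s)\times\dot{\mathbf r}(t)\bigr)\cdot
\bigl(\mathbf r(s)-\mathbf r(t)\bigr)}
{\lvert \mathbf r(s)-\mathbf r(t)\rvert^{3}}\, ds\, dt ,
```

where $Wr$ is the writhe of the curve itself and $Tw$ is the total twist of the chosen framing about the curve. The paper's contribution concerns how this relation behaves when the Frenet framing degenerates at special points of the discrete chain.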
Exploring and understanding the protein-folding problem has been a long-standing challenge in molecular biology. Here, using molecular dynamics simulations, we reveal how parallel distributed adjacent planar peptide groups of unfolded proteins fold reproducibly, following explicit physical folding codes, in aqueous environments due to electrostatic attractions. Superfast folding of proteins is found to be powered by the formation of hydrogen bonds. Temperature-induced torsional waves propagating along unfolded proteins break the parallel distributed state of specific amino acids, which we infer as the beginning of folding. Differences in electric charge and rotational resistance among neighboring side chains are used to decipher the physical folding codes by means of which precise secondary structures develop. We present a powerful method of decoding amino acid sequences to predict the native structures of proteins. The method is verified by comparing its results with experimental findings available in the literature.
We perform theoretical studies of the stretching of 20 proteins with knots within a coarse-grained model. The ends of the knots are found to jump to well-defined sequential locations that are associated with sharp turns, whereas in homopolymers they diffuse around and eventually slide off. The waiting times of the jumps become increasingly stochastic as the temperature is raised. Larger knots do not return to their native locations when a protein is released after stretching.
Computational prediction of membrane protein (MP) structures is very challenging, partially due to the lack of sufficient solved structures for homology modeling. Recently, direct evolutionary coupling analysis (DCA) has shed some light on protein contact prediction and, accordingly, contact-assisted folding, but DCA is effective only on some very large families since it uses information from only a single protein family. This paper presents a deep transfer learning method that can significantly improve MP contact prediction by learning contact patterns and the complex sequence-contact relationship from thousands of non-membrane proteins (non-MPs). Tested on 510 non-redundant MPs, our deep model (learned from only non-MPs) has a top L/10 long-range contact prediction accuracy of 0.69, better than our deep model trained on only MPs (0.63) and much better than a representative DCA method, CCMpred (0.47), and the CASP11 winner MetaPSICOV (0.55). The accuracy of our deep model can be further improved to 0.72 when trained on a mix of non-MPs and MPs. When only contacts in transmembrane regions are evaluated, our method has top L/10 long-range accuracies of 0.62, 0.57, and 0.53 when trained on a mix of non-MPs and MPs, on non-MPs only, and on MPs only, respectively, still much better than MetaPSICOV (0.45) and CCMpred (0.40). All these results suggest that the sequence-structure relationship learned by our deep model from non-MPs generalizes well to MP contact prediction. Improved contact prediction also leads to better contact-assisted folding. Using only the top predicted contacts as restraints, our deep learning method can fold 160 and 200 of the 510 MPs with TMscore > 0.6 when trained on non-MPs only and on a mix of non-MPs and MPs, respectively, while CCMpred and MetaPSICOV can do so for only 56 and 77 MPs. Our contact-assisted folding also greatly outperforms homology modeling.
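The "top L/10 long-range" numbers quoted above follow the standard CASP-style contact-evaluation convention: a contact is a residue pair in close spatial proximity (conventionally Cβ-Cβ distance below 8 Å), "long-range" means sequence separation of at least 24 residues, and accuracy is the fraction of true contacts among the L/10 highest-scored long-range pairs, where L is the protein length. A minimal sketch of the metric (function name and data layout are this sketch's own choices):

```python
# Sketch of the "top L/10 long-range" contact-accuracy metric.
# CASP convention: long-range pairs have sequence separation >= 24;
# accuracy is the fraction of true contacts among the L/10 pairs with
# the highest predicted scores.
def top_l_over_10_accuracy(scores, true_contacts, L, min_sep=24, k_frac=10):
    """scores: {(i, j): predicted_score}; true_contacts: set of (i, j) pairs."""
    # Keep only long-range residue pairs.
    long_range = [(pair, s) for pair, s in scores.items()
                  if abs(pair[0] - pair[1]) >= min_sep]
    # Rank by predicted score, highest first, and take the top L/10.
    long_range.sort(key=lambda ps: ps[1], reverse=True)
    top = long_range[:max(1, L // k_frac)]
    # Accuracy = fraction of those predictions that are true contacts.
    return sum(1 for pair, _ in top if pair in true_contacts) / len(top)
```

Note that short-range pairs (here, the hypothetical pair (0, 5)) are excluded before ranking, so a confident but short-range prediction does not affect the score.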