Data on the number of Open Reading Frames (ORFs) encoded by genomes from the three domains of life show some notable general features, including an essential difference between Prokaryotes and Eukaryotes: the number of ORFs grows linearly with total genome size for the former, but only logarithmically for the latter. Assuming that the (protein) coding and non-coding fractions of the genome must have different dynamics, and that the non-coding fraction must be controlled by a variety of (unspecified) probability distribution functions, we are able to predict that the number of ORFs for Eukaryotes follows a Benford distribution and has a specific logarithmic form. Using the data for 1000+ genomes available to us in early 2010, we find excellent fits to the data over several orders of magnitude: in the linear regime for the Prokaryote data, and with the full non-linear form for the Eukaryote data. In their region of overlap the salient features are statistically congruent, which allows us to interpret the difference between Prokaryotes and Eukaryotes as a manifestation of the increased demands of the biological functions required by the larger Eukaryotes, to estimate some minimal genome sizes, and to predict a maximal Prokaryote genome size on the order of 8-12 megabase pairs. These results admit a natural mathematical interpretation in terms of maximal entropy and, therefore, most efficient information transmission.
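The two scaling regimes described above can be sketched numerically: a linear fit N = aG for prokaryotes and a logarithmic fit N = c·log(G) + d for eukaryotes, the latter being linear in log(G). The genome sizes and ORF counts below are illustrative values invented for the sketch, not the authors' dataset, and the fitting procedure is a generic least-squares fit rather than the paper's actual analysis.

```python
import numpy as np

# Hypothetical genome sizes (Mbp) and ORF counts, loosely mimicking the two
# regimes described in the abstract (NOT the paper's actual 2010 dataset).
G_prok = np.array([0.5, 1.0, 2.0, 4.0, 6.0, 8.0])
N_prok = 900 * G_prok + np.array([10, -20, 15, -5, 30, -10])   # ~linear + noise

G_euk = np.array([12.0, 40.0, 120.0, 400.0, 1200.0, 3200.0])
N_euk = 6000 * np.log(G_euk) + np.array([50, -80, 30, -40, 60, -20])  # ~log + noise

# Linear fit N = a*G + b for the prokaryote regime.
a, b = np.polyfit(G_prok, N_prok, 1)

# Logarithmic fit N = c*log(G) + d: ordinary linear fit in log(G).
c, d = np.polyfit(np.log(G_euk), N_euk, 1)

print(f"prokaryote slope ~ {a:.0f} ORFs per Mbp")
print(f"eukaryote log-coefficient ~ {c:.0f}")
```

On real data, comparing the residuals of the two functional forms in the overlap region (a few Mbp) is what distinguishes the regimes statistically.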
We report a droplet microfluidic method to target and sort individual cells directly from complex microbiome samples and to prepare these cells for bulk whole-genome sequencing without cultivation. We characterize this approach by recovering bacteria spiked into human stool samples at a ratio as low as 1:250 and by successfully enriching endogenous Bacteroides vulgatus to the level required for de novo assembly of high-quality genomes. While microbiome strains are increasingly in demand for biomedical applications, the vast majority of species and strains remain uncultivated and lack reference genomes. We address this shortcoming by encapsulating complex microbiome samples directly into microfluidic droplets and amplifying a target-specific genomic fragment using a custom TaqMan molecular probe. We then separate the positive droplets by droplet sorting, selectively enriching cells of the single target strain. Finally, we present a protocol to purify the genomic DNA while specifically removing amplicons and cell debris for high-quality genome sequencing.
Engineering the entire genome of an organism enables large-scale changes in organization, function, and external interactions, with significant implications for industry, medicine, and the environment. Improvements to DNA synthesis and organism engineering are already enabling substantial changes to organisms with megabase genomes, such as Escherichia coli and Saccharomyces cerevisiae. Simultaneously, recent advances in genome-scale modeling are increasingly informing the design of metabolic networks. However, major challenges remain for integrating these and other relevant technologies into workflows that can scale to the engineering of gigabase genomes. In particular, we find that a major under-recognized challenge is coordinating the flow of models, designs, constructs, and measurements across the large teams and complex technological systems that will likely be required for gigabase genome engineering. We recommend that the community address these challenges by 1) adopting and extending existing standards and technologies for representing and exchanging information at the gigabase genomic scale, 2) developing new technologies to address major open questions around data curation and quality control, 3) conducting fundamental research on the integration of modeling and design at the genomic scale, and 4) developing new legal and contractual infrastructure to better enable collaboration across multiple institutions.
Being able to store and transmit human genome sequences is an important part of genomic research and industrial applications. The complete human genome has 3.1 billion base pairs (haploid), and storing the entire genome naively takes about 3 GB, which is infeasible for large-scale usage. However, human genomes are highly redundant: any given individual's genome differs from another individual's genome by less than 1%. Tools such as DNAZip express a given genome sequence by noting down only the differences between that sequence and a reference genome sequence, which allows the given genome to be losslessly compressed to ~4 MB in size. In this work, we demonstrate additional improvements on top of the DNAZip library, showing an additional ~11% compression over DNAZip's already impressive results. This allows further savings in disk space and in network costs for transmitting human genome sequences.
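The core idea of reference-based compression can be sketched as follows. This is a minimal illustration of the principle only, assuming SNP-style substitutions between sequences of equal length; DNAZip's actual format also handles insertions/deletions and further compresses the difference list itself.

```python
# Sketch of reference-based genome compression: store only the
# (position, base) pairs where an individual differs from a shared
# reference, then rebuild the individual's sequence on demand.

def diff_encode(reference, individual):
    """Record positions where the individual differs from the reference.

    Simplifying assumption: equal-length sequences (substitutions only).
    """
    assert len(reference) == len(individual)
    return [(i, b) for i, (a, b) in enumerate(zip(reference, individual)) if a != b]

def diff_decode(reference, diffs):
    """Rebuild the individual's sequence from the reference plus differences."""
    seq = list(reference)
    for i, base in diffs:
        seq[i] = base
    return "".join(seq)

ref = "ACGTACGTACGT"
ind = "ACGTACCTACGA"          # differs from ref at positions 6 and 11
diffs = diff_encode(ref, ind)
print(diffs)                  # [(6, 'C'), (11, 'A')]
assert diff_decode(ref, diffs) == ind
```

Because two human genomes differ by well under 1%, the difference list is orders of magnitude smaller than the full sequence, which is why the 3 GB genome shrinks to megabytes.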
Recent genetic studies and whole-genome sequencing projects have greatly improved our understanding of human variation and clinically actionable genetic information. Smaller ethnic populations, however, remain underrepresented in both individual and large-scale sequencing efforts and hence present an opportunity to discover new variants of biomedical and demographic significance. This report describes the sequencing and analysis of a genome obtained from an individual of Serbian origin, introducing tens of thousands of previously unknown variants to the currently available pool. Ancestry analysis places this individual in close proximity to Central and Eastern European populations, i.e., closest to Croatian, Bulgarian, and Hungarian individuals and, among other Europeans, furthest from Ashkenazi Jewish, Spanish, Sicilian, and Baltic individuals. Our analysis confirmed gene flow between Neanderthal and ancestral pan-European populations, with contributions to the Serbian genome similar to those observed in other European groups. Finally, to assess the burden of potentially disease-causing/clinically relevant variation in the sequenced genome, we utilized manually curated genotype-phenotype association databases and variant-effect predictors. We identified several variants that have previously been associated with severe early-onset disease that is not evident in the proband, as well as variants that could yet prove to be clinically relevant to the proband over the coming decades. The presence of numerous private and low-frequency variants, along with the observed and predicted disease-causing mutations in this genome, exemplifies some of the global challenges of genome interpretation, especially in the context of understudied ethnic groups.
The problem of the directionality of genome evolution is studied from an information-theoretic point of view. We propose that the function-coding information quantity of a genome always grows in the course of evolution, through sequence duplication, expansion of the code, and gene transfer between genomes. The function-coding information quantity of a genome consists of two parts: the p-coding information quantity, which encodes functional proteins, and the n-coding information quantity, which encodes functional elements other than amino acid sequences. The relation of the proposed law to the laws of thermodynamics is indicated. The evolutionary trends of DNA sequences revealed by bioinformatics are investigated, affording further evidence for the evolutionary law. It is argued that the directionality of genome evolution arises from species competition adaptive to the environment. An expression for the evolutionary rate of a genome is proposed in which the rate is a function of the Darwin temperature (describing species competition) and the fitness slope (describing the adaptive landscape). Finally, the problem of a direct experimental test of the evolutionary directionality is discussed briefly.