Massively parallel sequencing techniques have revolutionized biological and medical sciences by providing unprecedented insight into the genomes of humans, animals, and microbes. Modern sequencing platforms generate enormous amounts of genomic data in the form of nucleotide sequences or reads. Aligning reads onto reference genomes enables the identification of individual-specific genetic variants and is an essential step of the majority of genomic analysis pipelines. Aligned reads are essential for answering important biological questions, such as detecting mutations driving various human diseases and complex traits as well as identifying species present in metagenomic samples. The read alignment problem is extremely challenging due to the large size of analyzed datasets and numerous technological limitations of sequencing platforms, and researchers have developed novel bioinformatics algorithms to tackle these difficulties. Importantly, computational algorithms have evolved and diversified in accordance with technological advances, leading to today's diverse array of bioinformatics tools. Our review provides a survey of algorithmic foundations and methodologies across 107 alignment methods published between 1988 and 2020, for both short and long reads. We provide a rigorous experimental evaluation of 11 read aligners to demonstrate the effect of these underlying algorithms on the speed and efficiency of the resulting tools. We separately discuss how longer read lengths introduce unique advantages and challenges for read alignment techniques. We also discuss how general alignment algorithms have been tailored to the specific needs of various domains in biology, including whole transcriptome, adaptive immune repertoire, and human microbiome studies.
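The core computational problem surveyed here can be illustrated with the classic Smith-Waterman dynamic program, which underlies the extension stage of many seed-and-extend aligners. The sketch below is a minimal illustration; the scoring parameters are arbitrary choices, not those of any specific tool in the review.

```python
# Minimal Smith-Waterman local alignment: score a read against a
# reference region. Scoring values (match/mismatch/gap) are illustrative.

def smith_waterman(read, ref, match=2, mismatch=-1, gap=-2):
    """Return the best local alignment score of `read` against `ref`."""
    rows, cols = len(read) + 1, len(ref) + 1
    H = [[0] * cols for _ in range(rows)]  # DP matrix, clamped at 0
    best = 0
    for i in range(1, rows):
        for j in range(1, cols):
            diag = H[i - 1][j - 1] + (match if read[i - 1] == ref[j - 1]
                                      else mismatch)
            H[i][j] = max(0, diag, H[i - 1][j] + gap, H[i][j - 1] + gap)
            best = max(best, H[i][j])
    return best
```

Because this dynamic program is quadratic in sequence length, practical aligners first locate candidate regions with fast seed matching and only then run such an extension step over small windows.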
To rectify the problems of electron clouds observed in RHIC and unacceptable ohmic heating of superconducting magnets that can limit future machine upgrades, we began developing a robotic plasma deposition technique for in-situ coating of the RHIC 316LN stainless steel cold bore tubes, based on staged magnetrons mounted on a mobile mole that deposit Cu followed by an amorphous carbon (a-C) coating. The Cu coating reduces wall resistivity, while a-C has a low secondary electron yield (SEY) that suppresses electron cloud formation. Recent RF resistivity computations indicate that a Cu coating thickness of 10 μm is needed. However, Cu coatings thicker than 2 μm can develop grain structures that might have lower SEY, like gold black. A 15-cm Cu cathode magnetron was designed and fabricated, after which 30-cm-long samples of RHIC cold bore tubes were coated with various OFHC copper thicknesses and their room-temperature RF resistivity was measured. Rectangular stainless steel samples and stainless steel discs were also Cu coated. The SEY of the rectangular samples was measured at room temperature, and the SEY of a disc sample was measured at cryogenic temperatures.
The genetic structure of human populations is extraordinarily complex and of fundamental importance to studies of anthropology, evolution, and medicine. As increasingly many individuals are of mixed origin, there is an unmet need for tools that can infer multiple origins. Misclassification of such individuals can lead to incorrect and costly misinterpretations of genomic data, primarily in disease studies and drug trials. We present an advanced tool to infer ancestry that can identify the biogeographic origins of highly mixed individuals. reAdmix is an online tool available at http://chcb.saban-chla.usc.edu/reAdmix/.
RNA-seq has rapidly become the de facto technique to measure gene expression. However, the time required for analysis has not kept up with the pace of data generation. Here we introduce Sailfish, a novel computational method for quantifying the abundance of previously annotated RNA isoforms from RNA-seq data. Sailfish entirely avoids mapping reads, which is a time-consuming step in all current methods. Sailfish provides quantification estimates much faster than existing approaches (typically 20 times faster) without loss of accuracy.
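The alignment-free idea rests on matching k-mers between reads and annotated transcripts instead of mapping each read. The toy sketch below shows only the indexing-and-counting idea under assumed inputs; it is not Sailfish's actual implementation, which additionally resolves multi-mapping k-mers with an EM procedure to estimate abundances.

```python
# Toy alignment-free quantification: index transcript k-mers, then count
# how many read k-mers hit each transcript. k and all sequences are
# illustrative; raw hit counts are only a crude proxy for abundance.

from collections import Counter

def kmers(seq, k):
    """Yield all overlapping k-mers of `seq`."""
    return (seq[i:i + k] for i in range(len(seq) - k + 1))

def build_index(transcripts, k):
    """Map each k-mer to the set of transcript names containing it."""
    index = {}
    for name, seq in transcripts.items():
        for km in kmers(seq, k):
            index.setdefault(km, set()).add(name)
    return index

def count_hits(reads, index, k):
    """Tally k-mer hits per transcript across all reads."""
    hits = Counter()
    for read in reads:
        for km in kmers(read, k):
            for name in index.get(km, ()):
                hits[name] += 1
    return hits
```

Because hashing a k-mer is constant time, this lookup avoids the per-read alignment cost entirely, which is the source of the speedup the abstract reports.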
We present a combined mean-field and simulation approach to different models describing the dynamics of classes formed by elements that can appear, disappear, or copy themselves. These models, related to the Chinese Restaurant Process, a paradigmatic duplication-innovation model, are devised to reproduce the scaling behavior observed in the genome-wide repertoire of protein domains of all known species. In light of these data, we discuss the qualitative and quantitative differences of the alternative model formulations, focusing in particular on the roles of element loss and of the specificity of empirical domain classes.
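The Chinese Restaurant Process referenced above can be simulated in a few lines: each new element either joins an existing class with probability proportional to the class size (duplication) or founds a new class (innovation). The sketch below is a generic simulation with an illustrative concentration parameter `alpha`, not the specific model variants compared in the paper.

```python
# Minimal Chinese Restaurant Process simulation. At step t, a new
# element founds a new class with probability alpha / (t + alpha),
# otherwise it joins an existing class with probability proportional
# to that class's current size.

import random

def chinese_restaurant_process(n, alpha=1.0, rng=None):
    """Seat n elements; return the list of resulting class sizes."""
    rng = rng or random.Random()
    sizes = []
    for t in range(n):
        if rng.random() < alpha / (t + alpha):
            sizes.append(1)  # innovation: a new class of size 1
        else:
            # duplication: sample a class proportionally to its size
            r = rng.random() * t
            acc = 0
            for i, s in enumerate(sizes):
                acc += s
                if r < acc:
                    sizes[i] += 1
                    break
    return sizes
```

The "rich get richer" sampling step is what produces the heavy-tailed class-size distributions that such models use to match protein-domain repertoire data.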
The enormous power consumption of Bitcoin has led to undifferentiated discussions in science and practice about the sustainability of blockchain and distributed ledger technology in general. However, blockchain technology is far from homogeneous - not only with regard to its applications, which now go far beyond cryptocurrencies and have reached businesses and the public sector, but also with regard to its technical characteristics and, in particular, its power consumption. This paper summarizes the status quo of the power consumption of various implementations of blockchain technology, with special emphasis on the recent Bitcoin Halving and so-called zk-rollups. We argue that although Bitcoin and other proof-of-work blockchains do indeed consume a lot of power, alternative blockchain solutions with significantly lower power consumption are already available today, and promising new concepts are being tested that could further reduce the power consumption of large blockchain networks in particular in the near future. From this we conclude that although the criticism of Bitcoin's power consumption is legitimate, it should not be used to derive an energy problem of blockchain technology in general. In many cases in which processes can be digitised or improved with the help of more energy-efficient blockchain variants, one can even expect net energy savings.