
Cell differentiation: what have we learned in 50 years?

Posted by: Stuart Newman
Publication date: 2019
Research field: Biology
Paper language: English
Author: Stuart A. Newman





I revisit two theories of cell differentiation in multicellular organisms published a half-century ago, Stuart Kauffman's global gene regulatory dynamics (GGRD) model and Roy Britten and Eric Davidson's modular gene regulatory network (MGRN) model, in light of newer knowledge of mechanisms of gene regulation in the metazoans (animals). The two models continue to inform hypotheses and computational studies of differentiation of lineage-adjacent cell types. However, their shared notion (based on bacterial regulatory systems) of gene switches, and of networks built from them, has constrained progress in understanding the dynamics and evolution of differentiation. Recent work has described unique write-read-rewrite chromatin-based expression encoding in eukaryotes, as well as metazoan-specific processes of gene activation and silencing in condensed-phase, enhancer-recruiting regulatory hubs, employing disordered proteins, including transcription factors, with context-dependent identities. These findings suggest an evolutionary scenario in which the origination of differentiation in animals, rather than depending exclusively on adaptive natural selection, emerged as a consequence of a type of multicellularity in which the novel metazoan gene regulatory apparatus was readily mobilized to amplify and exaggerate inherent cell functions of unicellular ancestors. The plausibility of this hypothesis is illustrated by the evolution of the developmental role of the transcription factor Grainyhead-like in the formation of epithelium.
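In the GGRD picture, each gene is a binary switch updated by a Boolean function of a few regulator genes, and stable cell types are identified with attractors of the network dynamics. A minimal Python sketch of a Kauffman-style random Boolean network; N, K, the wiring, and the rule tables below are illustrative, not taken from the paper:

```python
import random
from itertools import product

# Kauffman-style random Boolean network (NK model): N genes, each
# regulated by K randomly chosen inputs through a random Boolean rule.
N, K = 8, 2
random.seed(0)
inputs = [random.sample(range(N), K) for _ in range(N)]             # wiring
rules = [[random.randint(0, 1) for _ in range(2 ** K)] for _ in range(N)]

def step(state):
    """Synchronously update every gene from its K regulators."""
    return tuple(
        rules[g][sum(state[i] << b for b, i in enumerate(inputs[g]))]
        for g in range(N)
    )

def attractor(state):
    """Iterate until a state repeats; return the cycle that is reached."""
    seen = {}
    while state not in seen:
        seen[state] = len(seen)
        state = step(state)
    ordered = sorted(seen, key=seen.get)
    return tuple(ordered[seen[state]:])

def canonical(cycle):
    """Rotate a cycle so its smallest state comes first (one name per cycle)."""
    i = cycle.index(min(cycle))
    return cycle[i:] + cycle[:i]

# Each attractor is a stable gene-expression pattern: in the GGRD
# reading, a candidate cell type.
attractors = {canonical(attractor(s)) for s in product((0, 1), repeat=N)}
print(f"N={N}, K={K}: {len(attractors)} attractors (candidate cell types)")
```

For small K such networks typically settle into a modest number of short attractor cycles, which is the property Kauffman identified with the limited repertoire of cell types in an organism.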




Read also

Anonymous peer review is used by the great majority of computer science conferences. OpenReview is one such platform, which aims to promote openness in the peer review process: the papers, (meta) reviews, rebuttals, and final decisions are all released to the public. We collect 5,527 submissions and their 16,853 reviews from the OpenReview platform. We also collect these submissions' citation data from Google Scholar and their non-peer-review…
G. Delgado-Inglada (2016)
Nearly 50 years ago, in the proceedings of the first IAU symposium on planetary nebulae, Lawrence H. Aller and Stanley J. Czyzak said that the problem of determining the chemical compositions of planetary and other gaseous nebulae constitutes one of the most exasperating problems in astrophysics. Although the situation has greatly improved over the years, many important problems are still open and new questions have arisen in what is still an active field of study. Here I will review some of the main aspects related to the determination of gaseous abundances in PNe and some relevant results derived in the last five years, since the last IAU symposium on PNe.
As the success of deep models has led to their deployment in all areas of computer vision, it is increasingly important to understand how these representations work and what they are capturing. In this paper, we shed light on deep spatiotemporal representations by visualizing what two-stream models have learned in order to recognize actions in video. We show that local detectors for appearance and motion objects arise to form distributed representations for recognizing human actions. Key observations include the following. First, cross-stream fusion enables the learning of true spatiotemporal features rather than simply separate appearance and motion features. Second, the networks can learn local representations that are highly class specific, but also generic representations that can serve a range of classes. Third, throughout the hierarchy of the network, features become more abstract and show increasing invariance to aspects of the data that are unimportant to desired distinctions (e.g. motion patterns across various speeds). Fourth, visualizations can be used not only to shed light on learned representations, but also to reveal idiosyncrasies of training data and to explain failure cases of the system.
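The family of techniques behind such visualizations is activation maximization: start from a random input and follow the gradient so that a chosen unit responds strongly, with a regularizer keeping the optimized input interpretable. A hedged PyTorch sketch; the model interface, target unit, and hyperparameters are placeholders rather than the paper's actual setup:

```python
import torch

def maximize_activation(model, unit, steps=200, lr=0.05, tv_weight=1e-3):
    """Gradient-ascend a random image until `unit` of `model` fires strongly.

    Assumes `model` maps a (1, 3, 224, 224) tensor to a vector of unit
    responses; swap in forward hooks to target intermediate layers.
    """
    model.eval()
    x = torch.randn(1, 3, 224, 224, requires_grad=True)  # random start
    opt = torch.optim.Adam([x], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        act = model(x)[0, unit]            # response of the target unit
        # Total-variation penalty keeps the optimized input smooth.
        tv = ((x[:, :, 1:, :] - x[:, :, :-1, :]).abs().mean()
              + (x[:, :, :, 1:] - x[:, :, :, :-1]).abs().mean())
        (-act + tv_weight * tv).backward()
        opt.step()
    return x.detach()
```

For a two-stream model, the analogous loop would optimize an appearance input and a stacked optical-flow input together, which is what makes cross-stream, truly spatiotemporal features visible in the result.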
The learning rate is an information-theoretical quantity for bipartite Markov chains describing two coupled subsystems. It is defined as the rate at which transitions in the downstream subsystem tend to increase the mutual information between the two subsystems, and is bounded by the dissipation arising from these transitions. Its physical interpretation, however, is unclear, although it has been used as a metric for the sensing performance of the downstream subsystem. In this paper, we explore the behaviour of the learning rate for a number of simple model systems, establishing when and how its behaviour is distinct from the instantaneous mutual information between subsystems. In the simplest case, the two are almost equivalent. In more complex steady-state systems, the mutual information and the learning rate behave qualitatively distinctly, with the learning rate clearly now reflecting the rate at which the downstream system must update its information in response to changes in the upstream system. It is not clear whether this quantity is the most natural measure for sensor performance, and, indeed, we provide an example in which optimising the learning rate over a region of parameter space of the downstream system yields an apparently sub-optimal sensor.
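For bipartite master-equation systems the learning rate has a standard steady-state expression, l_y = Σ_{x,y} p(x,y) w^x(y→y') ln[p(x|y')/p(x|y)]: the information about the upstream state gained per unit time through downstream transitions. A minimal numerical sketch for a toy two-state sensor; all rates are illustrative, not the paper's models:

```python
import numpy as np

# Toy bipartite sensor: upstream signal x in {0,1} flips at rate kx
# (independent of y, which makes the chain bipartite); readout y flips
# quickly toward y = x and slowly away, so y tracks x.
kx, w_to, w_away = 1.0, 5.0, 0.5

idx = {(x, y): 2 * x + y for x in (0, 1) for y in (0, 1)}
L = np.zeros((4, 4))  # generator: L[j, i] = rate of i -> j

def add(i, j, rate):
    L[idx[j], idx[i]] += rate
    L[idx[i], idx[i]] -= rate

def y_rate(x, y):     # y -> 1-y transition rate, given x
    return w_to if y != x else w_away

for x in (0, 1):
    for y in (0, 1):
        add((x, y), (1 - x, y), kx)            # x transition
        add((x, y), (x, 1 - y), y_rate(x, y))  # y transition

# Steady state: null eigenvector of the generator, normalized.
w, v = np.linalg.eig(L)
p = np.real(v[:, np.argmin(np.abs(w))])
p /= p.sum()

def psx(x, y):
    return p[idx[(x, y)]]

px = {x: psx(x, 0) + psx(x, 1) for x in (0, 1)}
py = {y: psx(0, y) + psx(1, y) for y in (0, 1)}

# Learning rate: information about x gained per unit time by y-jumps.
l_y = sum(psx(x, y) * y_rate(x, y)
          * np.log((psx(x, 1 - y) / py[1 - y]) / (psx(x, y) / py[y]))
          for x in (0, 1) for y in (0, 1))

# Instantaneous mutual information, for comparison.
I = sum(psx(x, y) * np.log(psx(x, y) / (px[x] * py[y]))
        for x in (0, 1) for y in (0, 1))
print(f"learning rate l_y = {l_y:.4f} nats/time, I(X;Y) = {I:.4f} nats")
```

In this toy model l_y is positive, since y must keep acquiring information about x to offset x's flips, while I(X;Y) is a static snapshot; the paper's point is that in richer steady-state systems the two quantities can behave qualitatively differently.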
Deep learning has led to significant improvement in text summarization, with various methods investigated and improved ROUGE scores reported over the years. However, gaps still exist between summaries produced by automatic summarizers and human professionals. Aiming to gain more understanding of summarization systems with respect to their strengths and limits on a fine-grained syntactic and semantic level, we consult the Multidimensional Quality Metric (MQM) and manually quantify 8 major sources of errors across 10 representative summarization models. Primarily, we find that 1) under similar settings, extractive summarizers are in general better than their abstractive counterparts thanks to strength in faithfulness and factual consistency; 2) milestone techniques such as copy, coverage and hybrid extractive/abstractive methods do bring specific improvements but also demonstrate limitations; 3) pre-training techniques, and in particular sequence-to-sequence pre-training, are highly effective for improving text summarization, with BART giving the best results.