
Towards ML Engineering: A Brief History Of TensorFlow Extended (TFX)

Published by: Jarek Wilkiewicz
Publication date: 2020
Research field: Informatics Engineering
Language: English





Software Engineering, as a discipline, has matured over the past 5+ decades. The modern world heavily depends on it, so the increased maturity of Software Engineering was an eventuality. Practices like testing and reliable technologies help make Software Engineering reliable enough to build industries upon. Meanwhile, Machine Learning (ML) has also grown over the past 2+ decades. ML is used more and more for research, experimentation and production workloads. ML now commonly powers widely-used products integral to our lives. But ML Engineering, as a discipline, has not widely matured as much as its Software Engineering ancestor. Can we take what we have learned and help the nascent field of applied ML evolve into ML Engineering the way Programming evolved into Software Engineering [1]? In this article we will give a whirlwind tour of Sibyl [2] and TensorFlow Extended (TFX) [3], two successive end-to-end (E2E) ML platforms at Alphabet. We will share the lessons learned from over a decade of applied ML built on these platforms, explain both their similarities and their differences, and expand on the shifts (both mental and technical) that helped us on our journey. In addition, we will highlight some of the capabilities of TFX that help realize several aspects of ML Engineering. We argue that in order to unlock the gains ML can bring, organizations should advance the maturity of their ML teams by investing in robust ML infrastructure and promoting ML Engineering education. We also recommend that before focusing on cutting-edge ML modeling techniques, product leaders should invest more time in adopting interoperable ML platforms for their organizations. In closing, we will also share a glimpse into the future of TFX.


Read also

519 - Ramy Shahin 2021
In this paper we introduce the notion of Modal Software Engineering: automatically turning sequential, deterministic programs into semantically equivalent programs efficiently operating on inputs coming from multiple overlapping worlds. We draw an analogy between modal logics and software application domains where multiple sets of inputs (multiple worlds) need to be processed efficiently. Typically those sets highly overlap, so processing them independently would involve a lot of redundancy, resulting in lower performance, and in many cases intractability. Three application domains are presented: reasoning about feature-based variability of Software Product Lines (SPLs), probabilistic programming, and approximate programming.
130 - C Sivaram 2008
Gurzadyan-Xue Dark Energy was derived in 1986 (twenty years before the paper of Gurzadyan-Xue). The paper by the present author, titled The Planck Length as a Cosmological Constant, published in Astrophysics and Space Science, Vol. 127, p.133-137, 1986, contains the formula claimed to have been derived by Gurzadyan-Xue (in 2003).
The idea of breaking time-translation symmetry has fascinated humanity at least since ancient proposals of the perpetuum mobile. Unlike the breaking of other symmetries, such as spatial translation in a crystal or spin rotation in a magnet, time translation symmetry breaking (TTSB) has been tantalisingly elusive. We review this history up to recent developments which have shown that discrete TTSB does take place in periodically driven (Floquet) systems in the presence of many-body localization. Such Floquet time-crystals represent a new paradigm in quantum statistical mechanics --- that of an intrinsically out-of-equilibrium many-body phase of matter. We include a compendium of necessary background, before specializing to a detailed discussion of the nature, and diagnostics, of TTSB. We formalize the notion of a time-crystal as a stable, macroscopic, conservative clock --- explaining both the need for a many-body system in the infinite volume limit, and for a lack of net energy absorption or dissipation. We also cover a range of related phenomena, including various types of long-lived prethermal time-crystals, and expose the roles played by symmetries -- exact and (emergent) approximate -- and their breaking. We clarify the distinctions between many-body time-crystals and other ostensibly similar phenomena dating as far back as the works of Faraday and Mathieu. En route, we encounter Wilczek's suggestion that macroscopic systems should exhibit TTSB in their ground states, together with a theorem ruling this out. We also analyze pioneering recent experiments detecting signatures of time crystallinity in a variety of different platforms, and provide a detailed theoretical explanation of the physics in each case. In all existing experiments, the system does not realize a 'true' time-crystal phase, and we identify necessary ingredients for improvements in future experiments.
In 2015, the New Horizons spacecraft flew past Pluto and its moon Charon, providing the first clear look at the surface of Charon. New Horizons images revealed an ancient surface, a large, intricate canyon system, and many fractures, among other geologic features. Here, we assess whether tidal stresses played a significant role in the formation of tensile fractures on Charon. Although presently in a circular orbit, most scenarios for the orbital evolution of Charon include an eccentric orbit for some period of time and possibly an internal ocean. Past work has shown that these conditions could have generated stresses comparable in magnitude to other tidally fractured moons, such as Europa and Enceladus. However, we find no correlation between observed fracture orientations and those predicted to form due to eccentricity-driven tidal stress. It thus seems more likely that the orbit of Charon circularized before its ocean froze, and that either tidal stresses alone were insufficient to fracture the surface or subsequent resurfacing removed these ancient fractures.
We present a brief history of the field of interpretable machine learning (IML), give an overview of state-of-the-art interpretation methods, and discuss challenges. Research in IML has boomed in recent years. As young as the field is, it has over 200-year-old roots in regression modeling and rule-based machine learning, starting in the 1960s. Recently, many new IML methods have been proposed, many of them model-agnostic, but also interpretation techniques specific to deep learning and tree-based ensembles. IML methods either directly analyze model components, study sensitivity to input perturbations, or analyze local or global surrogate approximations of the ML model. The field approaches a state of readiness and stability, with many methods not only proposed in research, but also implemented in open-source software. But many important challenges remain for IML, such as dealing with dependent features, causal interpretation, and uncertainty estimation, which need to be resolved for its successful application to scientific problems. A further challenge is the lack of a rigorous definition of interpretability that is accepted by the community. To address the challenges and advance the field, we urge the community to recall its roots of interpretable, data-driven modeling in statistics and (rule-based) ML, but also to consider other areas such as sensitivity analysis, causal inference, and the social sciences.
