
A brief history of AI: how to prevent another winter (a critical review)

Published by: Amirhosein Toosi
Publication date: 2021
Research field: Informatics Engineering
Paper language: English





The field of artificial intelligence (AI), regarded as one of the most enigmatic areas of science, has witnessed exponential growth in the past decade, including a remarkably wide array of applications that have already impacted our everyday lives. Advances in computing power and the design of sophisticated AI algorithms have enabled computers to outperform humans in a variety of tasks, especially in the areas of computer vision and speech recognition. Yet AI's path has never been smooth: the field has essentially fallen apart twice in its lifetime (the "winters" of AI), both times after periods of popular success (the "summers" of AI). We provide a brief rundown of AI's evolution over the course of decades, highlighting its crucial moments and major turning points from inception to the present. In doing so, we attempt to learn from the past, anticipate the future, and discuss what steps may be taken to prevent another winter.


Read also

A smart city can be seen as a framework comprised of Information and Communication Technologies (ICT). An intelligent network of connected devices that collect data with their sensors and transmit them using cloud technologies in order to communicate with other assets in the ecosystem plays a pivotal role in this framework. Maximizing the quality of life of citizens, making better use of resources, cutting costs, and improving sustainability are the ultimate goals that a smart city pursues. Hence, data collected from connected devices are continuously and thoroughly analyzed to gain better insight into the services offered across the city, with the goal of making the whole system more efficient. Robots and physical machines are inseparable parts of a smart city. Embodied AI is the field of study that takes a deeper look at these machines and explores how they can fit into real-world environments. It focuses on learning through interaction with the surrounding environment, as opposed to Internet AI, which tries to learn from static datasets. Embodied AI aims to train an agent that can See (Computer Vision), Talk (NLP), Navigate and Interact with its environment (Reinforcement Learning), and Reason (General Intelligence), all at the same time. Autonomous cars and personal companions are some of the examples that benefit from Embodied AI today. In this paper, we attempt to give a concise review of this field. We go through its definitions, its characteristics, and its current achievements, along with the different algorithms, approaches, and solutions used in its different components (e.g. Vision, NLP, RL). We then explore all the available simulators and interactable 3D databases that make research in this area feasible. Finally, we address its challenges and identify its potential for future research.
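To make the contrast between learning from interaction and learning from static datasets concrete, the following is a minimal sketch, assuming a toy one-dimensional corridor environment of our own invention (the task, rewards, and hyperparameters are illustrative and not taken from the paper), of an agent improving purely from its own experience via tabular Q-learning:

    # Minimal sketch: an agent that learns by interacting with its environment
    # (tabular Q-learning on a toy corridor), rather than from a static dataset.
    import random

    N_STATES = 5          # corridor cells 0..4; the goal sits at cell 4
    ACTIONS = [-1, +1]    # step left or step right
    q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
    alpha, gamma, epsilon = 0.5, 0.9, 0.3   # high exploration for this tiny task

    for episode in range(200):
        s = 0
        while s != N_STATES - 1:
            # Epsilon-greedy action selection: explore occasionally.
            if random.random() < epsilon:
                a = random.choice(ACTIONS)
            else:
                a = max(ACTIONS, key=lambda act: q[(s, act)])
            s_next = min(max(s + a, 0), N_STATES - 1)
            r = 1.0 if s_next == N_STATES - 1 else 0.0   # reward only at the goal
            # Q-learning update from the transition just experienced.
            best_next = max(q[(s_next, b)] for b in ACTIONS)
            q[(s, a)] += alpha * (r + gamma * best_next - q[(s, a)])
            s = s_next

    # The learned greedy policy walks straight to the goal (prints [1, 1, 1, 1]).
    print([max(ACTIONS, key=lambda act: q[(s, act)]) for s in range(N_STATES - 1)])

The same interact-observe-update loop, with the lookup table replaced by deep networks for perception and language, is the skeleton of the reinforcement-learning component mentioned above.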
Daniel C. Elton (2020)
The ability to explain decisions made by AI systems is highly sought after, especially in domains where human lives are at stake, such as medicine or autonomous vehicles. While it is often possible to approximate the input-output relations of deep neural networks with a few human-understandable rules, the discovery of the double descent phenomenon suggests that such approximations do not accurately capture the mechanism by which deep neural networks work. Double descent indicates that deep neural networks typically operate by smoothly interpolating between data points rather than by extracting a few high-level rules. As a result, neural networks trained on complex real-world data are inherently hard to interpret and prone to failure if asked to extrapolate. To show how we might be able to trust AI despite these problems, we introduce the concept of self-explaining AI. Self-explaining AIs are capable of providing a human-understandable explanation of each decision along with confidence levels for both the decision and the explanation. For this approach to work, it is important that the explanation actually be related to the decision, ideally capturing the mechanism used to arrive at it. Finally, we argue it is important that deep learning based systems include a warning light, based on techniques from applicability domain analysis, to warn the user if the model is asked to extrapolate outside its training distribution. For a video presentation of this talk see https://www.youtube.com/watch?v=Py7PVdcu7WY
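As a rough illustration of the interface this abstract argues for (decision plus explanation plus confidence plus a warning light), here is a minimal sketch; the 1-nearest-neighbour model, the toy data, and the distance-based applicability-domain check are illustrative stand-ins, not the paper's method:

    # Minimal sketch of a self-explaining interface: every prediction carries a
    # human-readable explanation, a confidence score, and a warning light that
    # fires when the input falls outside the model's applicability domain.
    import math

    TRAIN = [((0.0, 0.0), "benign"), ((1.0, 1.0), "benign"),
             ((5.0, 5.0), "malign"), ((6.0, 5.0), "malign")]

    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

    def predict_with_explanation(x, domain_radius=2.0):
        # 1-NN decision: the nearest training example determines the label.
        nearest, label = min(TRAIN, key=lambda ex: dist(x, ex[0]))
        d = dist(x, nearest)
        confidence = max(0.0, 1.0 - d / domain_radius)   # crude confidence proxy
        explanation = f"labelled {label!r} because it resembles training case {nearest}"
        # Applicability-domain check: warn if x is far from all training data.
        warning = d > domain_radius
        return {"decision": label, "explanation": explanation,
                "confidence": round(confidence, 2), "warning_light": warning}

    print(predict_with_explanation((0.5, 0.5)))    # in-domain: no warning
    print(predict_with_explanation((20.0, 20.0)))  # extrapolation: warning fires

The design point is that the explanation cites the actual mechanism behind the decision (here, the nearest training case) and the warning light fires precisely when the model is asked to extrapolate.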
Huimin Peng (2021)
This paper briefly reviews the history of meta-learning and describes its contribution to general AI. Meta-learning improves model generalization capacity and devises general algorithms potentially applicable to both in-distribution and out-of-distribution tasks. General AI replaces task-specific models with general algorithmic systems, introducing a higher level of automation in solving diverse tasks with AI. We summarize the main contributions of meta-learning to the developments in general AI, including the memory module, meta-learner, coevolution, curiosity, forgetting, and AI-generating algorithms. We present connections between meta-learning and general AI and discuss how meta-learning can be used to formulate general AI algorithms.
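To fix ideas about what a meta-learning inner/outer loop looks like, here is a minimal first-order MAML-style sketch; the scalar task family, step sizes, and one-step adaptation are illustrative assumptions, not drawn from the paper:

    # Minimal sketch of meta-learning (first-order MAML-style) on toy scalar
    # regression tasks y = w_task * x: the outer loop searches for a shared
    # initialization w0 from which one inner gradient step adapts well to any task.
    import random

    def loss_grad(w, xs, ys):
        # d/dw of the mean squared error for the model f(x) = w * x
        return sum(2 * x * (w * x - y) for x, y in zip(xs, ys)) / len(xs)

    def sample_task():
        w_task = random.uniform(-2.0, 2.0)
        xs = [random.uniform(-1, 1) for _ in range(10)]
        ys = [w_task * x for x in xs]
        return xs, ys

    w0 = 0.0                      # meta-parameters: the shared initialization
    inner_lr, outer_lr = 0.1, 0.05

    for step in range(1000):
        xs, ys = sample_task()
        # Inner loop: one task-specific adaptation step from the shared init.
        w_adapted = w0 - inner_lr * loss_grad(w0, xs, ys)
        # Outer loop: move the init using the adapted parameters' gradient
        # (first-order approximation; full MAML would also differentiate
        # through the inner step).
        w0 -= outer_lr * loss_grad(w_adapted, xs, ys)

    print("meta-learned initialization:", round(w0, 3))

For this symmetric task family the initialization settles near the centre of the task distribution; the point of the sketch is the two-level structure, in which the outer loop optimizes how well the inner loop adapts.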
C. Sivaram (2008)
Gurzadyan-Xue Dark Energy was derived in 1986 (seventeen years before the paper of Gurzadyan-Xue). The paper by the present author, titled "The Planck Length as a Cosmological Constant", published in Astrophysics and Space Science, Vol. 127, pp. 133-137, 1986, contains the formula claimed to have been derived by Gurzadyan-Xue (in 2003).
The idea of breaking time-translation symmetry has fascinated humanity at least since ancient proposals of the perpetuum mobile. Unlike the breaking of other symmetries, such as spatial translation in a crystal or spin rotation in a magnet, time-translation symmetry breaking (TTSB) has been tantalisingly elusive. We review this history up to recent developments which have shown that discrete TTSB does take place in periodically driven (Floquet) systems in the presence of many-body localization. Such Floquet time-crystals represent a new paradigm in quantum statistical mechanics: that of an intrinsically out-of-equilibrium many-body phase of matter. We include a compendium of the necessary background before specializing to a detailed discussion of the nature, and diagnostics, of TTSB. We formalize the notion of a time-crystal as a stable, macroscopic, conservative clock, explaining both the need for a many-body system in the infinite volume limit and for a lack of net energy absorption or dissipation. We also cover a range of related phenomena, including various types of long-lived prethermal time-crystals, and expose the roles played by symmetries, exact and (emergent) approximate, and their breaking. We clarify the distinctions between many-body time-crystals and other ostensibly similar phenomena dating as far back as the works of Faraday and Mathieu. En route, we encounter Wilczek's suggestion that macroscopic systems should exhibit TTSB in their ground states, together with a theorem ruling this out. We also analyze pioneering recent experiments detecting signatures of time crystallinity in a variety of different platforms, and provide a detailed theoretical explanation of the physics in each case. In all existing experiments the system does not realize a true time-crystal phase, and we identify the necessary ingredients for improvements in future experiments.
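For readers who want the headline diagnostic in symbols, the following is a compact, textbook-style statement of discrete TTSB (a generic formulation, not copied from the review):

    % For a Floquet system driven with period T, discrete TTSB means that some
    % local order parameter O responds subharmonically, at a multiple of T:
    \begin{align}
      \langle O(t+T) \rangle \neq \langle O(t) \rangle,
      \qquad
      \langle O(t+nT) \rangle = \langle O(t) \rangle \quad (n \ge 2),
    \end{align}
    % with the subharmonic response robust to generic perturbations and
    % persisting to infinite times in the thermodynamic limit.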


