
Evolutionary Self-Replication as a Mechanism for Producing Artificial Intelligence

Published by: Samuel Schmidgall
Publication date: 2021
Research field: Informatics Engineering
Paper language: English





Can reproduction alone in the context of survival produce intelligence in our machines? In this work, self-replication is explored as a mechanism for the emergence of intelligent behavior in modern learning environments. By focusing purely on survival, while undergoing natural selection, evolved organisms are shown to produce meaningful, complex, and intelligent behavior, demonstrating creative solutions to challenging problems without any notion of reward or objectives. Atari and robotic learning environments are re-defined in terms of natural selection, and the behavior which emerged in self-replicating organisms during these experiments is described in detail.
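
The abstract gives no implementation details, but the core loop it describes (organisms act, die, and self-replicate with mutation, with no reward signal anywhere) can be illustrated with a toy sketch. The following Python sketch is an assumption, not the authors' code: the one-dimensional ToyEnv, the single-gene policy, and all constants are invented for illustration; selection acts only through death and age-triggered replication.

import random

POP_SIZE, STEPS, REPLICATION_AGE, MUTATION_STD = 50, 1000, 20, 0.1

class ToyEnv:
    """1-D world: an organism dies if it drifts outside [-1, 1]."""
    def reset(self):
        self.x = 0.0
        return self.x

    def step(self, action):
        self.x += action + random.gauss(0, 0.05)   # action plus noise
        alive = -1.0 <= self.x <= 1.0
        return self.x, alive

class Organism:
    def __init__(self, gene=None):
        # one-parameter policy: action = -gene * observation
        self.gene = random.uniform(0, 2) if gene is None else gene
        self.age = 0
        self.env = ToyEnv()
        self.obs = self.env.reset()

    def act(self):
        return -self.gene * self.obs

    def replicate(self):
        # asexual reproduction with Gaussian mutation, no fitness score
        return Organism(self.gene + random.gauss(0, MUTATION_STD))

population = [Organism() for _ in range(POP_SIZE)]
for _ in range(STEPS):
    survivors = []
    for org in population:
        org.obs, alive = org.env.step(org.act())
        if not alive:
            continue                      # death is the only selection
        org.age += 1
        survivors.append(org)
        if org.age % REPLICATION_AGE == 0 and len(survivors) < POP_SIZE:
            survivors.append(org.replicate())
    population = survivors
    if not population:
        break

genes = [o.gene for o in population]
print(f"survivors: {len(genes)}, mean gene: {sum(genes) / max(len(genes), 1):.2f}")

In this toy, genes near 1 keep the organism's position stable, so they come to dominate the population over time even though the loop never computes a reward or objective.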


Read also

Current approaches to explaining the decisions of deep learning systems for medical tasks have focused on visualising the elements that have contributed to each decision. We argue that such approaches are not enough to open the black box of medical decision making systems because they are missing a key component that has been used as a standard communication tool between doctors for centuries: language. We propose a model-agnostic interpretability method that involves training a simple recurrent neural network model to produce descriptive sentences to clarify the decision of deep learning classifiers. We test our method on the task of detecting hip fractures from frontal pelvic x-rays. This process requires minimal additional labelling despite producing text containing elements that the original deep learning classification model was not specifically trained to detect. The experimental results show that: 1) the sentences produced by our method consistently contain the desired information, 2) the generated sentences are preferred by doctors compared to current tools that create saliency maps, and 3) the combination of visualisations and generated text is better than either alone.
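
As a rough illustration of the kind of model-agnostic explainer this abstract describes (not the authors' implementation), one can condition a small LSTM decoder on a frozen classifier's feature vector and train it to emit a short descriptive sentence. Every name and constant below (Explainer, FEAT_DIM, the toy vocabulary) is an assumption for the sketch.

import torch
import torch.nn as nn

FEAT_DIM, EMB, HID = 512, 64, 128
VOCAB = ["<bos>", "<eos>", "no", "fracture", "detected"]
stoi = {w: i for i, w in enumerate(VOCAB)}

class Explainer(nn.Module):
    def __init__(self):
        super().__init__()
        self.init_h = nn.Linear(FEAT_DIM, HID)   # classifier features -> h0
        self.embed = nn.Embedding(len(VOCAB), EMB)
        self.lstm = nn.LSTM(EMB, HID, batch_first=True)
        self.out = nn.Linear(HID, len(VOCAB))

    def forward(self, feats, tokens):
        h0 = torch.tanh(self.init_h(feats)).unsqueeze(0)   # (1, B, HID)
        c0 = torch.zeros_like(h0)
        x, _ = self.lstm(self.embed(tokens), (h0, c0))
        return self.out(x)                                  # (B, T, |V|) logits

model = Explainer()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

feats = torch.randn(4, FEAT_DIM)     # stand-in for frozen-classifier features
target = torch.tensor([[stoi["<bos>"], stoi["fracture"], stoi["detected"],
                        stoi["<eos>"]]] * 4)

opt.zero_grad()
logits = model(feats, target[:, :-1])   # teacher forcing on the prefix
loss = nn.functional.cross_entropy(logits.reshape(-1, len(VOCAB)),
                                   target[:, 1:].reshape(-1))
loss.backward()
opt.step()

Because the classifier stays frozen and only its features are consumed, the decoder can be bolted onto any model, which is what makes the method model-agnostic.
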
Algorithms implementing populations of agents which interact with one another and sense their environment may exhibit emergent behavior such as self-organization and swarm intelligence. Here a swarm system, called Databionic swarm (DBS), is introduced which is able to adapt itself to structures of high-dimensional data characterized by distance and/or density-based structures in the data space. By exploiting the interrelations of swarm intelligence, self-organization and emergence, DBS serves as an alternative approach to the optimization of a global objective function in the task of clustering. The swarm omits the usage of a global objective function and is parameter-free because it searches for the Nash equilibrium during its annealing process. To our knowledge, DBS is the first swarm combining these approaches. Its clustering can outperform common clustering methods such as K-means, PAM, single linkage, spectral clustering, model-based clustering, and Ward, if no prior knowledge about the data is available. A central problem in clustering is the correct estimation of the number of clusters. This is addressed by a DBS visualization called the topographic map, which allows assessing the number of clusters. It is known that all clustering algorithms construct clusters, irrespective of whether the data set contains clusters or not. In contrast to most other clustering algorithms, the topographic map indicates that clustering of the data is meaningless if the data contains no (natural) clusters. The performance of DBS is demonstrated on a set of benchmark data constructed to pose difficult clustering problems, and in two real-world applications.
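
DBS itself is more involved than a short sketch can capture; the toy below is explicitly not DBS, only a generic illustration of the ingredients the abstract names: data points act as agents in a 2-D plane following purely local rules (move toward agents similar in data space, step away from dissimilar ones) under an annealing schedule, so groups emerge without any global objective function. All constants are illustrative assumptions.

import numpy as np

rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0, 1, (30, 10)),     # toy data: two blobs in 10-D
               rng.normal(4, 1, (30, 10))])
n = len(X)
d = np.linalg.norm(X[:, None] - X[None, :], axis=-1)  # data-space distances
near = np.argsort(d, axis=1)[:, 1:6]   # 5 most similar agents (skip self)
far = np.argsort(d, axis=1)[:, -5:]    # 5 most dissimilar agents

pos = rng.uniform(0, 1, (n, 2))        # random starting positions in the plane
for temp in np.linspace(0.5, 0.01, 200):           # annealed step size
    for i in rng.permutation(n):
        pos[i] += temp * (pos[near[i]].mean(axis=0) - pos[i])   # attract
        away = pos[i] - pos[far[i]].mean(axis=0)
        pos[i] += temp * 0.1 * away / (np.linalg.norm(away) + 1e-9)  # repel

# clusters = connected components under a small plane-distance threshold
plane = np.linalg.norm(pos[:, None] - pos[None, :], axis=-1) < 0.1
labels = -np.ones(n, dtype=int)
for i in range(n):
    if labels[i] == -1:
        stack, labels[i] = [i], labels.max() + 1
        while stack:
            j = stack.pop()
            for k in np.nonzero(plane[j] & (labels == -1))[0]:
                labels[k] = labels[i]
                stack.append(k)
print("emergent clusters:", np.bincount(labels))

No agent ever evaluates a clustering-quality score; the grouping falls out of the local interactions, which is the sense in which such systems sidestep a global objective.
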
While Moore's law has driven exponential computing power expectations, its nearing end calls for new avenues for improving the overall system performance. One of these avenues is the exploration of new alternative brain-inspired computing architectures that promise to achieve the flexibility and computational efficiency of biological neural processing systems. Within this context, neuromorphic intelligence represents a paradigm shift in computing based on the implementation of spiking neural network architectures tightly co-locating processing and memory. In this paper, we provide a comprehensive overview of the field, highlighting the different levels of granularity present in existing silicon implementations, comparing approaches that aim at replicating natural intelligence (bottom-up) versus those that aim at solving practical artificial intelligence applications (top-down), and assessing the benefits of the different circuit design styles used to achieve these goals. First, we present the analog, mixed-signal and digital circuit design styles, identifying the boundary between processing and memory through time multiplexing, in-memory computation and novel devices. Next, we highlight the key tradeoffs for each of the bottom-up and top-down approaches, survey their silicon implementations, and carry out detailed comparative analyses to extract design guidelines. Finally, we identify both necessary synergies and missing elements required to achieve a competitive advantage for neuromorphic edge computing over conventional machine-learning accelerators, and outline the key elements for a framework toward neuromorphic intelligence.
The explosive increase in the number of smart devices hosting sophisticated applications is rapidly affecting the landscape of the information communication technology industry. Mobile subscriptions, expected to reach 8.9 billion by 2022, would drastically increase the demand for extra capacity, with aggregate throughput anticipated to be enhanced by a factor of 1000. In an already crowded radio spectrum, it becomes increasingly difficult to meet the ever-growing application demands of wireless bandwidth. It has been shown that the allocated spectrum is seldom utilized by the primary users and hence contains spectrum holes that may be exploited by unlicensed users for their communication. As we enter the Internet of Things (IoT) era, in which appliances of common use will become smart digital devices with rigid performance requirements (such as low latency, energy efficiency, etc.), current networks face the vexing problem of how to create sufficient capacity for such applications. The fifth generation of cellular networks (5G), envisioned to address these challenges, is thus required to incorporate cognition and intelligence to resolve the aforementioned issues.
Artificial neural networks (ANNs), while exceptionally useful for classification, are vulnerable to misdirection. Small amounts of noise can significantly affect their ability to correctly complete a task. Instead of generalizing concepts, ANNs seem to focus on surface statistical regularities in a given task. Here we compare how recurrent artificial neural networks, long short-term memory units, and Markov Brains sense and remember their environments. We show that information in Markov Brains is localized and sparsely distributed, while the other neural network substrates smear information about the environment across all nodes, which makes them vulnerable to noise.
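
The localized-versus-smeared distinction can be made concrete by estimating, for each internal node, the mutual information between that node's activity and an environment feature. The sketch below is not the authors' analysis code; it uses synthetic stand-ins for the two kinds of substrate, and the data sizes and node counts are assumptions.

import numpy as np

def mutual_information(x, y):
    """MI in bits between two binary 1-D arrays."""
    joint = np.histogram2d(x, y, bins=(2, 2))[0]
    pxy = joint / joint.sum()
    px = pxy.sum(axis=1, keepdims=True)
    py = pxy.sum(axis=0, keepdims=True)
    nz = pxy > 0
    return float((pxy[nz] * np.log2(pxy[nz] / (px @ py)[nz])).sum())

rng = np.random.default_rng(0)
env = rng.integers(0, 2, size=5000)          # binary environment feature

# "localized" substrate: one node mirrors the environment, the rest are noise
localized = rng.integers(0, 2, size=(5000, 8))
localized[:, 0] = env

# "smeared" substrate: every node is a noisy copy of the environment
smeared = np.where(rng.random((5000, 8)) < 0.8, env[:, None],
                   rng.integers(0, 2, size=(5000, 8)))

for name, nodes in [("localized", localized), ("smeared", smeared)]:
    mi = [mutual_information(nodes[:, j], env) for j in range(nodes.shape[1])]
    print(name, [f"{v:.2f}" for v in mi])

In this synthetic setup, the localized substrate shows roughly one bit of information in a single node and none elsewhere, while the smeared substrate shows a fraction of a bit in every node, which is the pattern the abstract attributes to Markov Brains versus conventional neural substrates.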