We review attempts that have been made towards understanding the computational properties and mechanisms of input-driven dynamical systems like RNNs, and reservoir computing networks in particular. We provide details on methods that have been developed to give quantitative answers to the questions above. Following this, we show how self-organization may be used to improve reservoirs for better performance, in some cases guided by the measures presented before. We also present a possible way to quantify task performance using an information-theoretic approach, and finally discuss promising future directions aimed at a better understanding of how these systems perform their computations and how to best guide self-organized processes for their optimization.
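To make the reservoir computing setting referred to above concrete, the following is a minimal, illustrative echo state network sketch: a fixed random reservoir driven by an input sequence, with only a linear readout trained by ridge regression. The sizes, scalings and the toy delay task are assumptions made for this example, not details taken from the reviewed work.

```python
# Minimal echo state network (ESN) sketch: a random, fixed reservoir driven by
# an input signal; only the linear readout is trained (here via ridge regression).
import numpy as np

rng = np.random.default_rng(0)
n_in, n_res = 1, 100                                  # input/reservoir sizes (assumed)
W_in = rng.uniform(-0.5, 0.5, (n_res, n_in))
W = rng.uniform(-0.5, 0.5, (n_res, n_res))
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))       # keep spectral radius below 1

def run_reservoir(u):
    """Drive the reservoir with input sequence u (T x n_in) and collect states."""
    x = np.zeros(n_res)
    states = []
    for u_t in u:
        x = np.tanh(W @ x + W_in @ u_t)
        states.append(x.copy())
    return np.array(states)

# Toy task: reproduce a delayed copy of the input (a simple memory benchmark).
T, delay = 2000, 5
u = rng.uniform(-1, 1, (T, n_in))
X = run_reservoir(u)
y = np.roll(u[:, 0], delay)
ridge = 1e-6
W_out = np.linalg.solve(X.T @ X + ridge * np.eye(n_res), X.T @ y)
print("training MSE:", np.mean((X @ W_out - y) ** 2))
```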
This work describes preliminary steps towards nano-scale reservoir computing using quantum dots. Our research has focused on the development of an accumulator-based sensing system that reacts to changes in the environment, as well as the development of a software simulation. The investigated systems generate nonlinear responses to inputs that make them suitable for a physical implementation of a neural network. This development will enable miniaturisation of the neurons to the molecular level, leading to a range of applications including monitoring of changes in materials or structures. The system is based on the optical properties of quantum dots. The paper will report on experimental work on systems using Cadmium Selenide (CdSe) quantum dots and on the various methods to render the systems sensitive to pH, redox potential or specific ion concentration. Once the quantum dot-based systems are rendered sensitive to these triggers they can provide a distributed array that can monitor and transmit information on changes within the material.
Information theory and the framework of information dynamics have been used to provide tools to characterise complex systems. In particular, we are interested in quantifying information storage, information modification and information transfer as characteristic elements of computation. Although these quantities are defined for autonomous dynamical systems, information dynamics can also help to get a holistic understanding of input-driven systems such as neural networks. In this case, we do not distinguish between the system itself and the effects the input has on the system. This may be desired in some cases, but it changes the questions we are able to answer, and is consequently an important consideration, for example, for biological systems which perform non-trivial computations and also retain a short-term memory of past inputs. Many other real-world systems like cortical networks are also heavily input-driven, and applying tools designed for autonomous dynamical systems may not necessarily lead to intuitively interpretable results. The aim of our work is to extend the measures used in the information dynamics framework to input-driven systems. Using the proposed input-corrected information storage, we hope to better quantify system behaviour. This will be important for heavily input-driven systems like artificial neural networks, in order to abstract from specific benchmarks, and for brain networks, where intervention is difficult and individual components cannot be tested in isolation or with arbitrary input data.
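As a rough illustration of the kind of quantity involved, the sketch below computes a plug-in estimate of active information storage for a discrete time series, i.e. the mutual information between a length-k past and the next value. It shows only the standard, uncorrected measure; the input-corrected variant proposed here would additionally account for the driving input, which is not part of this sketch. The history length and binary alphabet are arbitrary choices for the example.

```python
# Plug-in estimate of active information storage I(x_n^(k); x_{n+1})
# for a discrete (here binary) time series.
import numpy as np
from collections import Counter

def active_information_storage(x, k=2):
    """Mutual information (in bits) between the k-step past and the next value."""
    pairs = [(tuple(x[i - k:i]), x[i]) for i in range(k, len(x))]
    n = len(pairs)
    joint = Counter(pairs)
    past = Counter(p for p, _ in pairs)
    nxt = Counter(s for _, s in pairs)
    ais = 0.0
    for (p, s), c in joint.items():
        p_joint = c / n
        ais += p_joint * np.log2(p_joint / ((past[p] / n) * (nxt[s] / n)))
    return ais

rng = np.random.default_rng(1)
x = rng.integers(0, 2, 10000)       # i.i.d. noise: storage should be close to zero
print(active_information_storage(x, k=2))
```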
The RoboCup 2D Simulation League incorporates several challenging features, setting a benchmark for Artificial Intelligence (AI). In this paper we describe some of the ideas and tools around the development of our team, Gliders2012. In our description, we focus on the evaluation function as one of our central mechanisms for action selection. We also point to a new framework for watching log files in a web browser that we release for use and further development by the RoboCup community. Finally, we also summarize results of the group and final matches we played during RoboCup 2012, with Gliders2012 finishing 4th out of 19 teams.
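To illustrate what evaluation-function-based action selection looks like in general, the sketch below scores each candidate action by a weighted sum of features of its predicted outcome state and picks the best one. The features, weights and state model are invented for the example and are not Gliders2012's actual evaluation function.

```python
# Hedged sketch of action selection via a linear evaluation function.
from dataclasses import dataclass
from typing import Dict

@dataclass
class State:
    ball_x: float        # ball position along the field, toward the opponent goal
    ball_to_goal: float  # distance from ball to the opponent goal
    opponents_near: int  # number of opponents close to the ball

# Illustrative weights (assumed, not taken from the paper)
WEIGHTS = {"ball_x": 1.0, "ball_to_goal": -2.0, "opponents_near": -0.5}

def evaluate(state: State) -> float:
    """Score a predicted state with a weighted sum of its features."""
    return (WEIGHTS["ball_x"] * state.ball_x
            + WEIGHTS["ball_to_goal"] * state.ball_to_goal
            + WEIGHTS["opponents_near"] * state.opponents_near)

def select_action(candidates: Dict[str, State]) -> str:
    """Pick the action whose predicted outcome state scores highest."""
    return max(candidates, key=lambda a: evaluate(candidates[a]))

# Example: compare a safe pass with a riskier dribble.
actions = {
    "pass_left": State(ball_x=10.0, ball_to_goal=35.0, opponents_near=0),
    "dribble":   State(ball_x=15.0, ball_to_goal=30.0, opponents_near=2),
}
print(select_action(actions))
```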
In long-term deployments of sensor networks, monitoring the quality of gathered data is a critical issue. Over the time of deployment, sensors are exposed to harsh conditions, causing some of them to fail or to deliver less accurate data. If such a degradation remains undetected, the usefulness of a sensor network can be greatly reduced. We present an approach that learns spatio-temporal correlations between different sensors, and makes use of the learned model to detect misbehaving sensors by using distributed computation and only local communication between nodes. We introduce SODESN, a distributed recurrent neural network architecture, and a learning method to train SODESN for fault detection in a distributed scenario. Our approach is evaluated using data from different types of sensors and is able to work well even with less-than-perfect link qualities and more than 50% of failed nodes.
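The sketch below is not the SODESN architecture itself; it illustrates the underlying idea with a deliberately simple stand-in: learn a model that predicts one sensor's readings from its neighbours, then flag the sensor when its prediction error stays high. The synthetic signals, the linear predictor and the threshold are all assumptions made for the example.

```python
# Model-based fault detection sketch: predict a sensor from its neighbours,
# flag it when the smoothed prediction error exceeds a threshold.
import numpy as np

rng = np.random.default_rng(2)
T = 500
temp = 20 + np.cumsum(rng.normal(0, 0.05, T))        # shared environmental signal
neighbours = temp[:, None] + rng.normal(0, 0.1, (T, 3))
target = temp + rng.normal(0, 0.1, T)
target[300:] += 5.0                                   # simulated drift fault at t=300

# Train a simple linear predictor on the (assumed fault-free) first half.
split = 250
w, *_ = np.linalg.lstsq(neighbours[:split], target[:split], rcond=None)
pred = neighbours @ w
err = np.abs(pred - target)

# Flag time steps where a smoothed error exceeds a threshold derived from training data.
window = 20
smoothed = np.convolve(err, np.ones(window) / window, mode="same")
threshold = 5 * smoothed[:split].mean()
print("first flagged step:", int(np.argmax(smoothed > threshold)))
```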