
Distributed Fault Detection in Sensor Networks using a Recurrent Neural Network

Posted by Oliver Obst
Publication date: 2009
Research field: Informatics Engineering
Paper language: English
Author: Oliver Obst





In long-term deployments of sensor networks, monitoring the quality of the gathered data is a critical issue. Over the course of a deployment, sensors are exposed to harsh conditions, causing some of them to fail or to deliver less accurate data. If such degradation remains undetected, the usefulness of a sensor network can be greatly reduced. We present an approach that learns spatio-temporal correlations between different sensors and uses the learned model to detect misbehaving sensors through distributed computation and only local communication between nodes. We introduce SODESN, a distributed recurrent neural network architecture, together with a learning method to train SODESN for fault detection in a distributed scenario. Our approach is evaluated on data from different types of sensors and works well even with less-than-perfect link quality and more than 50% of failed nodes.
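
The abstract gives no implementation detail, so the following is only a rough single-process sketch of the general idea behind reservoir-based distributed fault detection: each node keeps a tiny echo-state-style reservoir that predicts the node's own reading from its neighbors' readings and flags the sensor once the running prediction error becomes anomalous. The class name, reservoir size, learning rule, and threshold below are all our assumptions, not the SODESN architecture itself.

import numpy as np

rng = np.random.default_rng(0)

class NodeFaultDetector:
    # One node's detector (illustrative): a small random reservoir driven
    # by neighbor readings; a linear readout predicts the node's own reading.
    def __init__(self, n_neighbors, n_res=20, threshold=3.0):
        self.W_in = rng.uniform(-0.5, 0.5, (n_res, n_neighbors))
        W = rng.uniform(-0.5, 0.5, (n_res, n_res))
        self.W = W * (0.9 / np.max(np.abs(np.linalg.eigvals(W))))  # spectral radius < 1
        self.x = np.zeros(n_res)
        self.w_out = np.zeros(n_res)
        self.threshold = threshold
        self.n, self.mean, self.m2 = 0, 0.0, 0.0   # running error statistics

    def step(self, neighbor_readings, own_reading, train=False, lr=0.01):
        self.x = np.tanh(self.W_in @ np.asarray(neighbor_readings) + self.W @ self.x)
        err = own_reading - self.w_out @ self.x
        if train:                       # online LMS training of the readout
            self.w_out += lr * err * self.x
        self.n += 1                     # Welford update of error mean/variance
        d = err - self.mean
        self.mean += d / self.n
        self.m2 += d * (err - self.mean)
        std = (self.m2 / self.n) ** 0.5 + 1e-9
        return abs(err - self.mean) > self.threshold * std   # True = suspect

det = NodeFaultDetector(n_neighbors=3)
for t in range(200):                    # fault-free training phase
    nb = np.sin(0.1 * t + np.array([0.0, 0.5, 1.0]))
    det.step(nb, np.sin(0.1 * t), train=True)
# an implausible reading should now produce a large prediction error
print(det.step(np.sin(20.0 + np.array([0.0, 0.5, 1.0])), 5.0))

In the actual system each detector would run on its own node, and only one-hop neighbor readings would cross the radio link, matching the local-communication constraint stated above.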




Read also

Giovanni Iacca, 2018
Wireless Sensor Networks (WSNs) are an emerging technology in several application domains, ranging from urban surveillance to environmental and structural monitoring. Computational Intelligence (CI) techniques are particularly suitable for enhancing these systems. However, when embedding CI into wireless sensors, severe hardware limitations must be taken into account. In this paper we investigate the possibility of performing an online, distributed optimization process within a WSN. Such a system might be used, for example, to implement advanced network features like distributed modelling, self-optimizing protocols, and anomaly detection, to name a few. The proposed approach, called DOWSN (Distributed Optimization for WSN), is an island-model infrastructure in which each node executes a simple, computationally cheap (both in terms of CPU and memory) optimization algorithm and shares promising solutions with its neighbors. We perform extensive tests of different DOWSN configurations on a benchmark made up of continuous optimization problems; we analyze the influence of the network parameters (number of nodes, inter-node communication period, and probability of accepting incoming solutions) on the optimization performance. Finally, we profile the energy and memory consumption of DOWSN to show that it makes efficient use of the limited hardware resources available on the sensor nodes.
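
As a hedged illustration of the island-model idea (the ring topology, the (1+1)-style Gaussian mutation, and all names below are our choices, not details from the paper), each node could run a cheap local optimizer on a benchmark function and periodically accept a better neighboring solution with some probability. The three parameters mirror those the abstract says were analyzed: number of nodes, communication period, and acceptance probability.

import numpy as np

rng = np.random.default_rng(1)

def sphere(x):                       # benchmark objective (continuous optimization)
    return float(np.sum(x ** 2))

def run_dowsn_like(n_nodes=10, dim=5, steps=500,
                   share_period=10, p_accept=0.5):
    # ring topology: node i exchanges solutions with node i+1
    sol = rng.uniform(-5, 5, (n_nodes, dim))   # each node's incumbent solution
    fit = np.array([sphere(s) for s in sol])
    for t in range(steps):
        for i in range(n_nodes):
            # cheap local search: (1+1)-ES style Gaussian mutation
            cand = sol[i] + rng.normal(0, 0.1, dim)
            f = sphere(cand)
            if f < fit[i]:
                sol[i], fit[i] = cand, f
        if t % share_period == 0:               # periodic local exchange
            for i in range(n_nodes):
                j = (i + 1) % n_nodes           # one ring neighbor
                if fit[j] < fit[i] and rng.random() < p_accept:
                    sol[i], fit[i] = sol[j].copy(), fit[j]
    return fit.min()

print(run_dowsn_like())              # best fitness found across the network
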
Yuan Gao, 2016
This paper introduces two recurrent neural network structures called Simple Gated Unit (SGU) and Deep Simple Gated Unit (DSGU), which are general structures for learning long-term dependencies. Compared to traditional Long Short-Term Memory (LSTM) and Gated Recurrent Unit (GRU), both structures require fewer parameters and less computation time in sequence classification tasks. Unlike GRU and LSTM, which require more than one gate to control information flow in the network, SGU and DSGU use only one multiplicative gate to control the flow of information. We show that this difference can accelerate learning in tasks that require long-range dependency information. We also show that DSGU is more numerically stable than SGU. In addition, we propose a standard way of representing the inner structure of an RNN, called the RNN Conventional Graph (RCG), which helps in analyzing the relationship between the input units and hidden units of an RNN.
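
The exact SGU and DSGU update equations are given in the paper; the snippet below only illustrates, under our own simplified formulation, what a recurrent cell with a single multiplicative gate looks like next to the two gates of a GRU and the three of an LSTM.

import numpy as np

def single_gate_step(x, h, Wx, Wh, Ux, Uh):
    # One step of a generic single-gate recurrent cell (illustrative;
    # not the exact SGU equations). x: input, h: previous hidden state.
    g = 1.0 / (1.0 + np.exp(-(Ux @ x + Uh @ h)))   # the single gate
    cand = np.tanh(Wx @ x + Wh @ h)                # candidate state
    return g * cand + (1.0 - g) * h                # gated update

# example: 3-dim input, 4-dim hidden state
rng = np.random.default_rng(0)
x, h = rng.normal(size=3), np.zeros(4)
Wx, Ux = rng.normal(size=(4, 3)), rng.normal(size=(4, 3))
Wh, Uh = rng.normal(size=(4, 4)), rng.normal(size=(4, 4))
print(single_gate_step(x, h, Wx, Wh, Ux, Uh))

Since a GRU computes two gates and an LSTM three, a single-gate cell of the same hidden size needs correspondingly fewer gate parameters, which is the source of the parameter and computation savings claimed above.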
Wenrui Zhang, Peng Li, 2021
In neural circuits, recurrent connectivity plays a crucial role in network function and stability. However, existing recurrent spiking neural networks (RSNNs) are often constructed from random connections without optimization. While RSNNs can produce rich dynamics that are critical for memory formation and learning, systematic architectural optimization of RSNNs remains an open challenge. We aim to enable the systematic design of large RSNNs via a new scalable RSNN architecture and automated architectural optimization. We compose RSNNs from a layer architecture called Sparsely-Connected Recurrent Motif Layer (SC-ML), which consists of multiple small recurrent motifs wired together by sparse lateral connections. The small size of the motifs and the sparse inter-motif connectivity make the RSNN architecture scalable to large network sizes. We further propose a method called Hybrid Risk-Mitigating Architectural Search (HRMAS) to systematically optimize the topology of the proposed recurrent motifs and the SC-ML layer architecture. HRMAS is an alternating two-step optimization process that mitigates the risk of network instability and performance degradation caused by architectural change through a novel biologically inspired self-repairing mechanism based on intrinsic plasticity. The intrinsic plasticity is introduced in the second step of each HRMAS iteration and acts as fast, unsupervised self-adaptation to the structural and synaptic weight modifications introduced by the first step during the RSNN architectural evolution. To the best of the authors' knowledge, this is the first work to perform systematic architectural optimization of RSNNs. Using one speech dataset and three neuromorphic datasets, we demonstrate the significant performance improvement brought by the proposed automated architecture optimization over existing manually designed RSNNs.
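
The paper's concrete wiring is found by HRMAS; purely as an assumed sketch of the SC-ML layout described above, a recurrent weight matrix could place dense small motifs on the block diagonal and add sparse lateral connections between them, keeping the connection count roughly linear in the number of motifs.

import numpy as np

rng = np.random.default_rng(2)

def sc_ml_weights(n_motifs=8, motif_size=16, lateral_density=0.02):
    # Our sketch, not the paper's exact construction: dense small recurrent
    # motifs on the block diagonal plus sparse inter-motif lateral links.
    n = n_motifs * motif_size
    W = np.zeros((n, n))
    for m in range(n_motifs):                    # dense intra-motif blocks
        a = m * motif_size
        W[a:a + motif_size, a:a + motif_size] = rng.normal(
            0, 0.1, (motif_size, motif_size))
    lateral = rng.random((n, n)) < lateral_density   # sparse lateral mask
    for m in range(n_motifs):                    # keep lateral links off-block
        a = m * motif_size
        lateral[a:a + motif_size, a:a + motif_size] = False
    W[lateral] = rng.normal(0, 0.05, lateral.sum())
    return W

W = sc_ml_weights()
print(W.shape, (W != 0).mean())   # matrix size and overall connection density
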
Photonic neural network implementations have been gaining considerable attention as a potentially disruptive future technology. Demonstrating learning in large-scale neural networks is essential to establish photonic machine learning substrates as viable information processing systems. Until now, photonic neural networks with numerous nonlinear nodes in fully parallel and efficient learning hardware have been lacking. We demonstrate a network of up to 2500 diffractively coupled photonic nodes, forming a large-scale recurrent neural network. Using a Digital Micromirror Device, we realize reinforcement learning. Our scheme is fully parallel, and the passive weights maximize energy efficiency and bandwidth. The computational output converges efficiently, and we achieve very good performance.
The most common approach to mitigating the impact that the presence of malicious nodes has on the accuracy of decision fusion schemes consists of observing the behavior of the nodes over a time interval T and then removing the reports of suspect nodes from the fusion process. By assuming that some a priori information about the presence of malicious nodes and their behavior is available, we show that the information stemming from the suspect nodes can be exploited to further improve decision fusion accuracy. Specifically, we derive the optimum fusion rule and analyze the achievable performance for two specific cases. In the first case, the states of the nodes (corrupted or honest) are independent of each other and the fusion center knows only the probability that a node is malicious. In the second case, the exact number of corrupted nodes is fixed and known to the fusion center. We also investigate the optimum corruption strategy for the malicious nodes, showing that always inverting the local decision does not necessarily maximize the loss of performance at the fusion center.
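
For the first case (independent node states, known probability of being malicious), here is a simplified worked example of a fusion rule that still uses suspect reports: assuming symmetric local error rates and a known flip probability for malicious nodes, every report is weighted by a log-likelihood ratio instead of being discarded. The model and parameters (p, alpha, beta) are our simplification, not the paper's exact derivation.

import numpy as np

def fuse(reports, p=0.8, alpha=0.2, beta=1.0):
    # LLR fusion of binary reports under a known prior of maliciousness.
    # p:     prob. a node's local decision is correct
    # alpha: prob. a node is malicious
    # beta:  prob. a malicious node flips its local decision
    # prob. a report matches the true hypothesis, averaged over node state:
    q = (1 - alpha) * p + alpha * (p * (1 - beta) + (1 - p) * beta)
    llr_one = np.log(q / (1 - q))    # LLR contribution of a '1' report
    reports = np.asarray(reports)
    total = np.sum(np.where(reports == 1, llr_one, -llr_one))
    return 1 if total > 0 else 0     # MAP decision with equal priors

# With alpha=0.2, p=0.8, beta=1.0: q = 0.8*0.8 + 0.2*0.2 = 0.68, so each
# report still carries information; raising alpha shrinks q toward 1/2,
# smoothly down-weighting reports rather than removing them.
print(fuse([1, 1, 0, 1, 0]))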