Much of neuroscience aims at reverse engineering the brain, but we can only record from a small number of neurons at a time. We do not currently know whether reverse engineering the brain requires us to record most neurons simultaneously, or whether multiple recordings from smaller subsets suffice. This question is made all the more important by the development of novel techniques, e.g. optical methods, that allow recording from selected subsets of neurons. To address this question, we analyze a neural network trained on the MNIST dataset using only partial recordings, and characterize how the quality of our reverse engineering depends on the number of simultaneously recorded neurons. We find that reverse engineering the nonlinear neural network is meaningfully possible if a sufficiently large number of neurons is recorded simultaneously, but that this number can be considerably smaller than the total number of neurons in the network. Moreover, recording many times from small random subsets of neurons yields surprisingly good performance. Applied to neuroscience, this suggests that approximating the input/output (I/O) function of an actual neural system will require recording from a much larger number of neurons. The kind of scaling analysis we perform here can, and arguably should, be used to calibrate approaches that can dramatically scale up the size of recorded data sets in neuroscience.
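To make the scaling experiment concrete, the following is a minimal sketch, not the authors' code: a random "teacher" network stands in for the MNIST-trained network, a least-squares linear readout stands in for the reverse-engineering procedure, and the helper `record_subset` is hypothetical. It measures how well the network's output can be reproduced from a random subset of simultaneously recorded hidden neurons, as a function of subset size.

```python
import numpy as np

rng = np.random.default_rng(0)

# Teacher network standing in for the MNIST-trained network:
# one hidden layer of ReLU neurons, scalar output.
n_in, n_hidden = 20, 200
W1 = rng.normal(size=(n_hidden, n_in)) / np.sqrt(n_in)
w2 = rng.normal(size=n_hidden) / np.sqrt(n_hidden)

# "Stimuli" presented while recording.
X = rng.normal(size=(5000, n_in))
H = np.maximum(X @ W1.T, 0.0)   # hidden activations (the "recordable" neurons)
y = H @ w2                      # network output (the I/O function to recover)

def record_subset(n_recorded):
    """Record a random subset of hidden neurons and fit a least-squares
    readout that tries to reproduce the network's output from them."""
    idx = rng.choice(n_hidden, size=n_recorded, replace=False)
    H_sub = H[:, idx]
    w, *_ = np.linalg.lstsq(H_sub, y, rcond=None)
    resid = y - H_sub @ w
    return 1.0 - resid.var() / y.var()   # variance explained (R^2)

# Quality of the recovered I/O function vs. number of recorded neurons.
for n in (10, 25, 50, 100, 200):
    r2 = np.mean([record_subset(n) for _ in range(10)])
    print(f"{n:4d} recorded neurons -> R^2 ~ {r2:.3f}")
```

In this toy setting, R^2 approaches 1 well before all 200 hidden neurons are recorded, which is the kind of scaling curve the abstract describes; real neural data would of course require a nonlinear model and far more care.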
A major goal in neuroscience is to understand how populations of neurons code for stimuli or actions. While the number of neurons that can be recorded simultaneously is increasing at a fast pace, in most cases these recordings cannot access a complete population …
Reverse-engineering bar charts extracts textual and numeric information from the visual representations of bar charts to support application scenarios that require the underlying information. In this paper, we propose a neural network-based method for reverse-engineering bar charts …
Neural networks have been shown to be vulnerable to fault injection attacks. These attacks change the physical behavior of the device during the computation, resulting in a change in the value currently being computed. They can be realized by …
It has been widely assumed that a neural network cannot be recovered from its outputs, as the network depends on its parameters in a highly nonlinear way. Here, we prove that in fact it is often possible to identify the architecture, weights, and biases … (one intuition behind such identification is sketched after these abstracts).
Understanding the localization properties of eigenvectors of complex networks is important for gaining insight into various structural and dynamical properties of the corresponding systems. Here, we analytically develop a scheme to construct a highly localized …
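One intuition behind the ReLU-identification claim above can be shown in a few lines. The sketch below is an illustrative assumption, not the cited paper's algorithm: along any line through input space, the output of a ReLU network is piecewise linear, and each kink marks a hyperplane where some hidden neuron's preactivation crosses zero; recovering those hyperplanes is a first step toward recovering weights and biases. The network size, probe grid, and threshold are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(1)

# Unknown one-hidden-layer ReLU network, queried only as a black box.
n_in, n_hidden = 5, 8
W1 = rng.normal(size=(n_hidden, n_in))
b1 = rng.normal(size=n_hidden)
w2 = rng.normal(size=n_hidden)

def f(x):
    """Black-box query of the network's scalar output."""
    return w2 @ np.maximum(W1 @ x + b1, 0.0)

# Probe the output along a random line x(t) = x0 + t * d.
x0, d = rng.normal(size=n_in), rng.normal(size=n_in)
ts = np.linspace(-5, 5, 20001)
ys = np.array([f(x0 + t * d) for t in ts])

# The output is piecewise linear in t, so second differences are ~0
# except at kinks, where some neuron's preactivation crosses zero.
curvature = np.abs(np.diff(ys, 2))
detected = np.unique(np.round(ts[1:-1][curvature > 1e-6], 2))

# Ground truth: solve W1[i] @ (x0 + t*d) + b1[i] = 0 for each neuron i.
truth = -(W1 @ x0 + b1) / (W1 @ d)
print("detected kinks:", detected)
print("true crossings:", np.sort(truth[np.abs(truth) < 5]).round(2))
```

The detected kinks should line up with the true preactivation zero-crossings, which is what makes identification from outputs possible in principle.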