Deep convolutional neural networks (DCNNs) have revolutionized computer vision and are often advocated as good models of the human visual system. However, DCNNs currently have many shortcomings that preclude them as models of human vision. One example is adversarial attacks, in which adding a small amount of noise to an image containing an object can lead to strong misclassification of that object, even though the noise is often invisible to humans. If vulnerability to adversarial noise cannot be fixed, DCNNs cannot be taken as serious models of human vision. Many studies have tried to add features of the human visual system to DCNNs to make them robust against adversarial attacks. However, it is not fully clear whether components inspired by human vision increase robustness, because performance evaluations of these novel components in DCNNs are often inconclusive. We propose a set of criteria for proper evaluation and analyze different models according to these criteria. We finally sketch future efforts to bring DCNNs one step closer to a model of human vision.
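As a concrete illustration of the adversarial vulnerability described above, the following minimal sketch applies an FGSM-style perturbation (Goodfellow et al.) to a toy linear classifier standing in for a DCNN; all weights, inputs, and the budget `epsilon` are illustrative assumptions, not drawn from the works discussed here.

```python
import numpy as np

# Toy stand-in for a DCNN: a 10-class linear classifier on flattened images.
rng = np.random.default_rng(0)
W = rng.normal(size=(10, 784))       # illustrative random weights
x = rng.normal(size=784)             # a "clean" input image (flattened)
orig_label = int(np.argmax(W @ x))   # the model's original prediction

# FGSM-style step: move against the sign of the gradient of the predicted
# class score. For a linear model, that gradient is simply W[orig_label].
epsilon = 0.25                       # per-pixel perturbation budget (illustrative)
x_adv = x - epsilon * np.sign(W[orig_label])

# The perturbation is bounded by epsilon per pixel, yet it lowers the score
# of the originally predicted class by epsilon * ||W[orig_label]||_1.
print(orig_label, int(np.argmax(W @ x_adv)))
```

The point mirrored here is the one the abstract makes: the perturbation is tiny in every pixel (and would be invisible to a human), yet it can move the classifier's decision.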
Researchers traditionally solve computational problems through rigorous, deterministic algorithms, an approach known as Hard Computing. These precise algorithms have widely been realized using digital technology as an inherently reliable and accurate implementation platform, in either hardware or software form. This rigid form of implementation, which we refer to as Hard Realization, relies on strict algorithmic accuracy constraints dictated to digital design engineers. Hard realization admits paying as much implementation cost as necessary to preserve computational precision and determinism throughout all design and implementation steps. Despite its prior accomplishments, this conventional paradigm has encountered serious challenges with today's emerging applications and implementation technologies. Unlike traditional hard computing, emerging soft and bio-inspired algorithms do not rely on fully precise and deterministic computation. Moreover, incoming nanotechnologies face increasing reliability issues that prevent them from being efficiently exploited in the hard realization of applications. This article examines Soft Realization, a novel bio-inspired approach to the design and implementation of an important category of applications, inspired by the internal structure of the brain. The proposed paradigm mitigates major weaknesses of hard realization by (1) alleviating incompatibilities with today's soft and bio-inspired algorithms such as artificial neural networks, fuzzy systems, and human-sense signal-processing applications, and (2) resolving the destructive inconsistency with unreliable nanotechnologies. Our experimental results on a set of well-known soft applications, implemented using the proposed soft-realization paradigm in both reliable and unreliable technologies, indicate that significant energy, delay, and area savings can be obtained compared to conventional implementation.
Event-based cameras are vision devices that transmit only brightness changes, with low latency and ultra-low power consumption. These characteristics make event-based cameras attractive for localization and object tracking in resource-constrained systems. Since the number of events generated by such cameras is huge, selecting and filtering the incoming events is beneficial both for increasing the accuracy of the features and for reducing the computational load. In this paper, we present an algorithm to detect asynchronous corners from a stream of events in real time on embedded systems. The algorithm, called the Three-Layer Filtering-Harris (TLF-Harris) algorithm, is based on an event-filtering strategy whose purpose is (1) to increase accuracy by deliberately eliminating some incoming events, i.e., noise, and (2) to improve the real-time performance of the system, i.e., to preserve a constant throughput in terms of input events per second, by discarding unnecessary events with a limited loss of accuracy. An approximation of the Harris algorithm, in turn, is used to exploit its high-quality detection capability in a low-complexity implementation, enabling seamless real-time performance on embedded computing platforms. The proposed algorithm is capable of selecting the best corner candidate among its neighbors and achieves an average execution-time saving of 59% compared with the conventional Harris score. Moreover, our approach outperforms competing methods such as eFAST, eHarris, and FA-Harris in terms of real-time performance, and surpasses Arc* in terms of accuracy.
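For reference, the classical Harris score that TLF-Harris approximates can be sketched as follows; the synthetic patches, uniform window, and the value of `k` are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def harris_score(patch, k=0.04):
    """Classical Harris response R = det(M) - k * trace(M)^2, where M is the
    structure tensor of image gradients accumulated over the patch."""
    gy, gx = np.gradient(patch.astype(float))
    Ixx, Iyy, Ixy = (gx * gx).sum(), (gy * gy).sum(), (gx * gy).sum()
    det_M = Ixx * Iyy - Ixy * Ixy
    trace_M = Ixx + Iyy
    return det_M - k * trace_M ** 2

# Synthetic 9x9 patches: an L-shaped intensity step (a corner) scores
# positive, while a straight vertical edge scores negative.
corner = np.zeros((9, 9)); corner[4:, 4:] = 1.0
edge = np.zeros((9, 9)); edge[:, 4:] = 1.0
print(harris_score(corner), harris_score(edge))
```

The cost the paper targets is visible in the structure: every candidate event requires gradient products and sums over a neighborhood, which is why filtering events before scoring them pays off on embedded hardware.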
Though sunlight is by far the most abundant renewable energy source available to humanity, its dilute and variable nature has kept efficient ways to collect, store, and distribute this energy tantalisingly out of reach. Turning the incoherent energy supply of sunlight into a coherent laser beam would overcome several practical limitations inherent in using sunlight as a source of clean energy: laser beams travel nearly losslessly over large distances, and they are effective at driving chemical reactions that convert sunlight into chemical energy. Here we propose a bio-inspired blueprint for a novel type of laser that aims to upgrade unconcentrated natural sunlight into a coherent laser beam. Our proposed design constitutes an improvement of several orders of magnitude over existing comparable technologies: state-of-the-art solar-pumped lasers operate above 1000 suns (corresponding to 1000 times the natural sunlight power). To achieve lasing with the extremely dilute power provided by sunlight, we propose a laser medium composed of molecular aggregates inspired by the architecture of photosynthetic complexes. Such complexes, by exploiting a highly symmetric arrangement of molecules organized in a hierarchy of energy scales, exhibit a very large internal efficiency in harvesting photons from a power source as dilute as natural sunlight. Specifically, we consider substituting the reaction center of the photosynthetic complexes of purple bacteria with a suitably engineered molecular dimer composed of two strongly coupled chromophores. We show that, if pumped by the surrounding photosynthetic complex, which efficiently collects and concentrates solar energy, the core dimer structure can achieve population inversion and reach the lasing threshold under natural sunlight. The design principles proposed here will also pave the way for developing other bio-inspired quantum devices.
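To make the notions of population inversion and lasing threshold concrete, here is a toy steady-state rate-equation sketch for an idealized gain medium; all rates are arbitrary illustrative values and the model is a textbook caricature, not the proposed photosynthetic dimer.

```python
# Toy steady-state laser rate equations, illustrating how output appears only
# above a threshold pump rate. Parameters are arbitrary illustrative values.

def steady_state_photons(pump, gamma=1.0, kappa=0.5, g=1.0, n_total=100.0):
    """Steady state of the coupled equations
         dN/dt = pump * (n_total - N) - gamma * N - g * N * s   (inversion N)
         ds/dt = g * N * s - kappa * s                          (photons s)
    solved for the photon number s >= 0."""
    # Above threshold the inversion clamps at N = kappa / g, and the photon
    # number follows from setting dN/dt = 0; below threshold s = 0.
    n_clamp = kappa / g
    s = (pump * (n_total - n_clamp) - gamma * n_clamp) / (g * n_clamp)
    return max(s, 0.0)

# Below the threshold pump rate there is no coherent output; above it, the
# output grows with the pump.
print(steady_state_photons(0.001), steady_state_photons(0.1))
```

The abstract's claim can be read in these terms: the photosynthetic antenna raises the effective pump rate seen by the dimer high enough that the threshold is crossed even under unconcentrated sunlight.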
Deep networks have recently been shown to be vulnerable to universal perturbations: there exist very small image-agnostic perturbations that cause most natural images to be misclassified by such classifiers. In this paper, we propose the first quantitative analysis of the robustness of classifiers to universal perturbations, and draw a formal link between robustness to universal perturbations and the geometry of the decision boundary. Specifically, we establish theoretical bounds on the robustness of classifiers under two decision-boundary models (flat and curved). We show in particular that the robustness of deep networks to universal perturbations is driven by a key property of their curvature: there exist shared directions along which the decision boundary of deep networks is systematically positively curved. Under such conditions, we prove the existence of small universal perturbations. Our analysis further provides a novel geometric method for computing universal perturbations, in addition to explaining their properties.
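A universal perturbation is characterized by its fooling rate across many images. The following sketch measures this quantity on a toy linear classifier with synthetic data, contrasting a single shared direction with a random perturbation of the same norm; everything here (weights, data, the chosen direction) is an illustrative assumption, not the paper's geometric method.

```python
import numpy as np

rng = np.random.default_rng(1)
d, n_classes, n_images = 200, 5, 1000
W = rng.normal(size=(n_classes, d))      # toy linear classifier
X = rng.normal(size=(n_images, d))       # synthetic stand-in "images"
labels = (W @ X.T).argmax(axis=0)

def fooling_rate(v):
    """Fraction of images whose predicted label changes under the single,
    image-agnostic perturbation v."""
    return float(((W @ (X + v).T).argmax(axis=0) != labels).mean())

# One shared direction (here, aligned with a class weight vector), with a
# norm far smaller than the typical image norm of about sqrt(d) ~ 14.
v_universal = 2.0 * W[0] / np.linalg.norm(W[0])
v_random = rng.normal(size=d)
v_random *= np.linalg.norm(v_universal) / np.linalg.norm(v_random)

print(fooling_rate(v_universal), fooling_rate(v_random))
```

The contrast is the phenomenon the paper analyzes: a small perturbation along the right shared direction flips far more labels than a random perturbation of identical size.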
The mechanisms used by the human auditory system during sound reconstruction are still a matter of debate. The purpose of this study is to propose a mathematical model of sound reconstruction based on the functional architecture of the primary auditory cortex (A1). The model is inspired by the geometrical modelling of vision, which has undergone great development over the last ten years. There are, however, fundamental dissimilarities, due to the different role played by time and the different group of symmetries. The algorithm transforms the degraded sound into an image in the time-frequency domain via a short-time Fourier transform. Such an image is then lifted to the Heisenberg group and reconstructed via a Wilson-Cowan integro-differential equation. Preliminary numerical experiments are provided, showing the good reconstruction properties of the algorithm on synthetic sounds concentrated around two frequencies.
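The first stage of this pipeline, turning a sound into a time-frequency image via a short-time Fourier transform, can be sketched as follows; the window length, hop size, and sampling rate are illustrative assumptions, and the Heisenberg-group lift and Wilson-Cowan evolution are not reproduced here.

```python
import numpy as np

def stft_magnitude(signal, win=256, hop=64):
    """Magnitude short-time Fourier transform: rows are frequency bins,
    columns are time frames."""
    window = np.hanning(win)
    frames = np.array([signal[i:i + win] * window
                       for i in range(0, len(signal) - win + 1, hop)])
    return np.abs(np.fft.rfft(frames, axis=1)).T

fs = 8000                      # sampling rate (illustrative)
t = np.arange(fs) / fs         # one second of signal
# A synthetic sound concentrated around two frequencies, as in the
# numerical experiments mentioned above.
sound = np.sin(2 * np.pi * 440 * t) + 0.5 * np.sin(2 * np.pi * 880 * t)
image = stft_magnitude(sound)
print(image.shape)             # (frequency bins, time frames)
```

The resulting image, with energy concentrated in two horizontal bands, is the object that the lift and the Wilson-Cowan dynamics then operate on.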