
Temporal State Machines: Using temporal memory to stitch time-based graph computations

Posted by: Advait Madhavan
Publication date: 2020
Research field: Informatics Engineering
Paper language: English





Race logic, an arrival-time-coded logic family, has demonstrated energy and performance improvements for applications ranging from dynamic programming to machine learning. However, the ad hoc mappings of algorithms into hardware result in custom architectures that are difficult to generalize. We systematize the development of race logic by associating it with the mathematical field of tropical algebra. This association between the mathematical primitives of tropical algebra and generalized race logic computations guides the design of temporally coded tropical circuits. It also serves as a framework for expressing high-level timing-based algorithms. This abstraction, when combined with temporal memory, allows for the systematic generalization of race logic by making it possible to partition feed-forward computations into stages and organize them into a state machine. We leverage analog memristor-based temporal memories to design such a state machine that operates purely on time-coded wavefronts. We implement a version of Dijkstra's algorithm to evaluate this temporal state machine. This demonstration shows the promise of expanding the expressibility of temporal computing, enabling it to deliver significant energy and throughput advantages.
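To make the tropical-algebra view concrete, here is a minimal software sketch (not the paper's hardware) of the min-plus semiring that race logic realizes in the time domain: a first-arrival merge plays the role of tropical addition (min), a programmed delay plays the role of tropical multiplication (+), and iterating one feed-forward relaxation stage to a fixed point, roughly as a temporal state machine would by re-injecting stored wavefronts, yields the same single-source shortest-path answer that Dijkstra's algorithm would. The toy graph, node names, and the relax() helper are illustrative assumptions, not the paper's design.

# Tropical (min-plus) primitives as race logic would realize them in time:
# OR of two wavefront edges = min (first spike wins), a delay line = ordinary +.
INF = float("inf")

def trop_add(a, b):      # tropical addition  -> min (first arrival)
    return min(a, b)

def trop_mul(a, b):      # tropical multiplication -> ordinary + (a delay)
    return a + b

def relax(arrival, edges):
    """One feed-forward 'stage': propagate a wavefront of arrival times
    through weighted edges and merge it with the current wavefront."""
    new = dict(arrival)
    for (u, v), w in edges.items():
        new[v] = trop_add(new[v], trop_mul(arrival[u], w))
    return new

# Toy 4-node graph; edge weights play the role of programmed delays.
edges = {("s", "a"): 2, ("s", "b"): 5, ("a", "b"): 1, ("a", "t"): 6, ("b", "t"): 2}
arrival = {"s": 0, "a": INF, "b": INF, "t": INF}

# Iterating the stage until the wavefront stops changing mimics the state
# machine re-injecting the stored wavefront; the fixed point is the
# single-source shortest-path vector Dijkstra's algorithm would return.
prev = None
while arrival != prev:
    prev, arrival = arrival, relax(arrival, edges)

print(arrival)   # {'s': 0, 'a': 2, 'b': 3, 't': 5}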




Read also

We extend the reach of temporal computing schemes by developing a memory for multi-channel temporal patterns or wavefronts. This temporal memory re-purposes conventional one-transistor-one-resistor (1T1R) memristor crossbars for use in an arrival-time coded, single-event-per-wire temporal computing environment. The memristor resistances and the associated circuit capacitances provide the necessary time constants, enabling the memory array to store and retrieve wavefronts. The retrieval operation of such a memory is naturally in the temporal domain and the resulting wavefronts can be used to trigger time-domain computations. While recording the wavefronts can be done using standard digital techniques, that approach has substantial translation costs between temporal and digital domains. To avoid these costs, we propose a spike timing dependent plasticity (STDP) inspired wavefront recording scheme to capture incoming wavefronts. We simulate these designs with experimentally validated memristor models and analyze the effects of memristor non-idealities on the operation of such a memory.
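As a rough intuition for the delay-based storage described above, the following toy model (my assumption, not the paper's 1T1R circuit) maps each stored arrival time to a programmed resistance through a single RC charging delay, so that "reading" the column replays the wavefront. The capacitance, supply, and threshold values are illustrative only.

# Toy RC model of a 1T1R temporal-memory column (illustrative assumption).
import math

C = 100e-15           # line capacitance per cell, farads (assumed)
VDD, VTH = 1.0, 0.5   # supply and sensing threshold, volts (assumed)
K = math.log(VDD / (VDD - VTH))   # RC multiplier for a charge-up to VTH

def delay_to_resistance(t):
    """Resistance to program so the cell output crosses VTH after time t."""
    return t / (C * K)

def resistance_to_delay(r):
    return r * C * K

# 'Record' a 4-channel wavefront (arrival times in ns), then 'replay' it.
wavefront_ns = [1.0, 3.5, 2.0, 4.0]
stored_R = [delay_to_resistance(t * 1e-9) for t in wavefront_ns]
replayed_ns = [resistance_to_delay(r) * 1e9 for r in stored_R]
print([round(t, 2) for t in replayed_ns])   # [1.0, 3.5, 2.0, 4.0]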
The emerging brain-inspired computing paradigm known as hyperdimensional computing (HDC) has been shown to provide a lightweight learning framework for various cognitive tasks compared to the widely used deep learning-based approaches. Spatio-temporal (ST) signal processing, which encompasses biosignals such as electromyography (EMG) and electroencephalography (EEG), is one family of applications that could benefit from an HDC-based learning framework. At the core of HDC lie manipulations and comparisons of large bit patterns, which are inherently ill-suited to conventional computing platforms based on the von Neumann architecture. In this work, we propose an architecture for ST signal processing within the HDC framework using predominantly in-memory compute arrays. In particular, we introduce a methodology for the in-memory hyperdimensional encoding of ST data to be used together with an in-memory associative search module. We show that the in-memory HDC encoder for ST signals offers at least 1.80x energy efficiency gains, 3.36x area gains, as well as 9.74x throughput gains compared with a dedicated digital hardware implementation. At the same time, it achieves a peak classification accuracy within 0.04% of that of the baseline HDC framework.
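For readers unfamiliar with HDC, the sketch below illustrates the generic primitives this abstract builds on: XOR binding, majority-vote bundling, a permutation for temporal order, and associative search by Hamming distance. The dimensionality, channel/level codebooks, and the encode_window() helper are illustrative assumptions and do not model the in-memory implementation.

# Generic HDC primitives (illustrative sketch, not the paper's in-memory design).
import numpy as np

D = 10_000                                  # hypervector dimensionality (assumed)
rng = np.random.default_rng(0)

def random_hv():
    return rng.integers(0, 2, D, dtype=np.uint8)

def bind(a, b):                             # XOR binding of two hypervectors
    return a ^ b

def bundle(hvs):                            # element-wise majority vote (ties -> 0)
    return (np.sum(hvs, axis=0) > len(hvs) / 2).astype(np.uint8)

def permute(a, n=1):                        # cyclic shift encodes temporal position
    return np.roll(a, n)

def hamming(a, b):
    return int(np.count_nonzero(a != b))

# Spatial encoding: bind each channel's id vector with its quantized level.
channel_hv = {c: random_hv() for c in range(4)}
level_hv = {l: random_hv() for l in range(8)}

def encode_window(samples):
    """samples: list of per-timestep dicts {channel: quantized level}."""
    timestep_hvs = []
    for t, frame in enumerate(samples):
        spatial = bundle([bind(channel_hv[c], level_hv[l]) for c, l in frame.items()])
        timestep_hvs.append(permute(spatial, t))   # time order via permutation
    return bundle(timestep_hvs)

# Associative memory: nearest stored class prototype in Hamming distance.
prototypes = {"rest": random_hv(), "grasp": random_hv()}
query = encode_window([{0: 3, 1: 5, 2: 1, 3: 7}, {0: 2, 1: 5, 2: 0, 3: 6}])
print(min(prototypes, key=lambda k: hamming(prototypes[k], query)))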
Many road accidents occur due to distracted drivers. Today, driver monitoring is essential even for the latest autonomous vehicles, to alert distracted drivers so that they can take over control of the vehicle in case of emergency. In this paper, a spatio-temporal approach is applied to classify drivers' distraction level and movement decisions using convolutional neural networks (CNNs). We approach this problem as action recognition to benefit from temporal information in addition to spatial information. Our approach relies on features extracted from sparsely selected frames of an action using a pre-trained BN-Inception network. Experiments show that our approach outperforms the state-of-the-art results on the Distracted Driver Dataset (96.31%), with an accuracy of 99.10% for 10-class classification while providing real-time performance. We also analyzed the impact of fusing RGB and optical flow modalities with a very recent data-level fusion strategy. The results on the Distracted Driver and Brain4Cars datasets show that fusion of these modalities further increases the accuracy.
We propose a dedicated winner-take-all circuit to efficiently implement the intra-column competition between cells in Hierarchical Temporal Memory, which is a crucial part of the algorithm. All inputs and outputs are charge-based for compatibility with standard CMOS. The circuit incorporates memristors for competitive advantage to emulate a column with a cell in a predictive state. The circuit can also detect column bursting by passive averaging and comparison of the cell outputs. The proposed spintronic devices and circuit are thoroughly described, and a series of simulations is used to predict the performance. The simulations indicate that the circuit can complete a nine-cell, nine-input competition operation in under 15 ns at a cost of about 25 pJ.
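A purely behavioral sketch of the intra-column competition described above (not the proposed circuit): the strongest-driven cell wins, a configurable bias stands in for the memristor-backed advantage of a cell in a predictive state, and a simple margin test against the average stands in for the passive-averaging burst detection. The bias and margin values are arbitrary assumptions.

# Behavioral model of a winner-take-all column competition (illustrative only).
def column_compete(inputs, predictive, bias=0.2, burst_margin=0.05):
    """inputs: list of cell drive levels; predictive: list of bools."""
    scores = [x + (bias if p else 0.0) for x, p in zip(inputs, predictive)]
    avg = sum(scores) / len(scores)
    winner = max(range(len(scores)), key=lambda i: scores[i])
    bursting = scores[winner] - avg < burst_margin   # no clear winner -> burst
    return winner, bursting

print(column_compete([0.3, 0.5, 0.4], [False, False, True]))   # (2, False)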
Extracting spatial-temporal knowledge from data is useful in many applications. It is important that the obtained knowledge be human-interpretable and amenable to formal analysis. In this paper, we propose a method that trains neural networks to learn spatial-temporal properties in the form of weighted graph-based signal temporal logic (wGSTL) formulas. For learning wGSTL formulas, we introduce a flexible wGSTL formula structure in which the user's preference can be applied to the inferred wGSTL formulas. In the proposed framework, each neuron of the neural networks corresponds to a subformula in a flexible wGSTL formula structure. We initially train a neural network to learn the wGSTL operators and then train a second neural network to learn the parameters of a flexible wGSTL formula structure. We use a COVID-19 dataset and a rain-prediction dataset to evaluate the performance of the proposed framework and algorithms. We compare the performance of the proposed framework with three baseline classification methods: K-nearest neighbors, decision trees, and artificial neural networks. The classification accuracy obtained by the proposed framework is comparable with that of the baseline classification methods.