
A flexible and fast PyTorch toolkit for simulating training and inference on analog crossbar arrays

Published by Malte J. Rasch
Publication date: 2021
Research field: Informatics engineering
Paper language: English





We introduce the IBM Analog Hardware Acceleration Kit, a new and first of a kind open source toolkit to simulate analog crossbar arrays in a convenient fashion from within PyTorch (freely available at https://github.com/IBM/aihwkit). The toolkit is under active development and is centered around the concept of an analog tile which captures the computations performed on a crossbar array. Analog tiles are building blocks that can be used to extend existing network modules with analog components and compose arbitrary artificial neural networks (ANNs) using the flexibility of the PyTorch framework. Analog tiles can be conveniently configured to emulate a plethora of different analog hardware characteristics and their non-idealities, such as device-to-device and cycle-to-cycle variations, resistive device response curves, and weight and output noise. Additionally, the toolkit makes it possible to design custom unit cell configurations and to use advanced analog optimization algorithms such as Tiki-Taka. Moreover, the backward and update behavior can be set to ideal to enable hardware-aware training features for chips that target inference acceleration only. To evaluate the inference accuracy of such chips over time, we provide statistical programming noise and drift models calibrated on phase-change memory hardware. Our new toolkit is fully GPU accelerated and can be used to conveniently estimate the impact of material properties and non-idealities of future analog technology on the accuracy for arbitrary ANNs.
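As a rough illustration of the analog-tile workflow described above, the sketch below builds and trains a single analog fully-connected layer, following the style of the simple-layer examples shipped with the aihwkit repository. Exact module paths, class names, and defaults differ between toolkit versions, so treat this as an assumption-laden sketch rather than canonical usage.

```python
# Sketch only: import paths follow the aihwkit examples but may differ by version.
import torch
from torch.nn.functional import mse_loss

from aihwkit.nn import AnalogLinear
from aihwkit.optim import AnalogSGD
from aihwkit.simulator.configs import SingleRPUConfig
from aihwkit.simulator.configs.devices import ConstantStepDevice

# An "analog tile" configuration: a single crossbar with a simple resistive-device
# model; other device models and non-idealities are selected through the same
# rpu_config mechanism.
rpu_config = SingleRPUConfig(device=ConstantStepDevice())

# Analog layers are drop-in replacements for their torch.nn counterparts.
model = AnalogLinear(4, 2, bias=True, rpu_config=rpu_config)

x = torch.tensor([[0.1, 0.2, 0.4, 0.3], [0.2, 0.1, 0.1, 0.3]])
y = torch.tensor([[1.0, 0.5], [0.7, 0.3]])

opt = AnalogSGD(model.parameters(), lr=0.1)
opt.regroup_param_groups(model)   # required by some toolkit versions

for _ in range(100):
    opt.zero_grad()
    loss = mse_loss(model(x), y)
    loss.backward()
    opt.step()                    # weight update is simulated on the analog tile
```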




Read also

Yu Pan, Maolin Wang, Zenglin Xu (2021)
Tensor Decomposition Networks (TDNs) prevail for their inherently compact architectures. For convenience, we present a toolkit named TedNet, based on the PyTorch framework, to give more researchers a flexible way to exploit TDNs. TedNet implements five kinds of tensor decomposition (i.e., CANDECOMP/PARAFAC (CP), Block-Term Tucker (BT), Tucker-2, Tensor Train (TT) and Tensor Ring (TR)) on traditional deep neural layers, namely the convolutional layer and the fully-connected layer. By utilizing these basic layers, it is simple to construct a variety of TDNs such as TR-ResNet, TT-LSTM, etc. TedNet is available at https://github.com/tnbar/tednet.
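For a sense of what a tensor-decomposed layer does, here is a self-contained PyTorch sketch of a tensor-train (TT) factorized fully-connected forward pass with three cores. It illustrates only the underlying idea and does not use TedNet's actual classes; all shapes and ranks are made-up illustration values.

```python
import torch

# Conceptual tensor-train (TT) factorization of a dense layer with three cores.
# This is NOT TedNet's API; shapes and ranks are arbitrary illustration values.
batch = 8
in_modes, out_modes, ranks = (4, 4, 4), (5, 5, 5), (1, 3, 3, 1)   # 64 -> 125 features

cores = [
    0.1 * torch.randn(ranks[k], in_modes[k], out_modes[k], ranks[k + 1])
    for k in range(3)
]

x = torch.randn(batch, *in_modes)      # input reshaped into its factor modes (4, 4, 4)

# Contract the input with the chain of cores; the boundary ranks (a, e) have size 1.
y = torch.einsum('bijk,aipc,cjqd,dkre->bpqr', x, *cores).reshape(batch, -1)
print(y.shape)                         # torch.Size([8, 125])

# Compression: 300 core parameters versus 64 * 125 = 8000 for the dense weight.
print(sum(core.numel() for core in cores))
```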
In this work we propose an effective preconditioning technique to accelerate the steady-state simulation of large-scale memristor crossbar arrays (MCAs). We exploit the structural regularity of MCAs to develop a specially-crafted preconditioner that can be efficiently evaluated utilizing tensor products and block matrix inversion. Numerical experiments demonstrate the efficacy of the proposed technique compared to mainstream preconditioners.
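The preconditioner itself is not spelled out in the abstract, but the general trick of exploiting tensor-product structure can be sketched: if the dominant part of the system matrix is a Kronecker product, its inverse can be applied with two small solves instead of one large one. The toy example below (made-up sizes and matrices, plain NumPy/SciPy, not the authors' method) shows the pattern inside a preconditioned conjugate-gradient solve.

```python
import numpy as np
from scipy.sparse.linalg import cg, LinearOperator

# Toy crossbar-like system: dominant Kronecker part plus a small perturbation.
# Sizes and matrices are arbitrary; this only illustrates the structural idea.
rng = np.random.default_rng(0)
n, m = 40, 30

def spd_tridiag(k):
    """Simple symmetric positive-definite tridiagonal 'conductance-like' matrix."""
    return 2.1 * np.eye(k) - np.eye(k, k=1) - np.eye(k, k=-1)

A, B = spd_tridiag(n), spd_tridiag(m)
G = np.kron(A, B) + np.diag(0.05 * rng.random(n * m))
b = rng.random(n * m)

# Preconditioner M = kron(A, B).  Using the (row-major) identity
# kron(A, B) @ X.ravel() == (A @ X @ B.T).ravel(), M^{-1} needs only small solves.
A_inv, B_inv = np.linalg.inv(A), np.linalg.inv(B)

def apply_minv(v):
    V = v.reshape(n, m)
    return (A_inv @ V @ B_inv).ravel()     # B is symmetric, so B_inv.T == B_inv

M = LinearOperator((n * m, n * m), matvec=apply_minv)

def cg_iters(mat, rhs, M=None):
    iters = []
    _, info = cg(mat, rhs, M=M, callback=lambda xk: iters.append(1))
    assert info == 0, "CG did not converge"
    return len(iters)

print("plain CG iterations:         ", cg_iters(G, b))
print("preconditioned CG iterations:", cg_iters(G, b, M=M))
```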
Scientists and engineers employ stochastic numerical simulators to model empirically observed phenomena. In contrast to purely statistical models, simulators express scientific principles that provide powerful inductive biases, improve generalization to new data or scenarios and allow for fewer, more interpretable and domain-relevant parameters. Despite these advantages, tuning a simulator's parameters so that its outputs match data is challenging. Simulation-based inference (SBI) seeks to identify parameter sets that a) are compatible with prior knowledge and b) match empirical observations. Importantly, SBI does not seek to recover a single best data-compatible parameter set, but rather to identify all high-probability regions of parameter space that explain observed data, and thereby to quantify parameter uncertainty. In Bayesian terminology, SBI aims to retrieve the posterior distribution over the parameters of interest. In contrast to conventional Bayesian inference, SBI is also applicable when one can run model simulations, but no formula or algorithm exists for evaluating the probability of data given parameters, i.e. the likelihood. We present $\texttt{sbi}$, a PyTorch-based package that implements SBI algorithms based on neural networks. $\texttt{sbi}$ facilitates inference on black-box simulators for practising scientists and engineers by providing a unified interface to state-of-the-art algorithms together with documentation and tutorials.
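To give a flavour of the package's interface, the snippet below follows the pattern of the sbi getting-started tutorial on a toy simulator. The simulator and all numbers are invented for illustration, and the exact import locations can differ between sbi versions.

```python
import torch
from sbi import utils
from sbi.inference.base import infer   # location of `infer` may differ across sbi versions

# Toy setup: 3 unknown parameters with a uniform prior.
num_dim = 3
prior = utils.BoxUniform(low=-2 * torch.ones(num_dim), high=2 * torch.ones(num_dim))

def simulator(theta):
    """Stand-in stochastic simulator: data are a noisy shift of the parameters."""
    return theta + 1.0 + 0.1 * torch.randn_like(theta)

# Train a neural posterior estimator (SNPE) from simulated (theta, x) pairs.
posterior = infer(simulator, prior, method="SNPE", num_simulations=500)

# Condition on an observation and sample the posterior over the parameters.
x_obs = torch.zeros(num_dim)
samples = posterior.sample((1000,), x=x_obs)
print(samples.mean(dim=0))              # should sit near x_obs - 1
```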
We develop a highly efficient numerical method to simulate small-amplitude flapping propulsion by a flexible wing in a nearly inviscid fluid. We allow the wing's elastic modulus and mass density to vary arbitrarily, with an eye towards optimizing these distributions for propulsive performance. The method to determine the wing kinematics is based on Chebyshev collocation of the 1D beam equation as coupled to the surrounding 2D fluid flow. Through small-amplitude analysis of the Euler equations (with trailing-edge vortex shedding), the complete hydrodynamics can be represented by a nonlocal operator that acts on the 1D wing kinematics. A class of semi-analytical solutions permits fast evaluation of this operator with $O(N \log N)$ operations, where $N$ is the number of collocation points on the wing. This is in contrast to the minimum $O(N^2)$ operations required by a direct 2D fluid solver. The coupled wing-fluid problem is thus recast as a PDE with a nonlocal operator, which we solve using a preconditioned iterative method. These techniques yield a solver of near-optimal complexity, $O(N \log N)$, allowing one to rapidly search the infinite-dimensional parameter space of all possible material distributions and even perform optimization over this space.
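The coupled wing-fluid solver is far beyond a short snippet, but the Chebyshev-collocation ingredient can be shown in isolation. The sketch below builds the standard Chebyshev differentiation matrix (the classic construction from Trefethen's Spectral Methods in MATLAB) and solves a simple two-point boundary-value problem; it is a generic illustration, not the authors' beam-fluid formulation.

```python
import numpy as np

def cheb(N):
    """Chebyshev points and differentiation matrix (standard Trefethen construction)."""
    x = np.cos(np.pi * np.arange(N + 1) / N)
    c = np.hstack([2.0, np.ones(N - 1), 2.0]) * (-1.0) ** np.arange(N + 1)
    dX = x[:, None] - x[None, :]
    D = np.outer(c, 1.0 / c) / (dX + np.eye(N + 1))
    D -= np.diag(D.sum(axis=1))
    return D, x

# Model problem (not the beam equation): u''(x) = exp(4x) on [-1, 1], u(+-1) = 0.
N = 16
D, x = cheb(N)
D2 = (D @ D)[1:N, 1:N]                  # enforce Dirichlet BCs by dropping boundary rows/cols
u = np.zeros(N + 1)
u[1:N] = np.linalg.solve(D2, np.exp(4 * x[1:N]))

exact = (np.exp(4 * x) - np.sinh(4) * x - np.cosh(4)) / 16
print(np.max(np.abs(u - exact)))        # spectral accuracy: tiny error already at N = 16
```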
Limbo is an open-source C++11 library for Bayesian optimization which is designed to be both highly flexible and very fast. It can be used to optimize functions for which the gradient is unknown, evaluations are expensive, and runtime cost matters (e.g., on embedded systems or robots). Benchmarks on standard functions show that Limbo is about 2 times faster than BayesOpt (another C++ library) for a similar accuracy.
