
Low-complexity Architecture for AR(1) Inference

Added by Renato J. Cintra
Publication date: 2020
Language: English





In this Letter, we propose a low-complexity estimator for the correlation coefficient based on the signed $\operatorname{AR}(1)$ process. The introduced approximation is suitable for implementation in low-power hardware architectures. Monte Carlo simulations reveal that the proposed estimator performs comparably to competing methods in the literature, with a maximum error on the order of $10^{-2}$. However, the hardware implementation of the introduced method presents considerable advantages in several relevant metrics, offering more than a 95% reduction in dynamic power and doubling the maximum operating frequency when compared to the reference method.
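
The abstract does not reproduce the estimator's formula, so the following is only a minimal NumPy sketch of a sign-based (arcsine-law) correlation estimator of the kind described above; the function names, the arcsine inversion, and the comparison against the ordinary sample autocorrelation are illustrative choices of ours, not necessarily the exact method of the Letter.

    import numpy as np

    def signed_ar1_estimate(x):
        """Sign-based (arcsine-law) estimate of the AR(1) coefficient.

        For a zero-mean Gaussian AR(1) process, P(x_t * x_{t-1} > 0)
        = 1/2 + arcsin(rho)/pi, so rho can be recovered from the fraction
        of sign agreements using only comparisons and a single sine.
        """
        s = np.sign(x[1:]) * np.sign(x[:-1])      # +1 on sign agreement, -1 otherwise
        p_agree = np.mean(s > 0)                  # empirical agreement probability
        return np.sin(np.pi * (p_agree - 0.5))    # invert the arcsine law

    def sample_autocorr(x):
        """Conventional lag-1 sample autocorrelation (reference estimator)."""
        x = x - x.mean()
        return np.dot(x[1:], x[:-1]) / np.dot(x, x)

    # Small Monte Carlo check, mirroring the kind of study the Letter reports.
    rng = np.random.default_rng(0)
    rho, n, runs = 0.7, 2048, 200
    err_sign, err_ref = [], []
    for _ in range(runs):
        e = rng.standard_normal(n)
        x = np.empty(n)
        x[0] = e[0] / np.sqrt(1 - rho**2)         # start in stationarity
        for t in range(1, n):
            x[t] = rho * x[t - 1] + e[t]
        err_sign.append(abs(signed_ar1_estimate(x) - rho))
        err_ref.append(abs(sample_autocorr(x) - rho))
    print(f"mean |error|: sign-based {np.mean(err_sign):.4f}, sample ACF {np.mean(err_ref):.4f}")

The appeal of such an estimator for low-power hardware is that the data path reduces to sign comparisons and a counter, with the single sine implementable as a small lookup table.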



Related research

This paper introduces a collection of scaling methods for generating $2N$-point DCT-II approximations based on $N$-point low-complexity transformations. Such scaling is based on the Hou recursive matrix factorization of the exact $2N$-point DCT-II matrix. Encompassing the widely employed Jridi-Alfalou-Meher (JAM) scaling method, the proposed techniques are shown to produce DCT-II approximations that outperform the transforms resulting from the JAM scaling method according to total error energy and mean squared error. Orthogonality conditions are derived and an extensive error analysis based on statistical simulation demonstrates the good performance of the introduced scaling methods. A hardware implementation is also provided demonstrating the competitiveness of the proposed methods when compared to the JAM scaling method.
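
The Hou factorization and the JAM scaling matrices are not reproduced in this summary, so the sketch below is limited to the evaluation side: given any candidate $2N$-point approximation, it builds the exact DCT-II matrix and computes two usual figures of merit. The $\pi$-scaled error energy and the Markov-1 ($\rho = 0.95$) MSE weighting are our assumptions about the standard definitions in this literature and may differ in detail from the paper.

    import numpy as np

    def dct2_matrix(n):
        """Exact n-point DCT-II matrix (orthonormal convention)."""
        k = np.arange(n).reshape(-1, 1)
        i = np.arange(n).reshape(1, -1)
        c = np.sqrt(2.0 / n) * np.cos(np.pi * k * (2 * i + 1) / (2 * n))
        c[0, :] = np.sqrt(1.0 / n)
        return c

    def markov1_covariance(n, rho=0.95):
        """Covariance of a first-order Markov (AR(1)) signal, the usual test model."""
        idx = np.arange(n)
        return rho ** np.abs(idx[:, None] - idx[None, :])

    def figures_of_merit(c_exact, c_hat, rho=0.95):
        """Error energy and Markov-1 MSE of an approximation c_hat (assumed definitions)."""
        d = c_exact - c_hat
        total_error_energy = np.pi * np.linalg.norm(d, "fro") ** 2
        r = markov1_covariance(c_exact.shape[0], rho)
        mse = np.trace(d @ r @ d.T) / c_exact.shape[0]
        return total_error_energy, mse

    # Usage: compare any candidate 2N-point approximation against the exact DCT-II.
    N2 = 16
    c_exact = dct2_matrix(N2)
    c_hat = np.round(c_exact * 2) / 2            # crude stand-in for a low-complexity approximation
    print(figures_of_merit(c_exact, c_hat))
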
Deep neural networks have become the standard approach to building reliable Natural Language Processing (NLP) applications, ranging from Neural Machine Translation (NMT) to dialogue systems. However, improving accuracy by increasing the model size requires a large number of hardware computations, which can slow down NLP applications significantly at inference time. To address this issue, we propose a novel vector-vector-matrix architecture (VVMA), which greatly reduces the latency at inference time for NMT. This architecture takes advantage of specialized hardware that has low-latency vector-vector operations and higher-latency vector-matrix operations. It also reduces the number of parameters and FLOPs for virtually all models that rely on efficient matrix multipliers without significantly impacting accuracy. We present empirical results suggesting that our framework can reduce the latency of sequence-to-sequence and Transformer models used for NMT by a factor of four. Finally, we show evidence suggesting that our VVMA extends to other domains, and we discuss novel hardware for its efficient use.
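
As a rough illustration of the computational pattern described above, cheap vector-vector (elementwise) operations wrapped around a more expensive vector-matrix product with a shared matrix, consider the toy layer below; the specific factorization (per-layer vectors a and b around a shared matrix M) is our own reading and not necessarily the paper's exact VVMA decomposition.

    import numpy as np

    def vvm_layer(x, a, M, b):
        """One vector-vector-matrix style layer.

        Two cheap elementwise (vector-vector) products surround a single
        vector-matrix product; if M is shared across layers, only the
        vectors a and b are layer-specific parameters.
        """
        return b * (M @ (a * x))

    rng = np.random.default_rng(1)
    d = 512
    M = rng.standard_normal((d, d)) / np.sqrt(d)           # shared (possibly hardware-resident) matrix
    a, b = rng.standard_normal(d), rng.standard_normal(d)  # per-layer vectors
    x = rng.standard_normal(d)
    y = vvm_layer(x, a, M, b)
    print(y.shape)  # (512,)

If M is fixed in hardware, only the vectors are layer-specific, which is consistent with the claimed reduction in parameters and FLOPs.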
The Large Intelligent Surface (LIS) concept has emerged recently as a new paradigm for wireless communication, remote sensing, and positioning. It consists of a continuous radiating surface placed relatively close to the users, which is able to communicate with them by independent transmission and reception (replacing base stations). Despite its potential, there are many challenges from an implementation point of view, with the interconnection data rate and computational complexity being the most relevant. Distributed processing techniques and hierarchical architectures are expected to play a vital role in addressing these while ensuring scalability. In this paper we perform algorithm-architecture codesign and analyze the hardware requirements and architecture trade-offs for a discrete LIS performing uplink detection. By doing this, we expect to give concrete case studies and guidelines for the efficient implementation of LIS systems.
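
The distributed and hierarchical processing referred to above can be illustrated with a toy uplink detector in which each panel of a discrete LIS forwards only small local statistics instead of raw antenna samples; the panel partitioning, matched-filter partial sums, and central zero-forcing step below are illustrative assumptions rather than the paper's architecture.

    import numpy as np

    rng = np.random.default_rng(2)
    n_antennas, n_users, n_panels = 256, 8, 8     # discrete LIS split into panels
    H = (rng.standard_normal((n_antennas, n_users))
         + 1j * rng.standard_normal((n_antennas, n_users))) / np.sqrt(2)
    s = (rng.integers(0, 2, n_users) * 2 - 1).astype(complex)        # BPSK symbols
    y = H @ s + 0.05 * (rng.standard_normal(n_antennas)
                        + 1j * rng.standard_normal(n_antennas))

    # Each panel only sees its own antennas and forwards two small quantities:
    # the local Gramian H_p^H H_p and the local matched-filter output H_p^H y_p.
    gram = np.zeros((n_users, n_users), dtype=complex)
    mf = np.zeros(n_users, dtype=complex)
    for Hp, yp in zip(np.array_split(H, n_panels), np.array_split(y, n_panels)):
        gram += Hp.conj().T @ Hp        # partial Gramian from this panel
        mf += Hp.conj().T @ yp          # partial matched-filter sum from this panel

    s_hat = np.sign(np.linalg.solve(gram, mf).real)   # central zero-forcing step + BPSK slicing
    print("symbol errors:", int(np.sum(s_hat != s.real)))
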
A general asymptotic theory is given for the panel data AR(1) model with time series independent across cross sections. The theory covers stationary, nearly non-stationary, unit root, mildly integrated, mildly explosive, and explosive processes. It is assumed that the cross-sectional and time-series dimensions are $N$ and $T$, respectively. The results in this paper illustrate that, whichever regime the process falls in, with an appropriate regularization the least squares estimator of the autoregressive coefficient converges to a normal distribution at a rate of at least $O(N^{-1/3})$. Since the variance is the key to characterizing the normal distribution, it is important to discuss the variance of the least squares estimator. We show that when the autoregressive coefficient $\rho$ satisfies $|\rho|<1$, the variance declines at the rate $O((NT)^{-1/2})$, while the rate changes to $O(N^{-1/2}T^{-1})$ when $\rho=1$ and to $O(N^{-1/2}\rho^{-T+2})$ when $|\rho|>1$. $\rho=1$ is the critical point at which the convergence rate changes radically. The transition is studied by letting $\rho$ depend on $T$ and tend to $1$. An interesting phenomenon discovered in this paper is that, in the explosive case, the least squares estimator of the autoregressive coefficient has a standard normal limiting distribution in the panel data case, while it may not have a limiting distribution in the univariate time series case.
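
For concreteness, in the standard panel AR(1) setting the pooled least squares estimator is simply a ratio of pooled cross-products; the short simulation below is a minimal sketch under Gaussian innovations in the stationary regime $|\rho|<1$ (only one of the regimes the theory covers) and shows the estimator tightening as $N$ grows with $T$ fixed.

    import numpy as np

    def pooled_ls_rho(Y):
        """Pooled least squares estimate of the common AR(1) coefficient.

        Y has shape (N, T+1): N independent cross-sectional units, each an
        AR(1) time series. The estimator is
        sum_i sum_t y_{i,t} y_{i,t-1} / sum_i sum_t y_{i,t-1}^2.
        """
        return np.sum(Y[:, 1:] * Y[:, :-1]) / np.sum(Y[:, :-1] ** 2)

    def simulate_panel(N, T, rho, rng):
        """Simulate N independent stationary AR(1) series of length T+1."""
        Y = np.empty((N, T + 1))
        Y[:, 0] = rng.standard_normal(N) / np.sqrt(1 - rho**2)
        for t in range(1, T + 1):
            Y[:, t] = rho * Y[:, t - 1] + rng.standard_normal(N)
        return Y

    rng = np.random.default_rng(3)
    rho, T = 0.5, 20
    for N in (10, 100, 1000):
        est = [pooled_ls_rho(simulate_panel(N, T, rho, rng)) for _ in range(200)]
        print(f"N={N:4d}: mean={np.mean(est):.4f}, sd={np.std(est):.4f}")
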
Tse-Wei Chen, Wei Tao, Deyu Wang (2021)
In order to handle modern convolutional neural networks (CNNs) efficiently, a hardware architecture for a CNN inference accelerator is proposed that handles both depthwise convolutions and regular convolutions, which are essential building blocks of embedded-computer-vision algorithms. Different from related works, the proposed architecture can support filter kernels of different sizes with high flexibility, since it does not require extra costs for intra-kernel parallelism, and it can generate convolution results faster than the architectures of related works. The experimental results show the importance of supporting depthwise convolutions and dilated convolutions with the proposed hardware architecture. In addition to depthwise convolutions with large kernels, a new structure called the DDC layer, which combines depthwise convolutions and dilated convolutions, is also analyzed in this paper. For face detection, the computational cost decreases by 30% and the model size decreases by 20% when DDC layers are applied to the network. For image classification, the accuracy is increased by 1% by simply replacing $3 \times 3$ filters with $5 \times 5$ filters in depthwise convolutions.
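
The combination of depthwise and dilated convolutions described above is easy to express with standard primitives; the PyTorch sketch below is our own composition (channel count, 5x5 kernel, and dilation rate chosen arbitrarily, not the paper's DDC layer) and shows why the structure is cheap: the depthwise stage holds one small filter per channel, and dilation enlarges the receptive field without adding weights.

    import torch
    import torch.nn as nn

    class DepthwiseDilatedBlock(nn.Module):
        """Illustrative block combining depthwise and dilated convolution.

        groups=channels makes the convolution depthwise (one filter per channel),
        and dilation>1 enlarges the receptive field without adding weights.
        A 1x1 pointwise convolution then mixes channels, as in the usual
        depthwise-separable pattern.
        """
        def __init__(self, channels, kernel_size=5, dilation=2):
            super().__init__()
            pad = dilation * (kernel_size - 1) // 2   # keep spatial size unchanged
            self.depthwise = nn.Conv2d(channels, channels, kernel_size,
                                       padding=pad, dilation=dilation,
                                       groups=channels, bias=False)
            self.pointwise = nn.Conv2d(channels, channels, 1, bias=False)

        def forward(self, x):
            return self.pointwise(self.depthwise(x))

    block = DepthwiseDilatedBlock(channels=32)
    x = torch.randn(1, 32, 56, 56)
    print(block(x).shape)          # torch.Size([1, 32, 56, 56])

    # Parameter comparison: depthwise 5x5 + pointwise vs. a regular 5x5 convolution.
    regular = nn.Conv2d(32, 32, 5, padding=2, bias=False)
    print(sum(p.numel() for p in block.parameters()),      # 32*5*5 + 32*32 = 1824
          sum(p.numel() for p in regular.parameters()))    # 32*32*5*5 = 25600
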
