The advances in deep neural networks (DNNs) have significantly enhanced real-time detection of anomalous data in IoT applications. However, the complexity-accuracy-delay dilemma persists: complex DNN models offer higher accuracy, but typical IoT devices can barely afford the computation load, and the remedy of offloading the load to the cloud incurs long delay. In this paper, we address this challenge by proposing an adaptive anomaly detection scheme with hierarchical edge computing (HEC). Specifically, we first construct multiple anomaly detection DNN models of increasing complexity and associate each of them with a corresponding HEC layer. Then, we design an adaptive model selection scheme that is formulated as a contextual-bandit problem and solved with a reinforcement learning policy network. We also incorporate a parallel policy-training method that accelerates training by taking advantage of the distributed models. We build an HEC testbed using real IoT devices, and implement and evaluate our contextual-bandit approach on both univariate and multivariate IoT datasets. In comparison with both baseline and state-of-the-art schemes, our adaptive approach strikes the best accuracy-delay tradeoff on the univariate dataset, and achieves the best accuracy and F1-score on the multivariate dataset with only negligibly longer delay than the best (but inflexible) scheme.
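The abstract formulates model selection as a contextual bandit solved by a policy network, but does not spell out the policy or reward here. Below is a minimal sketch of that idea, assuming a linear softmax policy over K HEC layers, a hand-made context vector, and a reward of the form accuracy minus a delay penalty; all of these choices are illustrative assumptions, not the paper's implementation.

```python
# Illustrative sketch only: a linear softmax contextual-bandit policy that picks
# one of K detection models (one per HEC layer) and is updated with a
# REINFORCE-style rule on reward = accuracy - lam * delay.
import numpy as np

K, D = 3, 8                              # assumed: 3 HEC layers, 8-dim context
rng = np.random.default_rng(0)
W = np.zeros((K, D))                     # policy parameters

def select_model(x):
    """Sample a model index k ~ softmax(W x) and return (k, probabilities)."""
    logits = W @ x
    p = np.exp(logits - logits.max())
    p /= p.sum()
    return rng.choice(K, p=p), p

def policy_gradient(x, k, p, reward):
    """Gradient of reward * log pi(k | x) w.r.t. W for the linear softmax policy."""
    grad = -np.outer(p, x)
    grad[k] += x
    return reward * grad

# Toy interaction loop with a synthetic environment (not real IoT data).
lam, lr = 0.5, 0.05
for t in range(2000):
    x = rng.normal(size=D)
    k, p = select_model(x)
    accuracy = 0.70 + 0.10 * k + 0.05 * rng.normal()   # deeper model: higher accuracy
    delay = 0.10 * (k + 1) ** 2                        # deeper model: longer delay
    W += lr * policy_gradient(x, k, p, accuracy - lam * delay)
```

In the paper, the context would come from the input data itself and the rewards from each HEC layer's measured accuracy and delay; the loop above only illustrates the bandit structure of the selection problem.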
In this paper, we consider an uplink heterogeneous cloud radio access network (H-CRAN), where a macro base station (BS) coexists with many remote radio heads (RRHs). For cost savings, only the BS is connected to the baseband unit (BBU) pool via fiber links; the RRHs are instead associated with the BBU pool through wireless fronthaul links, which share the spectrum resource with the radio access network. Due to the limited fronthaul capacity, a compress-and-forward scheme is employed, such as point-to-point compression or Wyner-Ziv coding, and different decoding strategies are also considered. This work aims to maximize the uplink ergodic sum-rate (SR) by jointly optimizing the quantization noise matrix and the bandwidth allocation between the radio access network and the fronthaul links, which is a mixed time-scale problem. To reduce computational complexity and communication overhead, we introduce an approximation of the joint optimization problem based on large-dimensional random matrix theory; the approximate problem operates on a slow time scale because it depends only on statistical channel information. Finally, an algorithm based on Dinkelbach's algorithm is proposed to find the optimal solution to the approximate problem. In summary, this work provides an economical solution to the challenge of constrained fronthaul capacity, as well as a low-complexity framework for studying how bandwidth allocation and fronthaul compression affect the SR maximization problem.
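The abstract relies on Dinkelbach's algorithm, the standard routine for fractional programs of the form max N(x)/D(x): repeatedly solve the parameterized problem max N(x) - lam*D(x) and update lam to the resulting ratio until the parametric optimum reaches zero. A minimal self-contained sketch of the iteration on a toy one-dimensional objective follows; the rate and cost functions are stand-ins, not the paper's sum-rate or bandwidth model.

```python
# Hedged sketch of the Dinkelbach iteration for max_x N(x)/D(x) over x in [0, 1].
# N and D below are toy placeholders, not the paper's ergodic-SR formulation.
import numpy as np
from scipy.optimize import minimize_scalar

def N(x):  # "numerator", e.g. an achievable-rate term (illustrative)
    return np.log1p(10.0 * x)

def D(x):  # "denominator", e.g. a resource-cost term (illustrative)
    return 1.0 + x

def dinkelbach(x0=0.5, tol=1e-8, max_iter=50):
    lam = N(x0) / D(x0)
    for _ in range(max_iter):
        # inner problem: maximize N(x) - lam * D(x) over the feasible set [0, 1]
        res = minimize_scalar(lambda x: -(N(x) - lam * D(x)),
                              bounds=(0.0, 1.0), method="bounded")
        x = res.x
        f = N(x) - lam * D(x)        # parametric optimum F(lam)
        lam = N(x) / D(x)            # update the ratio
        if abs(f) < tol:             # F(lam) = 0 at the optimal ratio
            break
    return x, lam

x_opt, ratio = dinkelbach()
print(f"optimal x ~ {x_opt:.4f}, optimal ratio ~ {ratio:.4f}")
```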
The quest for biologically plausible deep learning is driven not just by the desire to explain experimentally observed properties of biological neural networks, but also by the hope of discovering more efficient methods for training artificial networks. In this paper, we propose a new algorithm named Variational Probability Flow (VPF), an extension of minimum probability flow for training binary Deep Boltzmann Machines (DBMs). We show that weight updates in VPF are local, depending only on the states and firing rates of the adjacent neurons. Unlike contrastive divergence, there is no need for Gibbs confabulations; and unlike backpropagation, alternating feedforward and feedback phases are not required. Moreover, the learning algorithm is effective for training DBMs with intra-layer connections between the hidden nodes. Experiments with MNIST and Fashion MNIST demonstrate that VPF learns reasonable features quickly, reconstructs corrupted images more accurately, and generates samples with a high estimated log-likelihood. Lastly, we note that, interestingly, if an asymmetric version of VPF exists, its weight updates directly explain experimental results in Spike-Timing-Dependent Plasticity (STDP).
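The VPF updates themselves are not given in this abstract beyond being an extension of minimum probability flow (MPF) with local updates. For orientation only, here is a small MPF gradient step for a fully visible binary Boltzmann machine with one-bit-flip connectivity; it illustrates the kind of flow-based objective VPF builds on, not the VPF algorithm itself.

```python
# Reference sketch of plain minimum probability flow (MPF), not the paper's VPF:
# minimize K = mean_x sum_i exp(0.5 * (E(x) - E(x with bit i flipped)))
# for a fully visible binary Boltzmann machine with E(x) = -0.5 x'Wx - b'x.
import numpy as np

rng = np.random.default_rng(0)
n = 6                                        # number of binary units (assumed)
W = np.zeros((n, n))                         # symmetric weights, zero diagonal
b = np.zeros(n)

def energy(x):
    return -0.5 * x @ W @ x - b @ x

def mpf_gradients(data):
    """Gradients of the MPF objective w.r.t. W and b over a batch of binary vectors."""
    gW, gb = np.zeros_like(W), np.zeros_like(b)
    for x in data:
        for i in range(n):
            x_f = x.copy()
            x_f[i] = 1 - x_f[i]              # flip one bit
            w = np.exp(0.5 * (energy(x) - energy(x_f)))
            gW += 0.25 * w * (np.outer(x_f, x_f) - np.outer(x, x))
            gb += 0.5 * w * (x_f - x)
    np.fill_diagonal(gW, 0.0)
    return gW / len(data), gb / len(data)

# Toy training loop on synthetic binary data (illustrative only).
data = rng.integers(0, 2, size=(64, n)).astype(float)
for step in range(200):
    gW, gb = mpf_gradients(data)
    W -= 0.1 * gW
    b -= 0.1 * gb
```

Note that the update to each weight involves only the states of the units it connects and the per-flip weighting factor, which is the sense in which such flow-based rules can be called local.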
We consider the problem of estimating local sensor parameters, where the local parameters and sensor observations are related through linear stochastic models. Sensors exchange messages and cooperate with each other to estimate their own local parameters iteratively. We study the Gaussian Sum-Product Algorithm over a Wireless Network (gSPAWN) procedure, which is based on belief propagation but uses fixed-size broadcast messages at each sensor instead. Compared with the popular diffusion strategies for network parameter estimation, whose per-sensor communication cost increases with network density, the gSPAWN algorithm allows each sensor to broadcast a message whose size does not depend on the network size or density, making it more suitable for wireless sensor networks. We show that the gSPAWN algorithm converges in mean and has mean-square stability under some technical sufficient conditions, and we describe an application of the gSPAWN algorithm to a network localization problem in non-line-of-sight environments. Numerical results suggest that gSPAWN generally converges much faster than the diffusion method and has lower communication costs, with comparable root mean square errors.
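The communication property highlighted in the abstract is that each sensor broadcasts a message of fixed size, independent of how many neighbors it has. The toy sketch below conveys only that broadcast structure, using a Jacobi-style iteration for a simple Gaussian smoothness model; it is a stand-in illustration, not the gSPAWN message derivation in the paper.

```python
# Toy illustration of fixed-size broadcast estimation (not the gSPAWN algorithm):
# each sensor i observes y_i = theta_i + noise and, in every round, broadcasts a
# single scalar (its current estimate) regardless of network density; neighbors
# fuse broadcasts via a Jacobi iteration for the MAP of
#   sum_i (y_i - theta_i)^2 / (2*sigma2) + (lam/2) * sum_{(i,j) in E} (theta_i - theta_j)^2
import numpy as np

rng = np.random.default_rng(0)
n, sigma2, lam = 20, 0.5, 1.0

# Random symmetric neighbor graph (assumed topology).
A = (rng.random((n, n)) < 0.2).astype(float)
A = np.triu(A, 1)
A = A + A.T

theta_true = 2.0 + rng.normal(scale=0.2, size=n)         # local parameters near a common value
y = theta_true + rng.normal(scale=np.sqrt(sigma2), size=n)

theta = y.copy()                                          # initial local estimates
for it in range(100):
    broadcast = theta.copy()                              # one scalar per sensor, fixed size
    for i in range(n):
        deg = A[i].sum()
        theta[i] = (y[i] / sigma2 + lam * A[i] @ broadcast) / (1.0 / sigma2 + lam * deg)

rmse_fused = np.sqrt(np.mean((theta - theta_true) ** 2))
rmse_raw = np.sqrt(np.mean((y - theta_true) ** 2))
print(f"RMSE fused: {rmse_fused:.3f}, RMSE raw observations: {rmse_raw:.3f}")
```

The point of the sketch is the message size: each round costs one scalar broadcast per sensor, whereas a per-neighbor (diffusion-style) exchange would grow with node degree.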