Future IoT networks consist of heterogeneous types of IoT devices (with various communication types and energy constraints) which are assumed to belong to an IoT service provider (ISP). To power backscattering-based and wireless-powered devices, the ISP has to contract with an energy service provider (ESP). This article studies the strategic interactions between the ISP and its ESP and their implications on the joint optimal time scheduling and energy trading for heterogeneous devices. To that end, we propose an economic framework using the Stackelberg game to maximize the network throughput and energy efficiency of both the ISP and ESP. Specifically, the ISP leads the game by sending its optimal service time and energy price request (which maximizes its profit) to the ESP. The ESP then optimizes and supplies the transmission power that satisfies the ISP's request (while maximizing the ESP's utility). To obtain the Stackelberg equilibrium (SE), we apply a backward induction technique which first derives a closed-form solution for the ESP. Then, to tackle the non-convex optimization problem for the ISP, we leverage the block coordinate descent and convex-concave procedure techniques to design two partitioning schemes (i.e., partial adjustment (PA) and joint adjustment (JA)) that find the optimal energy price and service time constituting local SEs. Numerical results reveal that by jointly optimizing the energy trading and the time allocation for heterogeneous IoT devices, one can achieve significant improvements in the ISP's profit compared with conventional transmission methods. Different tradeoffs between the ESP's and ISP's profits and the complexities of the PA/JA schemes can also be tuned numerically. Simulations also show that the obtained local SEs approach the socially optimal welfare when the ISP's benefit per transmitted bit is higher than a given threshold.
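The block coordinate descent structure underlying the PA/JA schemes can be sketched in a few lines. The profit surrogate below is a hypothetical concave quadratic (chosen so each one-dimensional update has a closed form), standing in for the ISP's actual objective over energy price and service time; it is an illustration of the alternating-maximization pattern, not the paper's model.

```python
# Toy block coordinate descent (BCD): alternately maximize a hypothetical
# profit surrogate f(x, y) = -(x^2 + y^2 - x*y - 2*x - 3*y) over the
# "energy price" block x and the "service time" block y.  Each update is
# the exact 1-D maximizer obtained by setting the partial derivative to 0.
def bcd(x=0.0, y=0.0, iters=100, tol=1e-12):
    for _ in range(iters):
        x_new = (y + 2.0) / 2.0       # argmax over x with y fixed
        y_new = (x_new + 3.0) / 2.0   # argmax over y with x fixed
        if abs(x_new - x) < tol and abs(y_new - y) < tol:
            return x_new, y_new
        x, y = x_new, y_new
    return x, y

x_star, y_star = bcd()
print(x_star, y_star)  # converges to the global maximizer (7/3, 8/3)
```

Because the surrogate is strictly concave, the alternating updates contract to the unique stationary point; in the non-convex ISP problem the same iteration only guarantees convergence to a local SE, which is why the paper pairs BCD with the convex-concave procedure.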
Out-of-distribution (OoD) detection is a natural downstream task for deep generative models, due to their ability to learn the input probability distribution. There are mainly two classes of approaches for OoD detection using deep generative models, viz., those based on the likelihood measure and those based on the reconstruction loss. However, both approaches are unable to carry out OoD detection effectively, especially when the OoD samples have smaller variance than the training samples. For instance, both flow-based and VAE models assign higher likelihood to images from SVHN when trained on CIFAR-10 images. We use a recently proposed generative model known as the neural rendering model (NRM) and derive metrics for OoD detection. We show that the NRM unifies both approaches since it provides a likelihood estimate and also carries out reconstruction in each layer of the neural network. Among various measures, we found the joint likelihood of latent variables to be the most effective one for OoD detection. Our results show that when trained on CIFAR-10, lower likelihood (of latent variables) is assigned to SVHN images. Additionally, we show that this metric is consistent across other OoD datasets. To the best of our knowledge, this is the first work to show consistently lower likelihood for OoD data with smaller variance with deep generative models.
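The smaller-variance pathology described above can be reproduced with a minimal density model. In the hypothetical sketch below, a diagonal Gaussian is fit to unit-variance "training" data; samples drawn with smaller variance then receive a higher average log-likelihood, because they concentrate near the density's mode, mirroring the CIFAR-10/SVHN failure mode.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 32                                    # toy "image" dimensionality

# In-distribution data: unit variance.  OoD data: smaller variance.
x_in  = rng.normal(0.0, 1.0, size=(5000, d))
x_ood = rng.normal(0.0, 0.3, size=(5000, d))

# Fit a diagonal Gaussian density model to the in-distribution data.
mu, sigma = x_in.mean(axis=0), x_in.std(axis=0)

def loglik(x):
    """Per-sample log-likelihood under the fitted diagonal Gaussian."""
    z = (x - mu) / sigma
    return -0.5 * np.sum(z**2 + np.log(2 * np.pi * sigma**2), axis=1)

# The pathology: the smaller-variance OoD set scores HIGHER on average,
# so thresholding raw likelihood cannot separate it from training data.
print(loglik(x_in).mean(), loglik(x_ood).mean())
```

This is exactly why the abstract argues for scoring with latent-variable likelihoods from the NRM rather than the raw input likelihood.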
We develop a probabilistic framework for deep learning based on the Deep Rendering Mixture Model (DRMM), a new generative probabilistic model that explicitly captures variations in data due to latent task nuisance variables. We demonstrate that max-sum inference in the DRMM yields an algorithm that exactly reproduces the operations in deep convolutional neural networks (DCNs), providing a first-principles derivation. Our framework provides new insights into the successes and shortcomings of DCNs as well as a principled route to their improvement. DRMM training via the Expectation-Maximization (EM) algorithm is a powerful alternative to DCN back-propagation, and initial training results are promising. Classification based on the DRMM and other variants outperforms DCNs in supervised digit classification, training 2-3x faster while achieving similar accuracy. Moreover, the DRMM is applicable to semi-supervised and unsupervised learning tasks, achieving results that are state-of-the-art in several categories on the MNIST benchmark and comparable to the state of the art on the CIFAR-10 benchmark.
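The EM alternation at the heart of DRMM training can be illustrated on the simplest rendering-style model, a Gaussian mixture: the E-step infers posterior responsibilities over latent classes, and the M-step re-estimates parameters in closed form. This toy one-dimensional example is an assumption-laden stand-in for the DRMM's layered E/M updates, not the actual algorithm.

```python
import numpy as np

rng = np.random.default_rng(1)
# Two well-separated 1-D clusters standing in for latent rendering classes.
x = np.concatenate([rng.normal(-3.0, 0.5, 500), rng.normal(3.0, 0.5, 500)])

mu  = np.array([-1.0, 1.0])   # initial component means
sig = np.array([1.0, 1.0])    # initial component std devs
pi  = np.array([0.5, 0.5])    # initial mixing weights

for _ in range(50):
    # E-step: posterior responsibility of each component for each point.
    dens = pi * np.exp(-0.5 * ((x[:, None] - mu) / sig) ** 2) \
           / (sig * np.sqrt(2.0 * np.pi))
    r = dens / dens.sum(axis=1, keepdims=True)
    # M-step: closed-form updates for weights, means, and variances.
    nk  = r.sum(axis=0)
    pi  = nk / len(x)
    mu  = (r * x[:, None]).sum(axis=0) / nk
    sig = np.sqrt((r * (x[:, None] - mu) ** 2).sum(axis=0) / nk)

print(np.sort(mu))  # recovered means, close to [-3, 3]
```

The appeal claimed for DRMM training is that these closed-form M-step updates replace gradient back-propagation entirely, which is where the reported 2-3x speedup comes from.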