
The Internet of Things as a Deep Neural Network

Published by: Rong Du
Publication date: 2020
Research field: Informatics Engineering
Paper language: English


An important task in the Internet of Things (IoT) is field monitoring, where multiple IoT nodes take measurements and communicate them to the base station or the cloud for processing, inference, and analysis. This communication becomes costly when the measurements are high-dimensional (e.g., videos or time-series data). IoT networks with limited bandwidth and low-power devices may not be able to support such frequent transmissions with high data rates. To ensure communication efficiency, this article proposes to model the measurement compression at IoT nodes and the inference at the base station or cloud as a deep neural network (DNN). We propose a new framework where the data to be transmitted from nodes are the intermediate outputs of a layer of the DNN. We show how to learn the model parameters of the DNN and study the trade-off between the communication rate and the inference accuracy. The experimental results show that we can save approximately 96% of transmissions with only a 2.5% degradation in inference accuracy. Our findings have the potential to enable many new IoT data analysis applications that generate large amounts of measurements.
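The paper does not publish its model as code here; the following is a minimal PyTorch sketch of the general split-DNN idea the abstract describes: the early layers run on the IoT node and emit a compact intermediate tensor, which is "transmitted" to the cloud, where the remaining layers perform inference. The architecture, layer sizes, and bottleneck width are illustrative assumptions, not the paper's actual model.

```python
import torch
import torch.nn as nn

class NodeEncoder(nn.Module):
    """Runs on the IoT node: compresses a high-dimensional measurement."""
    def __init__(self, in_dim=784, bottleneck=16):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_dim, 128), nn.ReLU(),
            nn.Linear(128, bottleneck),   # low-rate intermediate output
        )
    def forward(self, x):
        return self.net(x)

class CloudClassifier(nn.Module):
    """Runs at the base station / cloud: infers from received features."""
    def __init__(self, bottleneck=16, n_classes=10):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(bottleneck, 64), nn.ReLU(),
            nn.Linear(64, n_classes),
        )
    def forward(self, z):
        return self.net(z)

encoder, classifier = NodeEncoder(), CloudClassifier()
x = torch.randn(32, 784)            # a batch of node measurements
z = encoder(x)                      # only 16 values per sample cross the network
logits = classifier(z)              # inference happens in the cloud

# End-to-end training treats the pair as one DNN, so the communication
# bottleneck is learned jointly with the inference task.
loss = nn.functional.cross_entropy(logits, torch.randint(0, 10, (32,)))
loss.backward()
```

Because the bottleneck width directly controls how many values each node transmits, varying it traces out the rate-accuracy trade-off the abstract studies.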

Read also

The Industrial Internet of Things (IIoT) revolutionizes future manufacturing facilities by integrating Internet of Things technologies into industrial settings. With the deployment of massive numbers of IIoT devices, it is difficult for the wireless network to support ubiquitous connections with diverse quality-of-service (QoS) requirements. Although machine learning is regarded as a powerful data-driven tool for optimizing wireless networks, how to apply machine learning to massive IIoT problems with their unique characteristics remains an open question. In this paper, we first summarize the QoS requirements of typical massive non-critical and critical IIoT use cases. We then identify the unique characteristics of the massive IIoT scenario and the corresponding machine learning solutions, along with their limitations and potential research directions. We further present the existing machine learning solutions for individual-layer and cross-layer problems in massive IIoT. Last but not least, we present a case study of the massive access problem based on deep neural network and deep reinforcement learning techniques, respectively, to validate the effectiveness of machine learning in the massive IIoT scenario.
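To make the massive-access case study concrete, here is a toy tabular Q-learning sketch in which each device learns which time slot to transmit in so as to avoid collisions. The environment, reward, and hyperparameters are illustrative assumptions; the paper's case study uses full DNN and deep reinforcement learning techniques rather than this simplified bandit-style learner.

```python
import random

N_DEVICES, N_SLOTS, EPISODES = 8, 8, 5000
ALPHA, EPSILON = 0.1, 0.1

# One Q-value per (device, slot): expected reward for choosing that slot.
Q = [[0.0] * N_SLOTS for _ in range(N_DEVICES)]

for _ in range(EPISODES):
    # Each device picks a slot (epsilon-greedy over its own Q-row).
    choices = []
    for d in range(N_DEVICES):
        if random.random() < EPSILON:
            choices.append(random.randrange(N_SLOTS))
        else:
            choices.append(max(range(N_SLOTS), key=lambda s: Q[d][s]))
    # Reward 1 for a collision-free transmission, 0 otherwise.
    for d, slot in enumerate(choices):
        reward = 1.0 if choices.count(slot) == 1 else 0.0
        Q[d][slot] += ALPHA * (reward - Q[d][slot])

# Learned slot assignment: ideally one distinct slot per device.
print([max(range(N_SLOTS), key=lambda s: Q[d][s]) for d in range(N_DEVICES)])
```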
Abdullah Khanfor, 2020
In this paper, we propose a machine learning process for clustering large-scale social Internet-of-Things (SIoT) devices into several groups of related devices sharing strong relations. To this end, we generate undirected weighted graphs based on the historical dataset of IoT devices and their social relations. Using the adjacency matrices of these graphs and the IoT device features, we embed the graph nodes using a Graph Neural Network (GNN) to obtain numerical vector representations of the IoT devices. The vector representation not only reflects the characteristics of a device but also its relations with its peers. The obtained node embeddings are then fed to a conventional unsupervised learning algorithm to determine the clusters accordingly. We showcase the obtained IoT groups using two well-known clustering algorithms, specifically K-means and the density-based algorithm for discovering clusters (DBSCAN). Finally, we compare the performance of the proposed GNN-based clustering approach, in terms of coverage and modularity, to that of the deterministic Louvain community detection algorithm applied solely to the graphs created from the different relations. It is shown that the framework achieves promising preliminary results in clustering large-scale IoT systems.
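A minimal numpy/scikit-learn sketch of this pipeline follows: one graph-convolution-style propagation step produces node embeddings from the adjacency matrix and device features, which are then clustered with K-means. The random graph, feature dimensions, and single untrained propagation layer are illustrative assumptions; the paper trains a full GNN and also evaluates DBSCAN.

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
n_devices, n_features, embed_dim = 50, 8, 4

# Random undirected relation graph and per-device features (stand-ins for
# the SIoT dataset used in the paper).
A = (rng.random((n_devices, n_devices)) < 0.1).astype(float)
np.fill_diagonal(A, 0.0)
A = np.maximum(A, A.T)                      # undirected social relations
X = rng.random((n_devices, n_features))     # per-device features

# One GCN-style layer: add self-loops, symmetric normalisation, projection.
A_hat = A + np.eye(n_devices)
d_inv_sqrt = 1.0 / np.sqrt(A_hat.sum(axis=1))
A_norm = d_inv_sqrt[:, None] * A_hat * d_inv_sqrt[None, :]
W = rng.standard_normal((n_features, embed_dim))
H = np.tanh(A_norm @ X @ W)                 # embeddings mix features + relations

labels = KMeans(n_clusters=5, n_init=10, random_state=0).fit_predict(H)
print(labels)                               # one SIoT group label per device
```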
Hiroshi Watanabe, 2018
In the Internet-of-Things, the number of connected devices is expected to be extremely large, i.e., more than several tens of billions. It is, however, well known that security for the Internet-of-Things is still an open problem. In particular, it is difficult to certify the identity of connected devices and to prevent illegal spoofing. This is because conventional security technologies have mainly advanced to protect logical networks, not physical networks like the Internet-of-Things. In order to protect the Internet-of-Things with advanced security technologies, we propose a new concept (a datachain layer), which is a well-designed combination of physical chip identification and blockchain. With the proposed physical chip identification, the physical addresses of connected devices are uniquely bound to the logical addresses protected by blockchain.
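The following is a minimal sketch of the datachain idea described above: each block binds a device's physical chip identifier to its logical address, and blocks are linked by SHA-256 hashes so the binding cannot be silently altered. The block fields, identifiers, and use of Python's hashlib are illustrative assumptions, not the paper's actual design.

```python
import hashlib
import json

def make_block(prev_hash: str, chip_id: str, logical_addr: str) -> dict:
    """Bind a physical chip ID to a logical address and chain it by hash."""
    body = {"prev": prev_hash, "chip_id": chip_id, "addr": logical_addr}
    body["hash"] = hashlib.sha256(
        json.dumps(body, sort_keys=True).encode()
    ).hexdigest()
    return body

def verify(chain: list) -> bool:
    """Recompute each hash; tampering with any binding breaks the chain."""
    for blk in chain:
        body = {k: blk[k] for k in ("prev", "chip_id", "addr")}
        digest = hashlib.sha256(
            json.dumps(body, sort_keys=True).encode()
        ).hexdigest()
        if digest != blk["hash"]:
            return False
    return True

# Hypothetical device identifiers and addresses, for illustration only.
genesis = make_block("0" * 64, chip_id="chip-A1", logical_addr="10.0.0.7")
block2 = make_block(genesis["hash"], chip_id="chip-B2", logical_addr="10.0.0.8")
print(verify([genesis, block2]))   # True
```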
Deep Neural Networks (DNNs) have gained unprecedented performance due to their automated feature extraction capability. This high performance has led to the significant incorporation of DNN models in different Internet of Things (IoT) applications over the past decade. However, the colossal computation, energy, and storage requirements of DNN models make their deployment prohibitive on resource-constrained IoT devices. Therefore, several compression techniques have been proposed in recent years to reduce the storage and computation requirements of DNN models. These techniques approach DNN compression from different perspectives while minimizing the accuracy compromise, which encourages us to provide a comprehensive overview of DNN compression techniques. In this paper, we present a comprehensive review of the existing literature on compressing DNN models to reduce both storage and computation requirements. We divide the existing approaches into five broad categories, i.e., network pruning, sparse representation, bits precision, knowledge distillation, and miscellaneous, based upon the mechanism incorporated for compressing the DNN model. The paper also discusses the challenges associated with each category of DNN compression techniques. Finally, we provide a quick summary of the existing work under each category along with future directions in DNN compression.
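As a concrete instance of one surveyed category, here is a minimal PyTorch sketch of magnitude-based network pruning: weights with the smallest absolute values are zeroed, trading a little accuracy for a sparser, cheaper model. The layer size and 90% sparsity target are illustrative assumptions, not values from the survey.

```python
import torch
import torch.nn as nn

layer = nn.Linear(256, 128)
sparsity = 0.9                                    # prune 90% of the weights

with torch.no_grad():
    w = layer.weight
    # Threshold at the k-th smallest magnitude, then zero everything below it.
    k = int(sparsity * w.numel())
    threshold = w.abs().flatten().kthvalue(k).values
    mask = (w.abs() > threshold).float()          # keep only the largest weights
    w.mul_(mask)

kept = int(mask.sum())
print(f"kept {kept}/{mask.numel()} weights ({kept / mask.numel():.1%})")
```

In practice, pruning is usually followed by fine-tuning, and the zero pattern is exploited by sparse kernels or storage formats to realize the savings on-device.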
Wenrui Lin, Xijun Wang, Chao Xu, 2020
The freshness of status updates is imperative in mission-critical Internet of Things (IoT) applications. Recently, the Age of Information (AoI) has been proposed to measure the freshness of updates at the receiver. However, AoI only characterizes freshness over time and ignores freshness in the content. In this paper, we introduce a new performance metric, the Age of Changed Information (AoCI), which captures both the passage of time and the change of information content. We examine the AoCI in a time-slotted status update system, where a sensor samples the physical process and transmits update packets at a cost. We formulate a Markov Decision Process (MDP) to find the optimal updating policy that minimizes the weighted sum of the AoCI and the update cost. In particular, in the special case where the physical process is modeled by a two-state discrete-time Markov chain with equal transition probabilities, we show that the optimal policy is of threshold type with respect to the AoCI, and we derive the threshold in closed form. Finally, simulations are conducted to exhibit the performance of the threshold policy and its superiority over the zero-wait baseline policy.
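A toy simulation of the threshold policy described above: the sensor transmits, paying a cost, only when the AoCI reaches a threshold, and the AoCI resets only when the transmitted content has actually changed. The AoCI dynamics below are a simplified stand-in for the paper's model, and the parameters (flip probability, cost, weight) are illustrative assumptions rather than the derived closed-form threshold.

```python
import random

def run(threshold, p=0.3, update_cost=2.0, weight=1.0, horizon=100_000):
    """Average cost (weighted AoCI + update cost) under a threshold policy."""
    state, last_sent, aoci, total = 0, 0, 0, 0.0
    for _ in range(horizon):
        if random.random() < p:         # two-state chain, equal flip probability
            state ^= 1
        if aoci >= threshold:           # threshold-type updating policy
            total += update_cost
            if state != last_sent:      # content changed: AoCI resets
                aoci = 0
            last_sent = state
        aoci += 1
        total += weight * aoci
    return total / horizon

for t in (1, 2, 4, 8, 16):
    print(f"threshold={t:2d}  avg cost={run(t):.2f}")
```

Sweeping the threshold this way exposes the trade-off the MDP formalizes: updating too eagerly wastes transmission cost, while waiting too long lets the AoCI grow.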
