
Age-Optimal Power Allocation in Industrial IoT: A Risk-Sensitive Federated Learning Approach

Published by: Chen-Feng Liu
Publication date: 2020
Research field: Informatics Engineering
Paper language: English





This work studies a real-time environment-monitoring scenario in the industrial Internet of Things, where wireless sensors proactively collect environmental data and transmit it to the controller. We adopt the notion of risk-sensitivity from financial mathematics as the objective, jointly minimizing the mean, variance, and other higher-order statistics of the network energy consumption, subject to constraints on the age-of-information (AoI) threshold-violation probability and on the AoI exceedances over a pre-defined threshold. We characterize extreme AoI staleness using results from extreme value theory and propose a distributed power allocation approach that weaves together principles of Lyapunov optimization and federated learning (FL). Simulation results demonstrate that the proposed FL-based distributed solution is on par with the centralized baseline while consuming 28.50% less system energy, and that it outperforms the other baselines.
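
The "risk-sensitivity" the abstract borrows from financial mathematics is commonly formalized with the entropic risk measure, whose Taylor expansion in the risk parameter penalizes the mean, the variance, and higher-order cumulants of a cost at once. The Python sketch below illustrates that idea on synthetic energy samples; the risk parameter theta, the gamma-distributed costs, and the function names are illustrative assumptions, not the paper's exact formulation.

    import numpy as np

    def entropic_risk(cost_samples, theta):
        """Entropic risk: (1/theta) * log E[exp(theta * X)].
        Expanding in theta gives E[X] + (theta/2) * Var[X] + higher-order
        cumulants, i.e. a joint mean/variance/higher-moment objective."""
        x = theta * np.asarray(cost_samples)
        m = x.max()  # log-sum-exp trick for numerical stability
        return (m + np.log(np.mean(np.exp(x - m)))) / theta

    rng = np.random.default_rng(0)
    energy = rng.gamma(shape=2.0, scale=0.5, size=10_000)  # synthetic energy costs

    for theta in (0.1, 1.0, 5.0):  # larger theta = more risk-averse
        print(f"theta={theta}: risk={entropic_risk(energy, theta):.3f}, "
              f"mean={energy.mean():.3f}")

Larger theta values push the objective away from the plain mean toward penalizing energy-consumption spikes, which is the behavior a risk-sensitive formulation is after.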




See also

While information delivery in the industrial Internet of Things demands reliability and latency guarantees, the freshness of the controller's available information, measured by the age of information (AoI), is paramount for high-performing industrial automation. The problem in this work is cast as the minimization of the sensors' transmit power subject to a peak-AoI requirement and a probabilistic constraint on queuing latency. We further characterize the tail behavior of the latency by a generalized Pareto distribution (GPD) in order to solve the power allocation problem through Lyapunov optimization. As each sensor uses its own data to locally train the GPD model, we incorporate federated learning and propose a local-model selection approach that accounts for correlation among the sensors' training data. Numerical results show the tradeoff between the transmit power, the peak AoI, and the delay's tail distribution. Furthermore, we verify the superiority of the proposed correlation-aware approach for selecting the local models in federated learning over an existing baseline.
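
The GPD-based tail characterization mentioned above follows the standard peaks-over-threshold recipe from extreme value theory: excesses of the latency over a high threshold are approximately generalized-Pareto distributed. Below is a minimal sketch with scipy, assuming synthetic exponential latencies and a 95th-percentile threshold (both illustrative choices, not the paper's model):

    import numpy as np
    from scipy.stats import genpareto

    rng = np.random.default_rng(1)
    latency = rng.exponential(scale=2.0, size=50_000)  # synthetic queuing delays

    # Peaks-over-threshold: excesses over a high threshold are ~GPD.
    threshold = np.quantile(latency, 0.95)
    excesses = latency[latency > threshold] - threshold

    shape, loc, scale = genpareto.fit(excesses, floc=0.0)  # location fixed at 0
    print(f"GPD shape (xi) = {shape:.3f}, scale (sigma) = {scale:.3f}")

    # Estimated probability that the delay exceeds a deadline d > threshold.
    d = threshold + 3.0
    tail_prob = (excesses.size / latency.size) * genpareto.sf(
        d - threshold, shape, loc=0.0, scale=scale)
    print(f"P(delay > {d:.2f}) ~ {tail_prob:.4f}")

In the paper's setting, each sensor would fit such a model on its own delay samples, with federated learning used to share the locally trained GPD models.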
The Industrial Internet of Things (IIoT) lays out a new paradigm for Industry 4.0 and opens the way to a new industrial era. Smart machines and smart factories increasingly rely on machine learning and deep learning models for intelligence. However, storing the data and communicating it to the cloud and end devices raises privacy-preservation issues. To address this issue, researchers are now implementing federated learning (FL) in IIoT to provide safe, accurate, robust, and unbiased models. Integrating FL in IIoT ensures that no local sensitive data is exchanged, as distributing learning models over the edge devices has become more common with FL; only encrypted notifications and parameters are communicated to the central server. In this paper, we provide a thorough overview of integrating FL with IIoT in terms of privacy, resource, and data management. The survey starts by articulating IIoT characteristics and the fundamentals of distributed learning and FL. The motivations behind integrating IIoT and FL, namely data-privacy preservation and on-device learning, are then summarized. We then discuss the potential of using machine learning, deep learning, and blockchain techniques for FL in secure IIoT, and we analyze and summarize ways to handle heterogeneous, large-scale data. Comprehensive background on data and resource management is then presented, followed by applications of IIoT with FL in the healthcare and automobile industries. Finally, we shed light on challenges, some possible solutions, and potential directions for future research.
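
To make concrete the survey's point that FL exchanges only model parameters rather than raw data, here is a minimal federated-averaging round; it follows the standard FedAvg recipe rather than any algorithm from the survey itself, and the linear model and client data are placeholders:

    import numpy as np

    def local_update(w, X, y, lr=0.1, epochs=5):
        """One client's local training (linear model, full-batch gradient
        descent). Only the updated parameter vector leaves the device."""
        w = w.copy()
        for _ in range(epochs):
            w -= lr * 2 * X.T @ (X @ w - y) / len(y)
        return w

    def fedavg_round(w_global, clients):
        """Server averages client models, weighted by local dataset size."""
        updates = [(local_update(w_global, X, y), len(y)) for X, y in clients]
        total = sum(n for _, n in updates)
        return sum(n * w for w, n in updates) / total

    rng = np.random.default_rng(2)
    w_true = np.array([1.0, -2.0])
    clients = []
    for _ in range(4):  # four clients whose raw data never leaves the device
        X = rng.normal(size=(int(rng.integers(50, 150)), 2))
        clients.append((X, X @ w_true + 0.1 * rng.normal(size=len(X))))

    w = np.zeros(2)
    for _ in range(20):
        w = fedavg_round(w, clients)
    print("learned:", np.round(w, 3), "true:", w_true)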
Device failure detection is one of the most essential problems in the industrial Internet of Things (IIoT). However, in conventional IIoT device failure detection, client devices need to upload raw data to the central server for model training, which might lead to the disclosure of sensitive business data. Therefore, in this paper, to ensure client data privacy, we propose a blockchain-based federated learning approach for device failure detection in IIoT. First, we present a platform architecture of blockchain-based federated learning systems for failure detection in IIoT, which enables verifiable integrity of client data. In this architecture, each client periodically creates a Merkle tree, in which each leaf node represents a client data record, and stores the tree root on a blockchain. Further, to address the data heterogeneity issue in IIoT failure detection, we propose a novel centroid distance weighted federated averaging (CDW_FedAvg) algorithm that takes into account the distance between the positive-class and negative-class centroids of each client's dataset. In addition, to motivate clients to participate in federated learning, a smart contract based incentive mechanism is designed depending on the size and the centroid distance of the client data used in local model training. A prototype of the proposed architecture is implemented with our industry partner and evaluated in terms of feasibility, accuracy, and performance. The results show that the approach is feasible and has satisfactory accuracy and performance.
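
The abstract does not spell out the exact CDW_FedAvg weighting, so the sketch below shows one plausible reading: scale each client's aggregation weight by its dataset size and by the Euclidean distance between the centroids of its positive and negative classes. Treat the combination rule as an assumption for illustration, not the authors' exact formula.

    import numpy as np

    def centroid_distance(X, y):
        """Euclidean distance between positive- and negative-class centroids."""
        return np.linalg.norm(X[y == 1].mean(axis=0) - X[y == 0].mean(axis=0))

    def cdw_weights(client_data):
        """Assumed weighting: dataset size x centroid distance, normalized.
        Intuition: clients with better-separated classes contribute more;
        the paper's exact rule may differ."""
        raw = np.array([len(y) * centroid_distance(X, y) for X, y in client_data])
        return raw / raw.sum()

    def cdw_fedavg(models, client_data):
        """Centroid-distance-weighted average of client parameter vectors."""
        return sum(wk * mk for wk, mk in zip(cdw_weights(client_data), models))

    rng = np.random.default_rng(3)
    data, models = [], []
    for _ in range(3):
        X = rng.normal(size=(100, 2))
        y = (rng.random(100) > 0.5).astype(int)
        X[y == 1] += rng.uniform(0.5, 2.0)  # class separation varies per client
        data.append((X, y))
        models.append(rng.normal(size=2))
    print("aggregation weights:", np.round(cdw_weights(data), 3))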
Since edge device failures (i.e., anomalies) seriously affect the production of industrial products in the Industrial IoT (IIoT), accurately and timely detecting anomalies is becoming increasingly important. Furthermore, the data collected by edge devices may contain users' private data, which challenges current detection approaches, as user privacy has attracted growing public concern in recent years. With this focus, this paper proposes a new communication-efficient, on-device federated learning (FL)-based deep anomaly detection framework for sensing time-series data in IIoT. Specifically, we first introduce an FL framework that enables decentralized edge devices to collaboratively train an anomaly detection model, which improves its generalization ability. Second, we propose an Attention Mechanism-based Convolutional Neural Network-Long Short Term Memory (AMCNN-LSTM) model to accurately detect anomalies. The AMCNN-LSTM model uses attention mechanism-based CNN units to capture important fine-grained features, thereby preventing memory loss and gradient dispersion problems, while retaining the advantages of the LSTM unit in predicting time-series data. Third, to adapt the proposed framework to the timeliness requirements of industrial anomaly detection, we propose a gradient compression mechanism based on Top-k selection to improve communication efficiency. Extensive experimental studies on four real-world datasets demonstrate that the proposed framework accurately and timely detects anomalies and also reduces the communication overhead by 50% compared to a federated learning framework that does not use a gradient compression scheme.
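
The Top-k gradient compression step is simple to sketch: keep only the k largest-magnitude gradient entries, transmit them as (index, value) pairs, and zero out the rest. The local residual accumulation ("error feedback") shown below is a common companion technique and an assumption here, not something the abstract states.

    import numpy as np

    def topk_compress(grad, k):
        """Keep the k largest-magnitude entries; return (indices, values, residual)."""
        idx = np.argpartition(np.abs(grad), -k)[-k:]
        sparse = np.zeros_like(grad)
        sparse[idx] = grad[idx]
        return idx, grad[idx], grad - sparse  # residual stays on the device

    def topk_decompress(idx, vals, size):
        """Server-side reconstruction of the sparse gradient."""
        g = np.zeros(size)
        g[idx] = vals
        return g

    rng = np.random.default_rng(4)
    grad = rng.normal(size=1000)
    residual = np.zeros_like(grad)

    k = 50  # transmit 5% of the entries
    idx, vals, residual = topk_compress(grad + residual, k)
    print(f"sent {k} of {grad.size} entries, carrying "
          f"{np.abs(vals).sum() / np.abs(grad).sum():.1%} of the gradient mass")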
In the last few years there has been significant growth in the area of wireless communication. IEEE 802.16/WiMAX is a network standard designed to provide high-speed, wide-area broadband wireless access; WiMAX is an emerging wireless technology for creating multi-hop mesh networks. Future-generation networks will be characterized by variable and high data rates, quality-of-service (QoS) guarantees, and seamless mobility both within a network and between networks of different technologies and service providers. A technology developed to meet these requirements, standardized by IEEE as 802.16 and also called WiMAX (Worldwide Interoperability for Microwave Access), aims to bring long-range connectivity, high data rates, strong security, low power consumption, excellent quality of service, and low deployment costs to wireless access technology at the metropolitan level. In this paper we first analyze the performance of location-based resource allocation for WiMAX and WLAN-WiMAX clients, and in a second phase we examine rate-adaptive algorithms. The base station (BS) first performs ranging for all subscribers, then establishes links with them, and in the final phase allocates resources by assigning subcarriers according to demand (UL), i.e., video, voice, and data applications. We propose a linear approach, Active-Set optimization, and a Genetic Algorithm for resource allocation in downlink mobile WiMAX networks. The purpose of the proposed algorithms is to maximize total throughput. Simulation results show that the Genetic Algorithm and the Active-Set algorithm outperform previous methods in terms of capacity, although the GA has higher complexity than the Active-Set method.
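
As a rough illustration of the genetic-algorithm approach to downlink subcarrier allocation, the sketch below evolves subcarrier-to-user assignments to maximize total Shannon capacity. The channel model, population size, and fitness function are simplifying assumptions for the example, not the authors' formulation (which also accounts for per-application demands).

    import numpy as np

    rng = np.random.default_rng(5)
    N_SUB, N_USERS = 64, 8
    # Per-user, per-subcarrier SNR (Rayleigh fading -> exponential power gains).
    snr = rng.exponential(scale=10.0, size=(N_USERS, N_SUB))

    def fitness(assign):
        """Total capacity (bit/s/Hz) when subcarrier s goes to user assign[s]."""
        return np.log2(1.0 + snr[assign, np.arange(N_SUB)]).sum()

    def ga_allocate(pop_size=40, generations=200, mut_rate=0.05):
        pop = rng.integers(0, N_USERS, size=(pop_size, N_SUB))
        for _ in range(generations):
            fit = np.array([fitness(c) for c in pop])
            new_pop = [pop[fit.argmax()].copy()]  # elitism: keep the best
            while len(new_pop) < pop_size:
                # Binary tournament selection of two parents.
                a, b = (pop[max(rng.integers(0, pop_size, 2), key=lambda i: fit[i])]
                        for _ in range(2))
                cut = rng.integers(1, N_SUB)  # one-point crossover
                child = np.concatenate([a[:cut], b[cut:]])
                flip = rng.random(N_SUB) < mut_rate  # mutation: random reassignment
                child[flip] = rng.integers(0, N_USERS, flip.sum())
                new_pop.append(child)
            pop = np.array(new_pop)
        return max(pop, key=fitness)

    best = ga_allocate()
    print(f"GA capacity: {fitness(best):.1f} bit/s/Hz "
          f"(best-user bound: {np.log2(1 + snr.max(axis=0)).sum():.1f})")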