
Machine Learning and the Internet of Things Enable Steam Flood Optimization for Improved Oil Production

Posted by: Mi Yan
Publication date: 2019
Research language: English





Recently developed machine learning techniques, in association with the Internet of Things (IoT), enable a method for increasing oil production from heavy-oil wells. Steam flood injection, a widely used enhanced oil recovery technique, uses thermal and gravitational potential to mobilize and dilute heavy oil in situ to increase oil production. In contrast to traditional steam flood simulations based on principles of classical physics, we introduce here an approach using cutting-edge machine learning techniques that have the potential to provide a better way to describe the performance of steam flood. We propose a workflow to address a category of time-series data that can be analyzed with supervised machine learning algorithms and IoT. We demonstrate the effectiveness of the technique for forecasting oil production in steam flood scenarios. Moreover, we build an optimization system that recommends an optimal steam allocation plan, and show that it leads to a 3% improvement in oil production. We develop a minimum viable product on a cloud platform that implements real-time data collection, transfer, and storage, as well as the training and implementation of a cloud-based machine learning model. This workflow also offers an applicable solution to other problems with similar time-series data structures, such as predictive maintenance.
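As a rough illustration of the kind of workflow the abstract describes, the sketch below frames a single well's IoT readings as a supervised time-series problem: lagged steam-injection and oil-production values become features, and a standard regressor forecasts the next production value. The column names, lag choices, and file name are assumptions for illustration, not details taken from the paper.

```python
# Minimal sketch of a time-series forecasting workflow with supervised ML.
# Schema (date, steam_rate, oil_rate) and lag windows are assumed, not from the paper.
import pandas as pd
from sklearn.ensemble import GradientBoostingRegressor

def make_lag_features(df, target="oil_rate", lags=(1, 2, 3, 7)):
    """Turn a per-well daily time series into a supervised learning table."""
    out = df.copy()
    for lag in lags:
        out[f"{target}_lag{lag}"] = out[target].shift(lag)
        out[f"steam_rate_lag{lag}"] = out["steam_rate"].shift(lag)
    return out.dropna()

# Hypothetical export of IoT sensor readings for one well
df = pd.read_csv("well_timeseries.csv", parse_dates=["date"])
data = make_lag_features(df)
X = data.drop(columns=["date", "oil_rate"])
y = data["oil_rate"]

split = int(len(data) * 0.8)              # time-ordered split: no shuffling, no leakage
model = GradientBoostingRegressor().fit(X[:split], y[:split])
print("holdout R^2:", model.score(X[split:], y[split:]))
```

A steam-allocation optimizer along the lines described in the abstract could then search over candidate steam rates per well and pick the plan whose forecasted total production is highest, subject to a total-steam budget.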


Read also

Federated learning (FL) and split learning (SL) are state-of-the-art distributed machine learning techniques that enable model training without accessing raw data on clients or end devices. However, their comparative training performance under real-world, resource-restricted Internet of Things (IoT) device settings, e.g., Raspberry Pi, remains barely studied and, to our knowledge, has not yet been evaluated and compared, leaving practitioners without a convenient reference. This work first provides empirical comparisons of FL and SL in real-world IoT settings regarding (i) learning performance with heterogeneous data distributions and (ii) on-device execution overhead. Our analyses demonstrate that the learning performance of SL is better than FL under an imbalanced data distribution but worse than FL under an extreme non-IID data distribution. Recently, FL and SL have been combined to form splitfed learning (SFL) to leverage each of their benefits (e.g., the parallel training of FL and the lightweight on-device computation requirement of SL). This work then considers FL, SL, and SFL, and mounts them on Raspberry Pi devices to evaluate their performance, including training time, communication overhead, power consumption, and memory usage. Beyond these evaluations, we apply two optimizations. First, we generalize SFL by carefully examining the possibility of a hybrid type of model training at the server side. The generalized SFL merges sequential (dependent) and parallel (independent) processes of model training and is thus beneficial for a system with large-scale IoT devices, specifically in the server-side operations. Second, we propose pragmatic techniques to substantially reduce the communication overhead, by up to four times, for SL and (generalized) SFL.
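For context, a minimal split-learning sketch (PyTorch, purely illustrative and not the paper's code) shows the mechanism being benchmarked: the model is cut into a client half that runs on the IoT device and a server half, so only intermediate activations and their gradients cross the network rather than raw data. Layer sizes and the batch are placeholders.

```python
# Illustrative split-learning step: client and server each hold half the model.
import torch
import torch.nn as nn

client_net = nn.Sequential(nn.Linear(32, 64), nn.ReLU())                       # on IoT device
server_net = nn.Sequential(nn.Linear(64, 32), nn.ReLU(), nn.Linear(32, 10))    # on server
opt_c = torch.optim.SGD(client_net.parameters(), lr=0.01)
opt_s = torch.optim.SGD(server_net.parameters(), lr=0.01)
loss_fn = nn.CrossEntropyLoss()

x, y = torch.randn(16, 32), torch.randint(0, 10, (16,))   # dummy local batch

# Client forward pass: only the "smashed" activations leave the device.
act = client_net(x)
act_sent = act.detach().requires_grad_()

# Server forward/backward, then the gradient at the cut layer is returned.
loss = loss_fn(server_net(act_sent), y)
opt_s.zero_grad(); loss.backward(); opt_s.step()

# Client backward pass using the gradient received from the server.
opt_c.zero_grad(); act.backward(act_sent.grad); opt_c.step()
```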
The Industrial Internet of Things (IIoT) revolutionizes future manufacturing facilities by integrating Internet of Things technologies into industrial settings. With the deployment of massive numbers of IIoT devices, it is difficult for the wireless network to support ubiquitous connections with diverse quality-of-service (QoS) requirements. Although machine learning is regarded as a powerful data-driven tool for optimizing wireless networks, how to apply machine learning to the massive IIoT problems with their unique characteristics remains unsolved. In this paper, we first summarize the QoS requirements of typical massive non-critical and critical IIoT use cases. We then identify unique characteristics of the massive IIoT scenario and the corresponding machine learning solutions, with their limitations and potential research directions. We further present the existing machine learning solutions for individual-layer and cross-layer problems in massive IIoT. Last but not least, we present a case study of the massive access problem based on deep neural network and deep reinforcement learning techniques, respectively, to validate the effectiveness of machine learning in the massive IIoT scenario.
The Internet of Things (IoT) is becoming an indispensable part of everyday life, enabling a variety of emerging services and applications. However, the presence of rogue IoT devices has exposed the IoT to untold risks with severe consequences. The first step in securing the IoT is detecting rogue IoT devices and identifying legitimate ones. Conventional approaches use cryptographic mechanisms to authenticate and verify legitimate devices' identities. However, cryptographic protocols are not available in many systems. Meanwhile, these methods are less effective when legitimate devices can be exploited or encryption keys are disclosed. Therefore, non-cryptographic IoT device identification and rogue device detection become efficient solutions to secure existing systems and will provide additional protection to systems with cryptographic protocols. Non-cryptographic approaches require more effort and are not yet adequately investigated. In this paper, we provide a comprehensive survey on machine learning technologies for the identification of IoT devices, along with the detection of compromised or falsified ones, from the viewpoint of passive surveillance agents or network operators. We classify IoT device identification and detection into four categories: device-specific pattern recognition, deep-learning-enabled device identification, unsupervised device identification, and abnormal device detection. Meanwhile, we discuss various ML-related enabling technologies for this purpose. These enabling technologies include learning algorithms, feature engineering on network traffic traces and wireless signals, continual learning, and abnormality detection.
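As a toy illustration of the surveyed "device-specific pattern recognition" category, the sketch below trains an off-the-shelf classifier on per-flow traffic features to label device types. The features and labels here are synthetic stand-ins; a real system would extract them from captured traffic traces or wireless signals.

```python
# Illustrative device identification from traffic features (synthetic data only).
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
# Stand-in features: mean packet size, inter-arrival time, flow duration, port entropy
X = rng.normal(size=(1000, 4))
y = rng.integers(0, 5, size=1000)          # 5 hypothetical device types

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
clf = RandomForestClassifier(n_estimators=100).fit(X_tr, y_tr)
print("device-identification accuracy:", clf.score(X_te, y_te))
```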
Recent history has witnessed disruptive advances in disciplines related to information and communication technologies that have laid a rich technological ecosystem for the growth and maturity of latent paradigms in this domain. Among them, sensor networks have evolved from the originally conceived set-up, where hundreds of nodes with sensing and actuating functionalities were deployed to capture information from their environment and act accordingly (coining the so-called wireless sensor network concept), to the provision of such functionalities embedded in quotidian objects that communicate and work together to collaboratively accomplish complex tasks based on the information they acquire by sensing the environment. This is nowadays a reality, embracing the original idea of an Internet of Things (IoT) forged in the late twentieth century, yet featuring unprecedented scales, capabilities, and applications ignited by new radio interfaces, communication protocols, and intelligent data-based models. This chapter examines the latest findings reported in the literature around these topics, with a clear focus on IoT communications, protocols, and platforms, towards ultimately identifying opportunities and trends that will be at the forefront of IoT-related research in the near future.
The use of machine learning to guide clinical decision making has the potential to worsen existing health disparities. Several recent works frame the problem as that of algorithmic fairness, a framework that has attracted considerable attention and criticism. However, the appropriateness of this framework is unclear due to both ethical as well as technical considerations, the latter of which include trade-offs between measures of fairness and model performance that are not well understood for predictive models of clinical outcomes. To inform the ongoing debate, we conduct an empirical study to characterize the impact of penalizing group fairness violations on an array of measures of model performance and group fairness. We repeat the analyses across multiple observational healthcare databases, clinical outcomes, and sensitive attributes. We find that procedures that penalize differences between the distributions of predictions across groups induce nearly universal degradation of multiple performance metrics within groups. On examining the secondary impact of these procedures, we observe heterogeneity of the effect of these procedures on measures of fairness in calibration and ranking across experimental conditions. Beyond the reported trade-offs, we emphasize that analyses of algorithmic fairness in healthcare lack the contextual grounding and causal awareness necessary to reason about the mechanisms that lead to health disparities, as well as about the potential of algorithmic fairness methods to counteract those mechanisms. In light of these limitations, we encourage researchers building predictive models for clinical use to step outside the algorithmic fairness frame and engage critically with the broader sociotechnical context surrounding the use of machine learning in healthcare.
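To make the kind of procedure studied here concrete, the sketch below trains a simple logistic model with an added penalty on the gap between mean predictions across two groups, one common way of penalizing differences between the distributions of predictions across groups. The data, sensitive attribute, and penalty weight are synthetic placeholders, not the paper's setup.

```python
# Illustrative fairness-penalized training: binary cross-entropy plus a
# demographic-parity-style penalty on the between-group prediction gap.
import torch

torch.manual_seed(0)
X = torch.randn(500, 8)
y = torch.randint(0, 2, (500,)).float()
group = torch.randint(0, 2, (500,))        # synthetic sensitive attribute

w = torch.zeros(8, requires_grad=True)
b = torch.zeros(1, requires_grad=True)
opt = torch.optim.Adam([w, b], lr=0.05)
lam = 1.0                                  # strength of the fairness penalty

for _ in range(200):
    p = torch.sigmoid(X @ w + b)
    bce = torch.nn.functional.binary_cross_entropy(p, y)
    gap = (p[group == 0].mean() - p[group == 1].mean()).abs()
    loss = bce + lam * gap                 # penalize the between-group gap
    opt.zero_grad(); loss.backward(); opt.step()

print("final gap in mean predictions:", gap.item())
```

Raising lam shrinks the between-group gap at the cost of predictive accuracy, which is the within-group performance degradation the study measures.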


