
Deep Learning Inversion of Electrical Resistivity Data

Published by: Dr. Peng Jiang
Publication date: 2019
Research field: Informatics Engineering
Paper language: English


The inverse problem of electrical resistivity surveys (ERSs) is difficult because of its nonlinear and ill-posed nature. For this task, traditional linear inversion methods still face challenges such as suboptimal approximation and initial model selection. Inspired by the remarkable nonlinear mapping ability of deep learning approaches, in this article we propose to build the mapping from apparent resistivity data (input) to the resistivity model (output) directly with convolutional neural networks (CNNs). However, the vertically varying character of patterns in the apparent resistivity data may cause ambiguity when using CNNs because of their weight-sharing and effective receptive field properties. To address this potential issue, we supply an additional tier feature map to the CNN to help it capture the relationship between input and output. Based on the prevalent U-Net architecture, we design our network (ERSInvNet), which can be trained end to end and achieves very fast inference during testing. We further introduce a depth weighting function and a smoothness constraint into the loss function to improve inversion accuracy in the deep region and suppress false anomalies. Six groups of experiments are considered to demonstrate the feasibility and efficiency of the proposed method. According to the comprehensive qualitative analysis and quantitative comparison, ERSInvNet with the tier feature map, the smoothness constraint, and the depth weighting function together achieves the best performance.
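
The loss design described above (a depth weighting function plus a smoothness constraint on top of the data misfit) can be illustrated with a short sketch. The PyTorch snippet below is a minimal, hypothetical illustration assuming a (batch, 1, depth, width) resistivity grid; the function name ersinv_loss, the weighting exponent beta, and the regularization weight lam are assumptions for illustration, not the authors' implementation.

import torch

def ersinv_loss(pred, target, beta=1.0, lam=0.01):
    """Hypothetical composite loss: depth-weighted data misfit + smoothness.

    pred, target: (batch, 1, depth, width) resistivity models.
    beta: exponent of the depth weighting (assumed form, not the paper's).
    lam: weight of the smoothness regularizer.
    """
    b, c, d, w = pred.shape
    # Depth weighting: give deeper cells larger weight so the misfit in the
    # deep region is not dominated by the shallow part of the model.
    depth = torch.arange(1, d + 1, dtype=pred.dtype, device=pred.device)
    weight = (depth ** beta).view(1, 1, d, 1)
    data_term = (weight * (pred - target) ** 2).mean()

    # Smoothness constraint: penalize first-order differences of the
    # predicted model to suppress isolated false anomalies.
    dz = pred[:, :, 1:, :] - pred[:, :, :-1, :]
    dx = pred[:, :, :, 1:] - pred[:, :, :, :-1]
    smooth_term = dz.abs().mean() + dx.abs().mean()

    return data_term + lam * smooth_term

The tier feature map mentioned in the abstract would typically be concatenated to the apparent resistivity data as an additional input channel before the tensor enters the U-Net.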


Read also

Shucai Li, Bin Liu, Yuxiao Ren (2019)
We propose a new method to tackle the mapping challenge from time-series data to spatial image in the field of seismic exploration, i.e., reconstructing the velocity model directly from seismic data by deep neural networks (DNNs). The conventional way of addressing this ill-posed inversion problem is through iterative algorithms, which suffer from poor nonlinear mapping and strong nonuniqueness. Other attempts may either introduce human intervention errors or underuse the seismic data. The challenge for DNNs mainly lies in the weak spatial correspondence and the uncertain reflection-reception relationship between seismic data and velocity model, as well as the time-varying property of seismic data. To tackle these challenges, we propose end-to-end seismic inversion networks (SeisInvNets) with novel components to make the best use of all seismic data. Specifically, we start with every seismic trace and enhance it with its neighborhood information, its observation setup, and the global context of its corresponding seismic profile. From the enhanced seismic traces, spatially aligned feature maps can be learned and further concatenated to reconstruct a velocity model. In general, we let every seismic trace contribute to the reconstruction of the whole velocity model by finding spatial correspondence. The proposed SeisInvNet consistently produces improvements over the baselines and achieves promising performance on our synthesized and proposed SeisInv data set according to various evaluation metrics. The inversion results are more consistent with the target in terms of velocity values, subsurface structures, and geological interfaces. Moreover, the mechanism and the generalization of the proposed method are discussed and verified. Nevertheless, the generalization of deep-learning-based inversion methods to real data is still challenging, and incorporating physics may be one potential solution.
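
To make the trace-wise mapping idea concrete, here is a heavily simplified PyTorch sketch in the same spirit: each trace is encoded independently into a spatially aligned feature map, and the stacked maps are decoded into a velocity model. The module name TraceWiseInversion, the layer sizes, and the omission of the neighborhood/observation-setup/global-context enhancement are all simplifying assumptions, not the authors' SeisInvNet code.

import torch
import torch.nn as nn

class TraceWiseInversion(nn.Module):
    """Simplified sketch: encode each seismic trace into a spatial feature map,
    then decode the stacked maps into a velocity model (assumed sizes)."""

    def __init__(self, n_traces=32, trace_len=1000, map_hw=(32, 32)):
        super().__init__()
        self.map_hw = map_hw
        # Per-trace encoder: 1D convolutions over time, then a linear layer
        # that produces one (H*W) spatially aligned feature map per trace.
        self.encoder = nn.Sequential(
            nn.Conv1d(1, 16, kernel_size=9, stride=4, padding=4), nn.ReLU(),
            nn.Conv1d(16, 32, kernel_size=9, stride=4, padding=4), nn.ReLU(),
            nn.Flatten(),
            nn.LazyLinear(map_hw[0] * map_hw[1]),
        )
        # Decoder: fuse the per-trace maps (stacked as channels) into one model.
        self.decoder = nn.Sequential(
            nn.Conv2d(n_traces, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, 1, 3, padding=1),
        )

    def forward(self, traces):          # traces: (batch, n_traces, trace_len)
        b, n, t = traces.shape
        feats = self.encoder(traces.reshape(b * n, 1, t))   # (b*n, H*W)
        feats = feats.reshape(b, n, *self.map_hw)           # one map per trace
        return self.decoder(feats)                          # (b, 1, H, W)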
Magnetic resonance-electrical properties tomography (MR-EPT) is a technique used to estimate the conductivity and permittivity of tissues from MR measurements of the transmit magnetic field. Different reconstruction methods are available; however, all of these methods have limitations that hamper their clinical applicability. Standard Helmholtz-based MR-EPT methods are severely affected by noise. Iterative reconstruction methods such as contrast source inversion-EPT (CSI-EPT) are typically time-consuming and dependent on their initialization. Deep learning (DL) based methods require a large amount of training data before sufficient generalization can be achieved. Here, we investigate the benefits achievable with a hybrid approach, i.e., using MR-EPT or DL-EPT reconstructions as initialization guesses for standard 3D CSI-EPT. Using realistic electromagnetic simulations at 3 T and 7 T, the accuracy and precision of hybrid CSI reconstructions are compared to standard 3D CSI-EPT reconstructions. Our results indicate that a hybrid method consisting of an initial DL-EPT reconstruction followed by a 3D CSI-EPT reconstruction would be beneficial. DL-EPT combined with standard 3D CSI-EPT exploits the power of data-driven DL-based EPT reconstructions, while the subsequent CSI-EPT facilitates better generalization by providing data consistency.
Bing Liu (2020)
Ramp metering, which uses traffic signals to regulate vehicle flows from on-ramps, has been widely implemented to improve vehicle mobility on freeways. Previous studies generally update signal timings in real time based on predefined traffic measures collected by point detectors, such as traffic volumes and occupancies. Compared with point detectors, traffic cameras, which have been increasingly deployed on road networks, can cover larger areas and provide more detailed traffic information. In this work, we propose a deep reinforcement learning (DRL) method to explore the potential of traffic video data for improving the efficiency of ramp metering. The proposed method uses traffic video frames as inputs and learns the optimal control strategies directly from the high-dimensional visual inputs. A real-world case study demonstrates that, in comparison with a state-of-the-practice method, the proposed DRL method results in 1) lower travel times in the mainline, 2) shorter vehicle queues at the on-ramp, and 3) higher traffic flows downstream of the merging area. The results suggest that the proposed method is able to extract useful information from the video data for better ramp metering control.
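
As a rough illustration of the kind of agent described above, the sketch below shows a DQN-style value network that maps a stack of camera frames to Q-values over ramp-signal actions. The frame size, the two-action set (hold red / turn green), and the network shape are assumptions for illustration; the paper's actual DRL formulation, reward, and action space may differ.

import torch
import torch.nn as nn

class RampMeteringQNet(nn.Module):
    """DQN-style value network: stacked grayscale camera frames -> Q-values
    for two assumed ramp-signal actions (0 = hold red, 1 = turn green)."""

    def __init__(self, n_frames=4, n_actions=2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(n_frames, 32, kernel_size=8, stride=4), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=4, stride=2), nn.ReLU(),
            nn.Conv2d(64, 64, kernel_size=3, stride=1), nn.ReLU(),
            nn.Flatten(),
        )
        self.head = nn.Sequential(nn.LazyLinear(512), nn.ReLU(),
                                  nn.Linear(512, n_actions))

    def forward(self, frames):          # frames: (batch, n_frames, H, W)
        return self.head(self.features(frames))

# Greedy action selection from a stack of recent frames (84x84 assumed).
net = RampMeteringQNet()
frames = torch.rand(1, 4, 84, 84)
action = net(frames).argmax(dim=1)      # 0 holds the red signal, 1 releases a vehicle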
Moving loads such as cars and trains are very useful sources of seismic waves, which can be analyzed to retrieve information on the seismic velocity of subsurface materials using the techniques of ambient noise seismology. This information is valuable for a variety of applications such as geotechnical characterization of the near-surface, seismic hazard evaluation, and groundwater monitoring. However, for such processes to converge quickly, data segments with appropriate noise energy should be selected. Distributed Acoustic Sensing (DAS) is a novel sensing technique that enables acquisition of these data at very high spatial and temporal resolution over tens of kilometers. One major challenge when utilizing DAS technology is the large volume of data it produces, which presents a significant big-data challenge in finding regions of useful energy. In this work, we present a highly scalable and efficient approach to processing real, complex DAS data by integrating physics knowledge acquired during a data-exploration phase with deep supervised learning to identify useful coherent surface waves generated by anthropogenic activity, a class of seismic waves that is abundant in these recordings and useful for geophysical imaging. Data exploration and training were done on 130 gigabytes (GB) of DAS measurements. Using parallel computing, we were able to run inference on an additional 170 GB of data (the equivalent of 10 days' worth of recordings) in less than 30 minutes. Our method provides interpretable patterns describing the interaction of ground-based human activities with the buried sensors.
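
The supervised-learning step sketched above boils down to a binary classifier over small DAS windows (time samples by channels) that flags coherent surface-wave energy; batched inference over such windows is what makes scanning hundreds of gigabytes tractable. The following PyTorch sketch is a hypothetical, minimal version of such a classifier; the window size, label convention, and architecture are assumptions, not the authors' model.

import torch
import torch.nn as nn

class DASWindowClassifier(nn.Module):
    """Sketch: classify a DAS window (time x channels) as containing
    coherent anthropogenic surface waves (1) or not (0)."""

    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
            nn.Flatten(),
            nn.Linear(32, 2),
        )

    def forward(self, window):          # window: (batch, 1, n_time, n_channels)
        return self.net(window)

# Dummy batch of 512 windows of 256 time samples x 128 channels (assumed sizes).
model = DASWindowClassifier().eval()
with torch.no_grad():
    logits = model(torch.rand(512, 1, 256, 128))
    keep = logits.argmax(dim=1) == 1    # windows flagged as useful energy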
High-quality labeled datasets play a crucial role in fueling the development of machine learning (ML), and in particular the development of deep learning (DL). However, since the emergence of the ImageNet dataset and the AlexNet model in 2012, the size of new open-source labeled vision datasets has remained roughly constant. Consequently, only a minority of publications in the computer vision community tackle supervised learning on datasets that are orders of magnitude larger than ImageNet. In this paper, we survey computer vision research domains that study the effects of such large datasets on model performance across different vision tasks. We summarize the community's current understanding of those effects and highlight some open questions related to training with massive datasets. In particular, we tackle: (a) the largest datasets currently used in computer vision research and the interesting takeaways from training on such datasets; (b) the effectiveness of pre-training on large datasets; (c) recent advancements and hurdles facing synthetic datasets; (d) an overview of double descent and sample non-monotonicity phenomena; and finally, (e) a brief discussion of lifelong/continual learning and how it fares compared to learning from huge labeled datasets in an offline setting. Overall, our findings are that research on optimization for deep learning focuses on perfecting the training routine and thus making DL models less data-hungry, while research on synthetic datasets aims to offset the cost of data labeling. However, for the time being, acquiring non-synthetic labeled data remains indispensable for boosting performance.
