
Toward Creating Subsurface Camera

Published by Maria Valero
Publication date: 2018
Research language: English





In this article, the framework and architecture of the Subsurface Camera (SAMERA) are envisioned and described for the first time. A SAMERA is a geophysical sensor network that senses and processes geophysical signals and computes a 3D subsurface image in situ and in real time. The basic mechanism is as follows: geophysical waves propagating through, reflected from, or refracted by the subsurface are recorded by a network of geophysical sensors, within which a 2D or 3D image is computed and stored; control software may be connected to this network to view the 2D/3D image and to adjust settings such as resolution, filtering, regularization, and other algorithm parameters. System prototypes based on seismic imaging have been designed and built. SAMERA technology is envisioned as a game changer that could transform many subsurface survey and monitoring applications, including oil/gas exploration and production, subsurface infrastructure and homeland security, wastewater and CO2 sequestration, and earthquake and volcano hazard monitoring. Creating SAMERA requires interdisciplinary collaboration across, and a transformation of, sensor networks, signal processing, distributed computing, and geophysical imaging.
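As a rough illustration of the in-network imaging idea described above, the following is a minimal sketch (not the authors' implementation) of a delay-and-sum, Kirchhoff-style migration kernel that a sensor node could run over locally recorded seismic traces. The constant-velocity medium, straight-ray travel times, and all function and variable names are assumptions made for illustration only.

```python
import numpy as np

def kirchhoff_migrate(traces, rx_positions, src_position, dt, velocity, grid_x, grid_z):
    """Very simplified Kirchhoff-style delay-and-sum migration.

    traces        : (n_receivers, n_samples) recorded seismic traces
    rx_positions  : (n_receivers, 2) receiver (x, z) coordinates
    src_position  : (2,) source (x, z) coordinates
    dt            : sample interval in seconds
    velocity      : constant propagation velocity (m/s), a strong simplification
    grid_x, grid_z: 1D arrays defining the image grid
    Returns a (len(grid_z), len(grid_x)) subsurface image.
    """
    n_rx, n_samples = traces.shape
    src_position = np.asarray(src_position, dtype=float)
    image = np.zeros((len(grid_z), len(grid_x)))
    for iz, z in enumerate(grid_z):
        for ix, x in enumerate(grid_x):
            point = np.array([x, z], dtype=float)
            # Travel time source -> image point -> receiver along straight rays
            t_src = np.linalg.norm(point - src_position) / velocity
            t_rx = np.linalg.norm(rx_positions - point, axis=1) / velocity
            samples = np.round((t_src + t_rx) / dt).astype(int)
            valid = samples < n_samples
            # Sum recorded amplitudes along the diffraction travel-time curve
            image[iz, ix] = traces[np.arange(n_rx)[valid], samples[valid]].sum()
    return image
```

In an actual SAMERA deployment the imaging would be distributed across nodes and far more sophisticated than this single-node, constant-velocity toy; the sketch only shows the kind of computation that would move from a data center into the sensor network itself.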


Read also

Sensitivity analysis plays an important role in searching for constitutive parameters (e.g., permeability) in subsurface flow simulations. The underlying mathematics is a dynamically constrained optimization problem. Traditional methods such as finite differences and forward sensitivity analysis incur a computational cost that grows linearly with the number of parameters times the number of cost functions. Discrete adjoint sensitivity analysis (SA) is gaining popularity due to its computational efficiency: the algorithm performs one forward run, storing snapshots of the state by checkpointing, followed by a backward run that numerically integrates the adjoint equation backward in time using the checkpointed data. The computational cost of this algorithm depends only on the number of cost functions, not on the number of parameters. The algorithm is therefore highly powerful when the parameter space is large; in our case of heterogeneous permeability, the number of parameters is proportional to the number of grid cells. The aim of this project is to implement the discrete adjoint sensitivity analysis method in parallel to solve realistic subsurface problems. To achieve this goal, we propose to implement the algorithm in parallel using components such as TSAdjoint and TAO. This paper deals with a large-scale subsurface flow inversion problem solved with the discrete adjoint method, which can effectively reduce the computational cost of sensitivity analysis.
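To make the forward-then-backward structure of discrete adjoint sensitivity analysis concrete, here is a minimal sketch on a scalar toy problem (forward Euler on du/dt = -p*u with cost J = u(T)^2). It is not the paper's parallel TSAdjoint/TAO implementation; the problem, step sizes, and names are assumptions chosen only to show that the backward sweep's cost does not grow with the number of parameters.

```python
import numpy as np

def forward(u0, p, dt, n_steps):
    """Forward Euler for du/dt = -p*u; the stored trajectory acts as checkpoints."""
    u = np.empty(n_steps + 1)
    u[0] = u0
    for n in range(n_steps):
        u[n + 1] = u[n] + dt * (-p * u[n])
    return u

def adjoint_gradient(u, p, dt):
    """Backward (adjoint) sweep for the cost J = u(T)**2, returning dJ/dp."""
    n_steps = len(u) - 1
    lam = 2.0 * u[-1]                  # dJ/du at the final time
    dJdp = 0.0
    for n in reversed(range(n_steps)):
        dJdp += lam * (-dt * u[n])     # explicit p-dependence of step n
        lam *= (1.0 - p * dt)          # propagate the adjoint variable backward
    return dJdp

# Sanity check against a central finite-difference derivative
u0, p, dt, n = 1.0, 0.7, 0.01, 100
grad_adj = adjoint_gradient(forward(u0, p, dt, n), p, dt)
eps = 1e-6
grad_fd = (forward(u0, p + eps, dt, n)[-1]**2 - forward(u0, p - eps, dt, n)[-1]**2) / (2 * eps)
print(grad_adj, grad_fd)  # the two gradients should agree closely
```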
Many major oceanographic internal wave observational programs of the last four decades are reanalyzed in order to characterize the variability of the deep ocean internal wavefield. The observations are discussed in the context of the universal spectral model proposed by Garrett and Munk. The Garrett and Munk model is a good description of wintertime conditions at Site-D on the continental rise north of the Gulf Stream. Elsewhere and at other times, significant deviations are noted in terms of amplitude, separability of the 2D vertical wavenumber-frequency spectrum, and departure from the model's functional form. Subtle geographic patterns are apparent in deviations from the high-frequency and high vertical-wavenumber power laws of the Garrett and Munk spectrum. Moreover, such deviations tend to co-vary: whiter frequency spectra are partnered with redder vertical wavenumber spectra. Attempts are made to interpret the variability in terms of the interplay between generation, propagation, and nonlinearity using a statistical radiative balance equation. This process frames major questions for future research, with the insight that such integrative studies could constrain both observationally and theoretically based interpretations.
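The diagnostics mentioned above (the 2D vertical wavenumber-frequency spectrum and its power-law slopes) can be illustrated with a short sketch. This is not taken from the paper; the input layout, spacings, and function names are assumptions, and real mooring analyses involve windowing, detrending, and careful band selection that are omitted here.

```python
import numpy as np

def wavenumber_frequency_spectrum(field, dz, dt):
    """2D power spectrum in (vertical wavenumber, frequency) space.

    field : (n_depth, n_time) array, e.g. horizontal velocity from a mooring
    dz, dt: vertical and temporal sample spacings
    """
    F = np.fft.fftshift(np.fft.fft2(field))
    power = np.abs(F) ** 2 / field.size
    m = np.fft.fftshift(np.fft.fftfreq(field.shape[0], d=dz))      # cycles per meter
    omega = np.fft.fftshift(np.fft.fftfreq(field.shape[1], d=dt))  # cycles per second
    return m, omega, power

def loglog_slope(x, y):
    """Least-squares slope of log(y) vs log(x): a rough power-law exponent."""
    mask = (x > 0) & (y > 0)
    return np.polyfit(np.log(x[mask]), np.log(y[mask]), 1)[0]
```

Comparing such slopes in the high-frequency and high vertical-wavenumber bands against the Garrett and Munk reference values is the kind of "whiter versus redder" diagnostic the abstract refers to.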
Quantitative analyses of karst spring discharge typically rely on physically based models, which are inherently uncertain. To improve the understanding of the mechanism of spring discharge fluctuation and the relationship between precipitation and spring discharge, three machine learning methods were developed to reduce the predictive errors of physically based groundwater models, simulate the discharge of the Longzici Springs karst area, and predict changes in the spring on the basis of long time-series precipitation monitoring and spring flow data from 1987 to 2018. The three machine learning methods included two artificial neural networks (ANNs), namely multilayer perceptron (MLP) and long short-term memory recurrent neural network (LSTM-RNN), and support vector regression (SVR). A normalization method was introduced for data preprocessing to make the three methods robust and computationally efficient. To compare and evaluate the capability of the three machine learning methods, the mean squared error (MSE), mean absolute error (MAE), and root-mean-square error (RMSE) were selected as the performance metrics. Simulations showed that MLP reduced MSE, MAE, and RMSE to 0.0010, 0.0254, and 0.0318, respectively, while LSTM-RNN reduced them to 0.0010, 0.0272, and 0.0329, and SVR reduced them to 0.0910, 0.1852, and 0.3017. Results indicated that MLP performed slightly better than LSTM-RNN, and both performed considerably better than SVR. Furthermore, ANNs were shown to be the preferable machine learning methods for simulating and predicting karst spring discharge.
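A minimal sketch of the normalize-train-evaluate workflow described in that abstract is shown below, using scikit-learn's MLPRegressor for the MLP case. The arrays, network size, and train/test split are placeholders, not the paper's data or configuration; they only show how the MSE, MAE, and RMSE metrics would be computed on normalized precipitation-discharge data.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import MinMaxScaler
from sklearn.metrics import mean_squared_error, mean_absolute_error

# Placeholder arrays standing in for lagged precipitation features and spring discharge
X = np.random.rand(372, 3)
y = np.random.rand(372)

# Normalization for preprocessing, as mentioned in the abstract
x_scaler, y_scaler = MinMaxScaler(), MinMaxScaler()
Xn = x_scaler.fit_transform(X)
yn = y_scaler.fit_transform(y.reshape(-1, 1)).ravel()

# Simple chronological split and an MLP regressor (hypothetical architecture)
split = int(0.8 * len(Xn))
mlp = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=2000, random_state=0)
mlp.fit(Xn[:split], yn[:split])
pred = mlp.predict(Xn[split:])

# The three performance metrics used to compare MLP, LSTM-RNN, and SVR
mse = mean_squared_error(yn[split:], pred)
mae = mean_absolute_error(yn[split:], pred)
rmse = np.sqrt(mse)
print(f"MSE={mse:.4f}  MAE={mae:.4f}  RMSE={rmse:.4f}")
```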
The mechanical properties of single fibres are highly important in the paper production process for producing and adjusting properties for the favoured fields of application. The description of mechanical properties is usually based on linearized assumptions and is not resolved locally or spatially in three dimensions. Tensile tests or nanoindentation experiments on cellulosic fibres usually yield only one mechanical parameter, such as elastic modulus or hardness. To obtain a more detailed mechanical picture of the fibre, it is crucial to determine mechanical properties as a function of depth. To this end, we discuss an atomic force microscopy (AFM) based approach to examine the local stiffness as a function of indentation depth via static force-distance curves. This method has been applied to linter fibres (extracted from a finished paper sheet) as well as to natural raw cotton fibres to better understand the influence of the pulp treatment process in paper production on the mechanical properties. Both types of fibres were characterised in dry and wet conditions with respect to alterations in their mechanical properties. Subsurface imaging revealed which wall in the fibre structure protects the fibre against mechanical loading. Via a combined 3D display, a spatially resolved mechanical map of the fibre interior near the surface can be established. Additionally, we labelled fibres with carbohydrate-binding modules tagged with fluorescent proteins to compare the AFM results with fluorescence confocal laser scanning microscopy imaging. Nanomechanical subsurface imaging is thus a tool to better understand the mechanical behaviour of cellulosic fibres, which have a complex, hierarchical structure.
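The core quantity in that approach, local stiffness as a function of indentation depth, can be sketched from a single static force-distance curve as below. This is an illustrative simplification, not the authors' analysis pipeline: the smoothing, the straight subtraction used for indentation, and all names are assumptions.

```python
import numpy as np

def local_stiffness_vs_depth(z_piezo, deflection, spring_constant, smooth_pts=5):
    """Local stiffness dF/d(indentation) from one AFM approach force curve.

    z_piezo         : piezo displacement (m), monotonic along the approach
    deflection      : cantilever deflection (m)
    spring_constant : cantilever spring constant (N/m)
    Returns (indentation_depth, local_stiffness).
    """
    force = spring_constant * deflection
    indentation = z_piezo - deflection          # sample deformation
    # Light smoothing before differentiation to suppress measurement noise
    kernel = np.ones(smooth_pts) / smooth_pts
    force_smooth = np.convolve(force, kernel, mode="same")
    stiffness = np.gradient(force_smooth, indentation)
    return indentation, stiffness
```

Repeating this over a grid of surface positions and stacking the depth profiles is what produces the kind of spatially resolved, near-surface 3D mechanical map the abstract describes.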
In this work, we propose using camera arrays coupled with coherent illumination as an effective method of improving spatial resolution in long-distance imaging by a factor of ten and beyond. Recent advances in ptychography have demonstrated that one can image beyond the diffraction limit of the objective lens in a microscope. We demonstrate a similar imaging system to image beyond the diffraction limit in long-range imaging. We emulate a camera array with a single camera attached to an X-Y translation stage. We show that an appropriate phase-retrieval-based reconstruction algorithm can effectively recover the lost high-resolution details from the multiple low-resolution acquired images. We analyze the effects of noise, the required degree of image overlap, and the effect of increasing synthetic aperture size on the reconstructed image quality. We show that coherent camera arrays have the potential to greatly improve imaging performance. Our simulations show that resolution gains of 10x and more are achievable. Furthermore, experimental results from our proof-of-concept systems show resolution gains of 4x-7x for real scenes. Finally, we introduce and analyze in simulation a new strategy to capture macroscopic Fourier ptychography images in a single snapshot, albeit using a camera array.
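The phase-retrieval reconstruction underlying Fourier ptychography can be sketched as an alternating-projection loop: for each camera position, the corresponding sub-aperture of the high-resolution spectrum is propagated to the image plane, the measured amplitude is enforced while the estimated phase is kept, and the result is written back into the spectrum. The sketch below is a simplified illustration, not the authors' algorithm; it assumes the low-resolution intensities have been resampled to the full reconstruction grid, and the aperture masks and function name are hypothetical.

```python
import numpy as np

def fourier_ptychography_update(obj_spectrum, measured_intensities, apertures, n_iters=20):
    """Alternating-projection reconstruction for a coherent camera array.

    obj_spectrum         : initial guess of the high-resolution Fourier spectrum (2D complex)
    measured_intensities : list of low-resolution intensity images (same grid as obj_spectrum)
    apertures            : list of boolean masks selecting each camera's sub-aperture
    """
    for _ in range(n_iters):
        for intensity, mask in zip(measured_intensities, apertures):
            # Field predicted by the current estimate through this camera's aperture
            sub_spectrum = np.where(mask, obj_spectrum, 0)
            field = np.fft.ifft2(np.fft.ifftshift(sub_spectrum))
            # Enforce the measured amplitude, keep the estimated phase
            field = np.sqrt(intensity) * np.exp(1j * np.angle(field))
            updated = np.fft.fftshift(np.fft.fft2(field))
            # Write the corrected values back into the sub-aperture region
            obj_spectrum = np.where(mask, updated, obj_spectrum)
    return obj_spectrum
```

Overlap between neighbouring apertures is what stitches the sub-spectra into one consistent synthetic aperture, which is why the abstract analyzes the required degree of image overlap.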