Subsurface applications, including geothermal energy, geological carbon sequestration, and oil and gas, typically involve maximizing either the extraction of energy or the storage of fluids. Characterizing the subsurface is extremely complex due to heterogeneity and anisotropy. Because of this complexity, the subsurface parameters are uncertain and must be estimated from multiple diverse and fragmented data streams. In this paper, we present a non-intrusive sequential inversion framework for integrating data from geophysical and flow sources to constrain subsurface discrete fracture networks (DFNs). In this approach, we first estimate bounds on the statistics of the DFN fracture orientations using microseismic data. These bounds are estimated through a combination of focal-mechanism analysis (a physics-based approach) and clustering analysis (a statistical approach) of the seismic data. The fracture lengths are then constrained using the flow data. The efficacy of this multi-physics sequential inversion is demonstrated on a representative synthetic example.
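As a minimal sketch of the statistical half of this step, the snippet below clusters synthetic strike/dip pairs with k-means and reads off rough per-set orientation bounds; the two-cluster choice, the synthetic data, and the two-sigma bounds are illustrative assumptions, not the paper's actual procedure.

```python
import numpy as np
from sklearn.cluster import KMeans

# Hypothetical microseismic fracture orientations (strike, dip) in degrees,
# drawn from two synthetic fracture sets for illustration only.
rng = np.random.default_rng(0)
orientations = np.vstack([
    rng.normal([ 45.0, 80.0], 5.0, size=(200, 2)),   # fracture set 1
    rng.normal([120.0, 60.0], 5.0, size=(200, 2)),   # fracture set 2
])

labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(orientations)
for k in range(2):
    members = orientations[labels == k]
    mu, sigma = members.mean(axis=0), members.std(axis=0)
    # mu +/- 2*sigma gives rough bounds on strike/dip statistics per set,
    # which could then feed a DFN generator.
    print(f"set {k}: mean strike/dip = {mu}, bounds ~ {mu - 2*sigma} .. {mu + 2*sigma}")
```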
In this paper, five different approaches for reduced-order modeling of brittle fracture in geomaterials, specifically concrete, are presented and compared. Four of the five methods rely on machine learning (ML) algorithms to approximate important aspects of the brittle fracture problem. In addition to the ML algorithms, each method incorporates different physics-based assumptions in order to reduce the computational complexity while preserving the physics as much as possible. This work focuses specifically on using the ML approaches to model a 2D concrete sample with 20 preexisting cracks under low-strain-rate, pure tensile loading. A high-fidelity finite element-discrete element model is used to produce both a training dataset of 150 simulations and an additional 35 simulations for validation. Results from the ML approaches are compared directly against the results from the high-fidelity model. The strengths and weaknesses of each approach are discussed, and the most important conclusion is that a combination of physics-informed and data-driven features is necessary for emulating the physics of crack propagation, interaction, and coalescence. All of the models presented here have runtimes that are orders of magnitude faster than the original high-fidelity model and pave the way for developing accurate reduced-order models that could inform larger length-scale models with important sub-scale physics that often cannot be accounted for due to computational cost.
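As a sketch of the purely data-driven ingredient only, the snippet below fits a random-forest regressor on a 150-sample feature matrix and scores it on a held-out split; the feature names, synthetic responses, and choice of regressor are assumptions for illustration and do not correspond to any of the five approaches.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

# Hypothetical features per crack configuration, e.g. crack length,
# orientation, tip-to-tip distance, and a local stress proxy; the targets
# below are a synthetic damage metric, not FDEM output.
rng = np.random.default_rng(1)
X = rng.random((150, 4))
y = X @ np.array([0.5, -0.2, 0.8, 0.1]) + 0.05 * rng.standard_normal(150)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=1)
model = RandomForestRegressor(n_estimators=200, random_state=1).fit(X_tr, y_tr)
print("held-out R^2:", model.score(X_te, y_te))
```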
The main purpose of this work is to simulate two-phase flow in the form of immiscible displacement through anisotropic, three-dimensional (3D) discrete fracture networks (DFN). The DFNs considered are either artificially generated from a general distribution function or conditioned on measured data from deep geological investigations. We introduce several modifications to invasion percolation, yielding a modified invasion percolation (MIP) approach that incorporates fracture inclinations, intersection lines, and the hydraulic path length inside the fractures. Additionally, a trapping algorithm is implemented that forbids any advance of the invading fluid into a region where the defending fluid is completely encircled by the invader and has no escape route. We study invasion, saturation, and flow through artificial fracture networks of varying anisotropy and size, and finally compare our findings to well-studied, conditioned fracture networks.
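A minimal graph-based sketch of invasion percolation with a trapping rule, assuming the fracture network is reduced to nodes connected by edges carrying capillary entry pressures; the inclination and hydraulic-path-length weightings of the MIP are deliberately omitted.

```python
import heapq

def escapes(adj, invaded, start, outlet):
    # Can the defending fluid at `start` still reach the outlet
    # through non-invaded nodes? (depth-first search)
    seen, stack = {start}, [start]
    while stack:
        n = stack.pop()
        if n == outlet:
            return True
        for nb in adj[n]:
            if nb not in invaded and nb not in seen:
                seen.add(nb)
                stack.append(nb)
    return False

def invade(adj, entry_p, inlet, outlet):
    """adj: node -> set of neighbours; entry_p: frozenset({a, b}) ->
    capillary entry pressure of that edge (hypothetical inputs)."""
    invaded, frontier = {inlet}, []
    for nb in adj[inlet]:
        heapq.heappush(frontier, (entry_p[frozenset((inlet, nb))], nb))
    while frontier:
        # Always invade through the accessible edge with the lowest
        # entry pressure (the defining rule of invasion percolation).
        p, node = heapq.heappop(frontier)
        if node in invaded:
            continue
        if node == outlet:
            break  # breakthrough: invader spans inlet to outlet
        if not escapes(adj, invaded, node, outlet):
            continue  # trapping rule: encircled defender is not displaced
        invaded.add(node)
        for nb in adj[node]:
            if nb not in invaded:
                heapq.heappush(frontier, (entry_p[frozenset((node, nb))], nb))
    return invaded
```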
The goal of this paper is to assess the utility of reduced-order models (ROMs) developed from 3D physics-based models for predicting the transient thermal power output of an enhanced geothermal reservoir while explicitly accounting for uncertainties in the subsurface system and site-specific details. Numerical simulations are performed based on Latin Hypercube Sampling (LHS) of model inputs drawn from uniform probability distributions. From these simulations, the key sensitive parameters are identified as fracture zone permeability, well skin factor, bottom-hole pressure, and injection flow rate; the ROM inputs are based on these parameters. The ROMs are then used to evaluate the influence of subsurface attributes on thermal power production curves, and are compared with field data and the detailed physics-based numerical simulations. We propose three ROMs with different levels of model parsimony, each describing the key and essential features of the power production curves. ROM-1 is relatively parsimonious and accurately reproduces the power output of the numerical simulations at low permeabilities, as well as certain features of the field-scale data. ROM-2 is more complex than ROM-1 but accurately describes the field data; at higher permeabilities it reproduces the numerical results better than ROM-1, although it deviates considerably at low fracture zone permeabilities. ROM-3 combines the best aspects of ROM-1 and ROM-2 and provides a middle ground in model parsimony, describing various features of both the numerical simulations and the field data. Through this workflow, we demonstrate that these simple ROMs capture various complex features of the power production curves of the Fenton Hill HDR system. For typical EGS applications, ROM-2 and ROM-3 outperform ROM-1.
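A short sketch of the sampling step, assuming SciPy's quasi-Monte Carlo module; the four parameters are those identified as sensitive in the abstract, but the bounds below are placeholder values, not the study's calibrated ranges.

```python
from scipy.stats import qmc

# Latin Hypercube Sampling over the four sensitive inputs. Bounds are
# hypothetical placeholders in roughly SI-like units.
names = ["fracture_zone_permeability", "well_skin_factor",
         "bottom_hole_pressure", "injection_flow_rate"]
l_bounds = [1e-16, 0.0, 10.0e6, 1.0]
u_bounds = [1e-13, 10.0, 30.0e6, 10.0]

sampler = qmc.LatinHypercube(d=4, seed=0)
inputs = qmc.scale(sampler.random(n=200), l_bounds, u_bounds)
# Each row of `inputs` defines one forward simulation whose power output
# would serve as a training point for the ROMs.
```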
We consider financial networks where banks are connected by contracts such as debts or credit default swaps. We study the clearing problem in these systems: we want to know which banks end up in default, and what portion of their liabilities these defaulting banks can fulfill. We analyze these networks in a sequential model where banks announce their defaults one at a time and the system evolves step by step. We first consider the reversible variant of this model, where banks may return from default. We show that the stabilization time in this model can depend heavily on the ordering of announcements; however, we also show that there are systems where, for any choice of ordering, the process lasts for an exponential number of steps before eventual stabilization. We also show that finding the ordering with the smallest (or largest) number of banks ending up in default is an NP-hard problem. Furthermore, we prove that defaulting early can be an advantageous strategy for banks in some cases, and that, in general, finding the best time for a default announcement is NP-hard. Finally, we discuss how changing some properties of this setting affects the stabilization time of the process, and then use these techniques to devise a monotone model of the systems, which ensures that every network eventually stabilizes.
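For concreteness, the clearing problem is commonly formalized as a fixed point of proportional payments (an Eisenberg-Noe-style model); the sketch below computes such a clearing vector by fictitious-default iteration. This generic formalization is assumed for illustration and is not necessarily the sequential model analyzed here.

```python
import numpy as np

def clearing_vector(L, e, tol=1e-10, max_iter=10_000):
    """L[i, j]: liability of bank i to bank j; e[i]: external assets of
    bank i (both hypothetical example data). Iterates proportional
    payments to a fixed point."""
    pbar = L.sum(axis=1)                            # total obligations
    Pi = np.divide(L, pbar[:, None], out=np.zeros_like(L),
                   where=pbar[:, None] > 0)         # relative liabilities
    p = pbar.copy()
    for _ in range(max_iter):
        assets = e + Pi.T @ p                       # inflows at payments p
        p_new = np.minimum(pbar, assets)            # pay in full or pro rata
        if np.max(np.abs(p_new - p)) < tol:
            break
        p = p_new
    return p  # banks with p[i] < pbar[i] end up in default

L = np.array([[0., 10.], [5., 0.]])  # hypothetical 2-bank example
e = np.array([3., 8.])
print(clearing_vector(L, e))         # bank 0 defaults, paying 8 of 10
```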
This paper was prompted by numerical experiments we performed in which algorithms already available in the literature (DVS-BDDM) yielded accelerations (speedups) many times larger than the number of processors used: more than seventy times larger in some examples already treated, and probably often much larger. Based on these outstanding results, we show here that taking the standard ideal speedup, equal to the number of processors, as the goal has significantly limited the performance sought by research on domain decomposition methods (DDM) and has hindered its development thus far. Hence, an improved theory, in which the speedup goal is based on the divide-and-conquer algorithmic paradigm frequently considered the leitmotiv of domain decomposition methods, is proposed as a suitable setting for the DDM of the future.
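For reference, the classical definitions at stake are speedup S_p = T_1 / T_p and parallel efficiency E_p = S_p / p, with S_p > p the superlinear regime reported above; the snippet below evaluates them for hypothetical timings.

```python
def speedup_and_efficiency(t_serial, t_parallel, n_procs):
    """Classical speedup S = T1/Tp and efficiency E = S/p; S > p is the
    superlinear regime. All timings used below are hypothetical."""
    s = t_serial / t_parallel
    return s, s / n_procs

s, e = speedup_and_efficiency(t_serial=3600.0, t_parallel=0.7, n_procs=64)
print(f"speedup = {s:.0f}, efficiency = {e:.1f}")  # speedup far exceeds 64 processors
```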