
Numerical Solution of Inverse Problems by Weak Adversarial Networks

Posted by Yaohua Zang
Publication date: 2020
Research field: Informatics Engineering
Paper language: English





We consider a weak adversarial network approach to numerically solve a class of inverse problems, including electrical impedance tomography and dynamic electrical impedance tomography problems. We leverage the weak formulation of the PDE in the given inverse problem, and parameterize the solution and the test function as deep neural networks. The weak formulation and the boundary conditions induce a minimax problem of a saddle function of the network parameters. As the parameters are alternately updated, the network gradually approximates the solution of the inverse problem. We provide theoretical justifications on the convergence of the proposed algorithm. Our method is completely mesh-free without any spatial discretization, and is particularly suitable for problems with high dimensionality and low regularity on solutions. Numerical experiments on a variety of test inverse problems demonstrate the promising accuracy and efficiency of our approach.
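A minimal sketch of the alternating minimax training that the abstract describes, written here for a toy two-dimensional problem with the conductivity fixed to one. The network sizes, the Monte Carlo sampling of collocation points, the boundary penalty weight beta, and the toy boundary data are illustrative assumptions, not the authors' configuration.

import torch
import torch.nn as nn

class MLP(nn.Module):
    def __init__(self, width=32, depth=3):
        super().__init__()
        layers, d = [], 2
        for _ in range(depth):
            layers += [nn.Linear(d, width), nn.Tanh()]
            d = width
        layers += [nn.Linear(d, 1)]
        self.net = nn.Sequential(*layers)

    def forward(self, x):
        return self.net(x)

u_net = MLP()    # primal network: approximates the PDE solution u
phi_net = MLP()  # adversarial network: approximates the test function phi
opt_min = torch.optim.Adam(u_net.parameters(), lr=1e-3)
opt_max = torch.optim.Adam(phi_net.parameters(), lr=1e-3)

def weak_residual(x):
    # Monte Carlo estimate of the weak-form residual <A(u), phi> = int grad(u).grad(phi) dx
    # (conductivity sigma = 1 in this toy version), normalized by the test-function norm.
    x = x.requires_grad_(True)
    u, phi = u_net(x), phi_net(x)
    grad_u = torch.autograd.grad(u.sum(), x, create_graph=True)[0]
    grad_phi = torch.autograd.grad(phi.sum(), x, create_graph=True)[0]
    a = (grad_u * grad_phi).sum(dim=1).mean()
    norm_phi = (grad_phi ** 2).sum(dim=1).mean()
    return a ** 2 / (norm_phi + 1e-8)

beta = 100.0  # weight of the boundary-data mismatch (assumed value)
for it in range(5000):
    x_int = torch.rand(256, 2)                    # interior collocation points in the unit square
    x_bdy = torch.rand(64, 2); x_bdy[:, 0] = 0.0  # toy piece of the boundary, with data u = 0 there
    # maximization step: make the test function as adversarial as possible
    loss_max = -weak_residual(x_int)
    opt_max.zero_grad(); loss_max.backward(); opt_max.step()
    # minimization step: fit the weak form and the boundary condition
    loss_min = weak_residual(x_int) + beta * (u_net(x_bdy) ** 2).mean()
    opt_min.zero_grad(); loss_min.backward(); opt_min.step()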




Read also

In this work, we describe a new approach that uses deep neural networks (DNN) to obtain regularization parameters for solving inverse problems. We consider a supervised learning approach, where a network is trained to approximate the mapping from observation data to regularization parameters. Once the network is trained, regularization parameters for newly obtained data can be computed by efficient forward propagation of the DNN. We show that a wide variety of regularization functionals, forward models, and noise models may be considered. The network-obtained regularization parameters can be computed more efficiently and may even lead to more accurate solutions compared to existing regularization parameter selection methods. We emphasize that the key advantage of using DNNs for learning regularization parameters, compared to previous works on learning via optimal experimental design or empirical Bayes risk minimization, is greater generalizability. That is, rather than computing one set of parameters that is optimal with respect to one particular design objective, DNN-computed regularization parameters are tailored to the specific features or properties of the newly observed data. Thus, our approach may better handle cases where the observation is not a close representation of the training set. Furthermore, we avoid the need for expensive and challenging bilevel optimization methods as utilized in other existing training approaches. Numerical results demonstrate the potential of using DNNs to learn regularization parameters.
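A minimal sketch of the supervised setup described in the preceding abstract: a small network maps an observation vector to a positive scalar regularization parameter, and is trained on pairs of simulated observations and known good parameters. The observation length m, the architecture, and the training pairs are assumptions for illustration, not the authors' design.

import torch
import torch.nn as nn

m = 128  # length of an observation vector (assumed)
param_net = nn.Sequential(
    nn.Linear(m, 64), nn.ReLU(),
    nn.Linear(64, 64), nn.ReLU(),
    nn.Linear(64, 1), nn.Softplus(),  # Softplus keeps the predicted parameter positive
)
opt = torch.optim.Adam(param_net.parameters(), lr=1e-3)

def train(obs, alphas, epochs=200):
    # obs: (N, m) simulated observations; alphas: (N, 1) regularization parameters known to work well
    for _ in range(epochs):
        loss = ((param_net(obs) - alphas) ** 2).mean()
        opt.zero_grad(); loss.backward(); opt.step()

# At inference time a single forward pass gives a data-adapted parameter:
# alpha_new = param_net(y_new), then solve the regularized inverse problem with alpha_new.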
Sticky Brownian motion is the simplest example of a diffusion process that can spend finite time both in the interior of a domain and on its boundary. It arises in various applications such as in biology, materials science, and finance. This article spotlights the unusual behavior of sticky Brownian motions from the perspective of applied mathematics, and provides tools to efficiently simulate them. We show that a sticky Brownian motion arises naturally for a particle diffusing on $\mathbb{R}_+$ with a strong, short-ranged potential energy near the origin. This is a limit that accurately models mesoscale particles, those with diameters $\approx 100$nm-$10\mu$m, which form the building blocks for many common materials. We introduce a simple and intuitive sticky random walk to simulate sticky Brownian motion, which also gives insight into its unusual properties. In parameter regimes of practical interest, we show this sticky random walk is two to five orders of magnitude faster than alternative methods to simulate a sticky Brownian motion. We outline possible steps to extend this method towards simulating multi-dimensional sticky diffusions.
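To make the idea of a sticky walk concrete, here is a toy lattice walk on the half-line that spends a non-vanishing fraction of time exactly at the boundary. The boundary holding rule (a free parameter p_stay) is only a placeholder; the paper derives its boundary behavior from a stickiness parameter, which this sketch does not reproduce.

import random

def sticky_walk(n_steps, h=0.01, p_stay=0.3, k0=50):
    # Walk on the lattice {0, h, 2h, ...}; k is the integer site index, so the boundary test is exact.
    k, path = k0, [k0 * h]
    for _ in range(n_steps):
        if k == 0:   # on the boundary: remain stuck with probability p_stay
            k = 0 if random.random() < p_stay else 1
        else:        # in the interior: symmetric unit lattice step
            k += 1 if random.random() < 0.5 else -1
        path.append(k * h)
    return path

# Fraction of time spent exactly at zero -- strictly positive, unlike ordinary Brownian motion:
path = sticky_walk(100_000)
occupation_at_zero = sum(1 for x in path if x == 0.0) / len(path)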
The majority of model-based learned image reconstruction methods in medical imaging have been limited to uniform domains, such as pixelated images. If the underlying model is solved on nonuniform meshes, arising from a finite element method typical for nonlinear inverse problems, interpolation and embeddings are needed. To overcome this, we present a flexible framework to extend model-based learning directly to nonuniform meshes, by interpreting the mesh as a graph and formulating our network architectures using graph convolutional neural networks. This gives rise to the proposed iterative Graph Convolutional Newton-type Method (GCNM), which includes the forward model in the solution of the inverse problem, while all updates are directly computed by the network on the problem specific mesh. We present results for Electrical Impedance Tomography, a severely ill-posed nonlinear inverse problem that is frequently solved via optimization-based methods, where the forward problem is solved by finite element methods. Results for absolute EIT imaging are compared to standard iterative methods as well as a graph residual network. We show that the GCNM has strong generalizability to different domain shapes and meshes, out of distribution data as well as experimental data, from purely simulated training data and without transfer training.
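A rough sketch of the iteration pattern the preceding abstract describes: a classical Newton-type update is computed from the forward model, and a graph network defined on the finite element mesh (interpreted as a graph through its adjacency) maps the current estimate and that update to the next estimate. The single graph convolution layer and the gauss_newton_update helper referenced in the comments are illustrative placeholders, not the paper's architecture.

import torch
import torch.nn as nn

class SimpleGraphConv(nn.Module):
    def __init__(self, in_feats, out_feats):
        super().__init__()
        self.lin = nn.Linear(in_feats, out_feats)

    def forward(self, x, adj_norm):
        # adj_norm: row-normalized (dense) adjacency of the mesh graph, shape (n_nodes, n_nodes)
        return torch.relu(self.lin(adj_norm @ x))

class GCNMStep(nn.Module):
    def __init__(self, hidden=32):
        super().__init__()
        self.gc = SimpleGraphConv(2, hidden)  # node features: [sigma_k, delta_sigma_k]
        self.out = nn.Linear(hidden, 1)

    def forward(self, sigma_k, delta_k, adj_norm):
        h = self.gc(torch.stack([sigma_k, delta_k], dim=1), adj_norm)
        return sigma_k + self.out(h).squeeze(1)  # residual update of the conductivity on the mesh nodes

# Intended use, with gauss_newton_update() an assumed helper built on the FEM forward model:
#   step = GCNMStep()
#   for k in range(n_iters):
#       delta_k = gauss_newton_update(sigma_k, measured_data)
#       sigma_k = step(sigma_k, delta_k, adj_norm)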
We consider a size-structured model for cell division and address the question of determining the division (birth) rate from the measured stable size distribution of the population. We propose a new regularization technique based on a filtering approach. We prove convergence of the algorithm and validate the theoretical results by implementing numerical simulations, based on classical techniques. We compare the results for direct and inverse problems, for the filtering method and for the quasi-reversibility method proposed in [Perthame-Zubelli].
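For orientation, in the equal-mitosis model problem with unit growth rate that is often used in this setting (the notation below is ours and may differ from the paper's), the stable size distribution $N$ and the Malthus exponent $\lambda_0$ satisfy

\[
  \partial_x N(x) + \big(\lambda_0 + B(x)\big)\, N(x) = 4\, B(2x)\, N(2x), \qquad N(0)=0, \quad N \ge 0, \quad \int_0^\infty N(x)\,dx = 1,
\]

and the inverse problem is to recover the division rate $B$ from a (noisy) measurement of $N$.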
The characteristic feature of inverse problems is their instability with respect to data perturbations. In order to stabilize the inversion process, regularization methods have to be developed and applied. In this work we introduce and analyze the concept of filtered diagonal frame decomposition, which extends the standard filtered singular value decomposition to the frame case. Frames as generalized singular systems allow better adaptation to a given class of potential solutions. In this paper, we show that the filtered diagonal frame decomposition yields a convergent regularization method. Moreover, we derive convergence rates under source type conditions and prove order optimality under the assumption that the considered frame is a Riesz basis.
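For concreteness, one common way a filtered diagonal frame decomposition is written (the notation here is chosen for illustration and may differ from the paper's): given frames $(u_\lambda)$ in data space and $(v_\lambda)$ in signal space with quasi-singular values $\kappa_\lambda$ satisfying $A^* u_\lambda = \kappa_\lambda v_\lambda$, the filtered reconstruction from data $y$ replaces the unstable factors $1/\kappa_\lambda$ by regularized filters $g_\alpha(\kappa_\lambda) \approx 1/\kappa_\lambda$:

\[
  x_\alpha \;=\; \sum_{\lambda \in \Lambda} g_\alpha(\kappa_\lambda)\, \langle y, u_\lambda \rangle\, \bar{v}_\lambda ,
\]

where $(\bar{v}_\lambda)$ is a dual frame of $(v_\lambda)$; for the singular value decomposition this reduces to classical filtered SVD regularization.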
