
A Sampling Framework for Solving Physics-driven Inverse Source Problems

Published by: John Murray-Bruce
Publication date: 2017
Research field: Information engineering
Paper language: English





Partial differential equations are central to describing many physical phenomena. In many applications these phenomena are observed through a sensor network, with the aim of inferring their underlying properties. Leveraging certain results in sampling and approximation theory, we present a new framework for solving a class of inverse source problems for physical fields governed by linear partial differential equations. Specifically, we demonstrate that the unknown field sources can be recovered from a sequence of so-called generalised measurements by using multidimensional frequency estimation techniques. Next we show that, for physics-driven fields, this sequence of generalised measurements can be estimated by computing a linear weighted sum of the sensor measurements, where the exact weights of the sums are those that reproduce multidimensional exponentials when used to linearly combine translates of a particular prototype function related to the Green's function of the underlying field. Explicit formulae are then derived for the sequence of weights that map the sensor samples to the exact sequence of generalised measurements when the Green's function satisfies the generalised Strang-Fix condition; otherwise, the same mapping yields a close approximation of the generalised measurements. Based on this new framework we develop practical, noise-robust sensor network strategies for solving the inverse source problem, and we present numerical simulation results to verify their performance.
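As a concrete illustration of the recovery step described above, the sketch below (not the authors' code; the one-dimensional setting, the function names and the synthetic data are simplifying assumptions) applies Prony's method, a basic frequency-estimation technique, to a sequence of generalised measurements that, for a field driven by point sources, takes a sum-of-exponentials form. The preceding step, mapping raw sensor samples to these generalised measurements through the exponential-reproducing weights, is omitted.

```python
import numpy as np

# Hypothetical illustration: recover point-source parameters from generalised
# measurements Q[k] of the sum-of-exponentials form
#   Q[k] = sum_m c_m * u_m**k,  with u_m = exp(-1j * x_m),
# using Prony's method as a 1-D stand-in for the multidimensional
# frequency-estimation step described in the abstract.

def prony_recover(Q, M):
    """Estimate M exponential parameters (u_m, c_m) from measurements Q[k]."""
    K = len(Q)
    # Annihilating filter: the filter h of length M+1 satisfies
    # sum_j h[j] * Q[k - j] = 0, so it spans the null space of this matrix.
    T = np.array([[Q[i + M - j] for j in range(M + 1)] for i in range(K - M)])
    _, _, Vh = np.linalg.svd(T)
    h = np.conj(Vh[-1])             # null vector = annihilating filter
    u = np.roots(h)                 # exponential bases u_m
    # Least-squares fit of the amplitudes c_m on a Vandermonde system.
    V = np.vander(u, K, increasing=True).T
    c, *_ = np.linalg.lstsq(V, Q, rcond=None)
    return u, c

# Synthetic check with two hypothetical sources at x = 0.7 and x = 2.1.
x_true = np.array([0.7, 2.1]); c_true = np.array([1.0, 0.5])
k = np.arange(8)
Q = (c_true * np.exp(-1j * np.outer(k, x_true))).sum(axis=1)
u_est, c_est = prony_recover(Q, M=2)
print(np.sort(-np.angle(u_est)))    # recovered source locations
```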




Read also

Compressed sensing (CS) is about recovering a structured signal from its under-determined linear measurements. Starting from sparsity, recovery methods have steadily moved towards more complex structures. Emerging machine learning tools such as generative functions that are based on neural networks are able to learn general complex structures from training data. This makes them potentially powerful tools for designing CS algorithms. Consider a desired class of signals ${\cal Q}$, ${\cal Q}\subset{\mathbb R}^n$, and a corresponding generative function $g:{\cal U}^k\rightarrow {\mathbb R}^n$, ${\cal U}\subset {\mathbb R}$, such that $\sup_{{\bf x}\in {\cal Q}}\min_{{\bf u}\in{\cal U}^k}{1\over \sqrt{n}}\|g({\bf u})-{\bf x}\|\leq \delta$. A recovery method based on $g$ seeks $g({\bf u})$ with minimum measurement error. In this paper, the performance of such a recovery method is studied, under both noisy and noiseless measurements. In the noiseless case, roughly speaking, it is proven that, as $k$ and $n$ grow without bound and $\delta$ converges to zero, if the number of measurements ($m$) is larger than the input dimension of the generative model ($k$), then asymptotically, almost lossless recovery is possible. Furthermore, the performance of an efficient iterative algorithm based on projected gradient descent is studied. In this case, an auto-encoder is used to define and enforce the source structure at the projection step. The auto-encoder is defined by encoder and decoder (generative) functions $f:{\mathbb R}^n\to{\cal U}^k$ and $g:{\cal U}^k\to{\mathbb R}^n$, respectively. We theoretically prove that, roughly, given $m>40k\log{1\over \delta}$ measurements, such an algorithm converges to the vicinity of the desired result, even in the presence of additive white Gaussian noise. Numerical results exploring the effectiveness of the proposed method are presented.
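As a rough illustration of the projected gradient descent algorithm analysed above, the sketch below (hypothetical names; the encoder $f$ and decoder $g$ stand in for trained networks) alternates a gradient step on the measurement error with an approximate projection onto the signal class obtained by encoding and then decoding.

```python
import numpy as np

# Hypothetical sketch of projected gradient descent with an auto-encoder
# projection, in the spirit of the algorithm analysed in the abstract.
# A (m x n measurement matrix), y (measurements), f (encoder R^n -> U^k)
# and g (decoder U^k -> R^n) are assumed given; f and g are placeholders
# for trained networks.

def projected_gradient_cs(y, A, f, g, step=0.1, iters=200):
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        # Gradient step on the data-fidelity term (1/2)||A x - y||^2.
        x = x - step * A.T @ (A @ x - y)
        # Projection step: encode then decode, approximating the projection
        # of x onto the range of the generative model g.
        x = g(f(x))
    return x
```

How well g(f(.)) approximates a true projection and the choice of step size govern the behaviour in practice; the abstract's guarantee applies roughly once the number of measurements exceeds $40k\log{1\over \delta}$.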
Despite the great promise of the physics-informed neural networks (PINNs) in solving forward and inverse problems, several technical challenges are present as roadblocks for more complex and realistic applications. First, most existing PINNs are based on point-wise formulation with fully-connected networks to learn continuous functions, which suffer from poor scalability and hard boundary enforcement. Second, the infinite search space over-complicates the non-convex optimization for network training. Third, although the convolutional neural network (CNN)-based discrete learning can significantly improve training efficiency, CNNs struggle to handle irregular geometries with unstructured meshes. To properly address these challenges, we present a novel discrete PINN framework based on graph convolutional network (GCN) and variational structure of PDE to solve forward and inverse partial differential equations (PDEs) in a unified manner. The use of a piecewise polynomial basis can reduce the dimension of search space and facilitate training and convergence. Without the need of tuning penalty parameters in classic PINNs, the proposed method can strictly impose boundary conditions and assimilate sparse data in both forward and inverse settings. The flexibility of GCNs is leveraged for irregular geometries with unstructured meshes. The effectiveness and merit of the proposed method are demonstrated over a variety of forward and inverse computational mechanics problems governed by both linear and nonlinear PDEs.
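The following minimal sketch (assumed names; not the authors' implementation) illustrates two of the ingredients described above for a linear PDE: an energy-form (variational) loss assembled from a stiffness matrix and load vector, and hard Dirichlet boundary enforcement by substitution rather than a penalty term. The graph convolutional network itself is abstracted as `model`.

```python
import torch

# Hypothetical sketch of a variational PDE loss with hard boundary
# enforcement. K (stiffness matrix), f (load vector), boundary_mask and
# the Dirichlet data g_D are assumed to come from the mesh; they are not
# constructed here.

def variational_loss(u, K, f):
    # Energy functional 0.5 * u^T K u - f^T u, whose minimiser solves K u = f.
    return 0.5 * u @ (K @ u) - f @ u

def hard_bc(u_pred, boundary_mask, g_D):
    # Replace predictions on boundary nodes with the prescribed Dirichlet
    # data, so the boundary condition holds exactly instead of being penalised.
    return torch.where(boundary_mask, g_D, u_pred)

# One (schematic) training step:
#   u = hard_bc(model(node_features, edge_index), boundary_mask, g_D)
#   loss = variational_loss(u, K, f)
#   loss.backward(); optimizer.step()
```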
We study image inverse problems with a normalizing flow prior. Our formulation views the solution as the maximum a posteriori estimate of the image conditioned on the measurements. This formulation allows us to use noise models with arbitrary dependencies as well as non-linear forward operators. We empirically validate the efficacy of our method on various inverse problems, including compressed sensing with quantized measurements and denoising with highly structured noise patterns. We also present initial theoretical recovery guarantees for solving inverse problems with a flow prior.
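A compact sketch of the maximum a posteriori formulation with a flow prior is given below. It is hypothetical (the flow, forward operator and noise level are placeholders), and it uses a common simplification: optimising in the latent space with a Gaussian penalty and a Gaussian data term, whereas the paper allows noise models with arbitrary dependencies.

```python
import torch

# Hypothetical MAP recovery with a flow prior: optimise the latent z of an
# (assumed pretrained, invertible) flow `flow_inverse: z -> x`, trading off
# measurement consistency against a standard-normal prior on z.
# `forward_op` is the (possibly non-linear) forward operator.

def map_with_flow_prior(y, forward_op, flow_inverse, z_dim,
                        sigma=0.1, iters=500, lr=1e-2):
    z = torch.zeros(z_dim, requires_grad=True)
    opt = torch.optim.Adam([z], lr=lr)
    for _ in range(iters):
        opt.zero_grad()
        x = flow_inverse(z)                               # candidate image
        data_term = ((forward_op(x) - y) ** 2).sum() / (2 * sigma ** 2)
        prior_term = 0.5 * (z ** 2).sum()                 # Gaussian prior on z
        loss = data_term + prior_term
        loss.backward()
        opt.step()
    return flow_inverse(z).detach()
```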
We study four problems, namely Campbell's source coding problem, Arikan's guessing problem, Huieihel et al.'s memoryless guessing problem, and Bunte and Lapidoth's task partitioning problem. We observe a close relationship among these problems. In all these problems, the objective is to minimize moments of some functions of random variables, and Rényi entropy and Sundaresan's divergence arise as optimal solutions. This motivates us to establish a connection among these four problems. In this paper, we study a more general problem and show that Rényi and Shannon entropies arise as its solution. We show that the problems on source coding, guessing and task partitioning are particular instances of this general optimization problem, and derive the lower bounds using this framework. We also refine some known results and present new results for the mismatched version of these problems using a unified approach. We strongly feel that this generalization would, in addition to helping in understanding the similarities and distinctiveness of these problems, also help to solve any new problem that falls within this framework.
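For reference (a standard definition, not specific to this paper), the Rényi entropy of order $\alpha$ that arises as the optimal value in these moment-minimisation problems is
$$H_\alpha(P)=\frac{1}{1-\alpha}\log\Big(\sum_{x}P(x)^{\alpha}\Big),\qquad \alpha>0,\ \alpha\neq 1,$$
which recovers the Shannon entropy in the limit $\alpha\to 1$.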
Xiaodong Liu, Shixu Meng (2021)
We consider the inverse source problems with multi-frequency sparse near field measurements. In contrast to the existing near field operator based on the integral over the space variable, a multi-frequency near field operator is introduced based on the integral over the frequency variable. A factorization of this multi-frequency near field operator is further given and analysed. Motivated by such a factorization, we introduce a multi-frequency sampling method to reconstruct the source support. Its theoretical foundation is then derived from the properties of the factorized operators and a properly chosen point spread function. Numerical examples are provided to illustrate the multi-frequency sampling method with sparse near field measurements. Finally we briefly discuss how to extend the near field case to the far field case.
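To convey the flavour of a sampling-method indicator built from sparse multi-frequency near-field data, the sketch below uses a generic direct-sampling-style functional with hypothetical names; it is not necessarily the exact indicator or point spread function of the paper. The data are correlated over the frequency variable with the free-space fundamental solution centred at each candidate point, and peaks of the indicator flag the source support.

```python
import numpy as np
from scipy.special import hankel1

# Generic direct-sampling-style indicator from sparse multi-frequency
# near-field data (hypothetical names). data[i, j] is the field measured at
# receivers[i] for wavenumber wavenumbers[j].

def fundamental_solution_2d(k, x, z):
    # 2-D free-space fundamental solution of the Helmholtz equation:
    # (i/4) * H_0^{(1)}(k |x - z|).
    return 0.25j * hankel1(0, k * np.linalg.norm(x - z))

def sampling_indicator(data, receivers, wavenumbers, z):
    # Correlate the measurements over the frequency variable with the
    # fundamental solution centred at the sampling point z.
    val = 0.0
    for i, x in enumerate(receivers):
        for j, k in enumerate(wavenumbers):
            val += data[i, j] * np.conj(fundamental_solution_2d(k, x, z))
    return abs(val)
```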