Regularization and Inverse Problems

Published by: Belen Barreiro
Publication date: 2001
Research field: Physics
Paper language: English

An overview is given of Bayesian inversion and regularization procedures. In particular, the conceptual basis of the maximum entropy method (MEM) is discussed, and extensions to positive/negative and complex data are highlighted. Other deconvolution methods are also discussed within the Bayesian context, focusing mainly on the comparison of Wiener filtering, Massive Inference and the Pixon method, using examples from both astronomical and non-astronomical applications.
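For reference, the Wiener filter compared above has a simple closed form in the Fourier domain. A minimal Python sketch, assuming a 1-D signal and flat (white) signal and noise power spectra supplied by the caller; this is textbook Wiener filtering, not the paper's own code:

    import numpy as np

    def wiener_deconvolve(data, psf, signal_power, noise_power):
        """Fourier-domain Wiener deconvolution of a 1-D signal blurred by psf.
        Flat signal/noise spectra are a simplifying assumption; in practice
        they would be estimated from the data."""
        H = np.fft.fft(psf, n=len(data))   # transfer function of the blur
        D = np.fft.fft(data)
        # Wiener filter conj(H)*S / (|H|^2*S + N): shrinks Fourier modes
        # where noise dominates instead of dividing blindly by H.
        W = np.conj(H) * signal_power / (np.abs(H) ** 2 * signal_power + noise_power)
        return np.real(np.fft.ifft(W * D))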

Read also

In this paper, we consider the variational regularization of manifold-valued data in the inverse problems setting. In particular, we consider TV and TGV regularization for manifold-valued data with indirect measurement operators. We provide results on the well-posedness and present algorithms for a numerical realization of these models in the manifold setup. Further, we provide experimental results for synthetic and real data to show the potential of the proposed schemes for applications.
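The manifold-valued models above need specialised machinery; as a scalar-valued analogue only, here is a minimal sketch of TV-regularised inversion by subgradient descent. The operator A, step size, and iteration count are illustrative assumptions, not the paper's algorithm:

    import numpy as np

    def tv_subgradient_inverse(A, y, lam=0.1, step=1e-3, iters=1000):
        """Minimise ||A x - y||^2 + lam * TV(x) for 1-D scalar-valued x
        by plain subgradient descent (a stand-in for the manifold case)."""
        x = np.zeros(A.shape[1])
        for _ in range(iters):
            grad_data = 2.0 * A.T @ (A @ x - y)
            s = np.sign(np.diff(x))
            # subgradient of TV(x) = sum_i |x[i+1] - x[i]|
            tv_sub = np.concatenate(([0.0], s)) - np.concatenate((s, [0.0]))
            x -= step * (grad_data + lam * tv_sub)
        return x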
There are various inverse problems -- including reconstruction problems arising in medical imaging -- where one is often aware of the forward operator that maps variables of interest to the observations. It is therefore natural to ask whether such knowledge of the forward operator can be exploited in the deep learning approaches increasingly used to solve inverse problems. In this paper, we provide one such way via an analysis of the generalisation error of deep learning methods applicable to inverse problems. In particular, by building on the algorithmic robustness framework, we offer a generalisation error bound that encapsulates key ingredients associated with the learning problem, such as the complexity of the data space, the size of the training set, the Jacobian of the deep neural network, and the Jacobian of the composition of the forward operator with the neural network. We then propose a plug-and-play regulariser that leverages the knowledge of the forward map to improve the generalisation of the network. We also propose a new method allowing us to tightly upper bound the Lipschitz constants of the relevant functions that is much more computationally efficient than existing ones. We demonstrate the efficacy of our model-aware regularised deep learning algorithms against other state-of-the-art approaches on inverse problems involving various sub-sampling operators, such as those used in the classical compressed sensing setup and in accelerated Magnetic Resonance Imaging (MRI).
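The tight Lipschitz bound is the paper's own contribution; for orientation, the classical (looser) bound it improves on is the product of per-layer spectral norms for a feed-forward network with 1-Lipschitz activations. A minimal sketch with hypothetical layer shapes:

    import numpy as np

    def lipschitz_product_bound(weight_matrices):
        """Classical upper bound on the Lipschitz constant of a feed-forward
        net with 1-Lipschitz activations (e.g. ReLU): the product of the
        spectral norms of its weight matrices. Often very loose in depth."""
        return float(np.prod([np.linalg.norm(W, 2) for W in weight_matrices]))

    # Example: a hypothetical three-layer network mapping 128 -> 64 -> 32 -> 1
    rng = np.random.default_rng(0)
    bound = lipschitz_product_bound([rng.normal(size=(64, 128)),
                                     rng.normal(size=(32, 64)),
                                     rng.normal(size=(1, 32))])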
The characteristic feature of inverse problems is their instability with respect to data perturbations. In order to stabilize the inversion process, regularization methods have to be developed and applied. In this work we introduce and analyze the concept of the filtered diagonal frame decomposition, which extends the standard filtered singular value decomposition to the frame case. Frames as generalized singular systems allow better adaptation to a given class of potential solutions. In this paper, we show that the filtered diagonal frame decomposition yields a convergent regularization method. Moreover, we derive convergence rates under source-type conditions and prove order optimality under the assumption that the considered frame is a Riesz basis.
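In the non-redundant case the construction reduces to the standard filtered SVD, for example with a Tikhonov filter. A minimal sketch, where the filter choice and the parameter alpha are illustrative:

    import numpy as np

    def filtered_svd_solve(A, y, alpha=1e-2):
        """Filtered SVD reconstruction with the Tikhonov filter
        f(s) = s / (s^2 + alpha), which damps the small singular
        values that make naive inversion unstable."""
        U, s, Vt = np.linalg.svd(A, full_matrices=False)
        filt = s / (s ** 2 + alpha)
        return Vt.T @ (filt * (U.T @ y))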
We analyze sparse frame based regularization of inverse problems by means of a diagonal frame decomposition (DFD) for the forward operator, which generalizes the SVD. The DFD allows one to define a non-iterative (direct) operator-adapted frame thresholding approach which we show to provide a convergent regularization method with linear convergence rates. These results will be compared to the well-known analysis and synthesis variants of sparse $\ell^1$-regularization, which are usually implemented through iterative schemes. If the frame is a basis (non-redundant case), the thr…
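In that non-redundant (basis) case, diagonal frame thresholding reduces to coefficient-wise soft-thresholding in the SVD. A minimal sketch, with tau an illustrative regularisation weight:

    import numpy as np

    def soft_threshold(z, tau):
        return np.sign(z) * np.maximum(np.abs(z) - tau, 0.0)

    def svd_soft_threshold_solve(A, y, tau=0.1):
        """Direct (non-iterative) thresholding of SVD coefficients: the
        closed-form minimiser of ||A x - y||^2 / 2 + tau * ||V^T x||_1
        when A = U diag(s) V^T (basis case)."""
        U, s, Vt = np.linalg.svd(A, full_matrices=False)
        coeffs = soft_threshold((U.T @ y) / s, tau / s ** 2)
        return Vt.T @ coeffs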
In this work, we describe a new approach that uses deep neural networks (DNN) to obtain regularization parameters for solving inverse problems. We consider a supervised learning approach, where a network is trained to approximate the mapping from observation data to regularization parameters. Once the network is trained, regularization parameters for newly obtained data can be computed by efficient forward propagation of the DNN. We show that a wide variety of regularization functionals, forward models, and noise models may be considered. The network-obtained regularization parameters can be computed more efficiently and may even lead to more accurate solutions compared to existing regularization parameter selection methods. We emphasize that the key advantage of using DNNs for learning regularization parameters, compared to previous works on learning via optimal experimental design or empirical Bayes risk minimization, is greater generalizability. That is, rather than computing one set of parameters that is optimal with respect to one particular design objective, DNN-computed regularization parameters are tailored to the specific features or properties of the newly observed data. Thus, our approach may better handle cases where the observation is not a close representation of the training set. Furthermore, we avoid the need for expensive and challenging bilevel optimization methods as utilized in other existing training approaches. Numerical results demonstrate the potential of using DNNs to learn regularization parameters.
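A minimal PyTorch sketch of the supervised setup described above, mapping observations to a positive Tikhonov-style parameter. The dimensions, architecture, and synthetic training pairs are illustrative assumptions, not the paper's configuration:

    import torch
    import torch.nn as nn

    torch.manual_seed(0)
    y_train = torch.randn(256, 64)          # stand-in observation data
    alpha_train = torch.rand(256, 1) * 0.1  # stand-in "good" parameters

    # Small network mapping an observation to a regularization parameter;
    # Softplus keeps the output positive.
    net = nn.Sequential(nn.Linear(64, 32), nn.ReLU(),
                        nn.Linear(32, 1), nn.Softplus())
    opt = torch.optim.Adam(net.parameters(), lr=1e-3)

    for _ in range(200):
        loss = nn.functional.mse_loss(net(y_train), alpha_train)
        opt.zero_grad()
        loss.backward()
        opt.step()

    # At test time, the parameter for new data is one cheap forward pass.
    alpha_new = net(torch.randn(1, 64))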