
Sparse polynomial interpolation: sparse recovery, super resolution, or Prony?

Submitted by Jean Bernard Lasserre
Publication date: 2017
Language: English
Author: Cedric Josz





We show that the sparse polynomial interpolation problem reduces to a discrete super-resolution problem on the $n$-dimensional torus. Therefore the semidefinite programming approach initiated by Candès & Fernandez-Granda \cite{candes_towards_2014} in the univariate case can be applied. We extend their result to the multivariate case, i.e., we show that exact recovery is guaranteed provided that a geometric spacing condition on the supports holds and the number of evaluations is sufficiently large (but not too large). It also turns out that the sparse-recovery LP formulation of $\ell_1$-norm minimization is guaranteed to provide exact recovery \emph{provided that} the evaluations are made in a certain manner, even though the Restricted Isometry Property for exact recovery is not satisfied. (A naive sparse-recovery LP approach does not offer such a guarantee.) Finally we also describe the algebraic Prony method for sparse interpolation, which also recovers the exact decomposition but from fewer point evaluations and with no geometric spacing condition. We provide two sets of numerical experiments: one in which the super-resolution technique and Prony's method seem to cope equally well with noise, and another in which the super-resolution technique seems to cope with noise better than Prony's method, at the cost of an extra computational burden (i.e., solving a semidefinite program).
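
To make the Prony route concrete, here is a minimal univariate sketch in Python (the paper works in the multivariate setting on the $n$-dimensional torus). The node `omega`, the toy polynomial, and the function name are illustrative choices for this sketch, not taken from the paper: the values $p(\omega^k)$ form a sum of exponentials in $k$, so a Hankel system yields the term-locator polynomial, its roots encode the exponents, and a Vandermonde solve recovers the coefficients.

```python
import numpy as np

def prony_sparse_interp(evals, s, omega):
    """Recover the s terms of an unknown sparse polynomial p from the
    evaluations p(omega^0), ..., p(omega^(2s-1))."""
    m = np.asarray(evals, dtype=complex)          # m[k] = p(omega^k)
    # Hankel solve for the locator z^s + q[s-1] z^(s-1) + ... + q[0]
    H = np.array([[m[i + j] for j in range(s)] for i in range(s)])
    q = np.linalg.solve(H, -m[s:2 * s])
    roots = np.roots(np.concatenate(([1.0], q[::-1])))  # roots = omega^e_j
    exps = np.rint((np.log(roots) / np.log(omega)).real).astype(int)
    # Vandermonde solve for the coefficients: sum_j c_j roots[j]^k = m[k]
    V = np.vander(roots, s, increasing=True).T
    coeffs = np.linalg.solve(V, m[:s]).real
    return sorted(zip(exps, coeffs))

# Toy example: p(x) = 3 x^2 + 5 x^7 has s = 2 terms.
omega = 1.1                                       # node with distinct powers
evals = [3 * omega ** (2 * k) + 5 * omega ** (7 * k) for k in range(4)]
print(prony_sparse_interp(evals, 2, omega))       # ~ [(2, 3.0), (7, 5.0)]
```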




Read also

A sparse regression approach for the computation of high-dimensional optimal feedback laws arising in deterministic nonlinear control is proposed. The approach exploits the control-theoretic link between Hamilton-Jacobi-Bellman PDEs characterizing the value function of the optimal control problem, and first-order optimality conditions via Pontryagin's Maximum Principle. The latter is used as a representation formula to recover the value function and its gradient at arbitrary points in the space-time domain through the solution of a two-point boundary value problem. After generating a dataset consisting of different state-value pairs, a hyperbolic cross polynomial model for the value function is fitted using a LASSO regression. An extended set of low- and high-dimensional numerical tests in nonlinear optimal control reveals that enriching the dataset with gradient information reduces the number of training samples, and that the sparse polynomial regression consistently yields a feedback law of lower complexity.
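As a rough illustration of the fitting step described above, the sketch below does a LASSO fit of a polynomial model to sampled state-value pairs. It is only a toy: a full tensor-product basis stands in for the hyperbolic cross basis, and the two-dimensional "value function", sample size, and regularization weight are invented for the example.

```python
import numpy as np
from sklearn.linear_model import Lasso
from sklearn.preprocessing import PolynomialFeatures

# Toy stand-in for the dataset of state-value pairs described above.
rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(200, 2))          # sampled states in R^2
v = X[:, 0] ** 2 + 0.5 * X[:, 0] * X[:, 1]     # hypothetical value function

# Full degree-4 basis as a simple proxy for a hyperbolic cross basis.
basis = PolynomialFeatures(degree=4, include_bias=True)
Phi = basis.fit_transform(X)

# LASSO selects a sparse subset of the candidate monomials.
model = Lasso(alpha=1e-3, fit_intercept=False, max_iter=50_000).fit(Phi, v)
names = basis.get_feature_names_out()
for i in np.flatnonzero(np.abs(model.coef_) > 1e-6):
    print(names[i], round(float(model.coef_[i]), 3))   # active monomials
```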
Given a straight-line program whose output is a polynomial function of the inputs, we present a new algorithm to compute a concise representation of that unknown function. Our algorithm can handle any case where the unknown function is a multivariate polynomial, with coefficients in an arbitrary finite field, and with a reasonable number of nonzero terms but possibly very large degree. It is competitive with previously known sparse interpolation algorithms that work over an arbitrary finite field, and provides an improvement when there are a large number of variables.
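For flavor only, here is a toy Ben-Or-Tiwari-style recovery over a small prime field; the paper's algorithm handles arbitrary finite fields and very large degrees, none of which this sketch attempts. The prime $p = 101$, the generator $g = 2$, and the hidden two-term polynomial are all invented for the example.

```python
# Parameters of the toy instance (all invented for this sketch).
p, g, t = 101, 2, 2          # small prime, generator of GF(p)^*, term count
f = lambda x: (3 * pow(x, 5, p) + 7 * pow(x, 12, p)) % p  # hidden target

def solve_mod(A, b):
    """Solve A x = b over GF(p) by Gaussian elimination (tiny systems)."""
    M = [row[:] + [v % p] for row, v in zip(A, b)]
    n = len(M)
    for c in range(n):
        piv = next(i for i in range(c, n) if M[i][c] % p)
        M[c], M[piv] = M[piv], M[c]
        inv = pow(M[c][c], -1, p)
        M[c] = [x * inv % p for x in M[c]]
        for i in range(n):
            if i != c and M[i][c] % p:
                fac = M[i][c]
                M[i] = [(a - fac * d) % p for a, d in zip(M[i], M[c])]
    return [row[-1] for row in M]

m = [f(pow(g, k, p)) for k in range(2 * t)]   # evaluations at powers of g
# Hankel solve: locator z^t + q[t-1] z^(t-1) + ... + q[0] annihilates m.
q = solve_mod([[m[i + j] for j in range(t)] for i in range(t)],
              [-m[t + i] for i in range(t)])
roots = [z for z in range(1, p)               # brute-force root search
         if (pow(z, t, p) + sum(q[i] * pow(z, i, p)
                                for i in range(t))) % p == 0]
exps = [next(e for e in range(p - 1) if pow(g, e, p) == z) for z in roots]
coef = solve_mod([[pow(z, k, p) for z in roots] for k in range(t)], m[:t])
print(sorted(zip(exps, coef)))                # -> [(5, 3), (12, 7)]
```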
In a variety of fields, in particular those involving imaging and optics, we often measure signals whose phase is missing or has been irremediably distorted. Phase retrieval attempts to recover the phase information of a signal from the magnitude of its Fourier transform to enable the reconstruction of the original signal. Solving the phase retrieval problem is equivalent to recovering a signal from its auto-correlation function. In this paper, we assume the original signal to be sparse; this is a natural assumption in many applications, such as X-ray crystallography, speckle imaging and blind channel estimation. We propose an algorithm that resolves the phase retrieval problem in three stages: i) we leverage the finite rate of innovation sampling theory to super-resolve the auto-correlation function from a limited number of samples, ii) we design a greedy algorithm that identifies the locations of a sparse solution given the super-resolved auto-correlation function, iii) we recover the amplitudes of the atoms given their locations and the measured auto-correlation function. Unlike traditional approaches that recover a discrete approximation of the underlying signal, our algorithm estimates the signal on a continuous domain, which makes it the first of its kind. Along with the algorithm, we derive its performance bound with a theoretical analysis and propose a set of enhancements to improve its computational complexity and noise resilience. Finally, we demonstrate the benefits of the proposed method via a comparison against Charge Flipping, a notable algorithm in crystallography.
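The stated equivalence between Fourier magnitudes and the autocorrelation is easy to check numerically. The sketch below, with an invented sparse test signal, verifies the Wiener-Khinchin identity in the circular setting: the DFT of the autocorrelation equals the squared Fourier magnitude.

```python
import numpy as np

# Numerical check of the equivalence stated above: knowing the Fourier
# magnitude |X| is the same as knowing the (circular) autocorrelation.
rng = np.random.default_rng(3)
x = np.zeros(64)
x[[4, 19, 42]] = rng.normal(size=3)          # sparse test signal
# circular autocorrelation a[k] = sum_n x[n] * x[(n + k) mod N]
a = np.array([np.dot(x, np.roll(x, -k)) for k in range(64)])
# Wiener-Khinchin: FFT(a) equals the squared Fourier magnitude of x.
print(np.allclose(np.fft.fft(a).real, np.abs(np.fft.fft(x)) ** 2))  # True
```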
In this paper, we propose a convex iterative FP thresholding algorithm to solve the problem $(FP^{\lambda}_{a})$. Two schemes of the convex iterative FP thresholding algorithm are generated: Scheme 1 and Scheme 2. A global convergence theorem is proved for Scheme 1. Under an adaptive rule, Scheme 2 is adaptive in the choice of both the regularization parameter $\lambda$ and the parameter $a$. These are the advantages of our two schemes over the two schemes of the iterative FP thresholding algorithm we proposed previously. Finally, we provide a series of numerical simulations to test the performance of Scheme 2; the simulation results show that it performs very well in recovering sparse signals.
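The abstract does not spell out the FP thresholding operator, so no attempt is made to reproduce it here; instead, the sketch below shows classical ISTA (iterative soft thresholding for $\ell_1$-regularized least squares), the baseline scheme that iterative thresholding algorithms of this kind generalize. The matrix sizes, sparsity level, and parameter values are invented for the example.

```python
import numpy as np

def ista(A, y, lam, iters=500):
    """Classical ISTA for min_x 0.5*||Ax - y||^2 + lam*||x||_1
    (a baseline, not the FP thresholding operator of the paper)."""
    step = 1.0 / np.linalg.norm(A, 2) ** 2        # 1/L, L = ||A||_2^2
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        g = x - step * A.T @ (A @ x - y)          # gradient step
        x = np.sign(g) * np.maximum(np.abs(g) - step * lam, 0)  # shrinkage
    return x

# Toy sparse recovery: a 3-sparse signal from 40 random measurements.
rng = np.random.default_rng(1)
A = rng.normal(size=(40, 100)) / np.sqrt(40)
x_true = np.zeros(100)
x_true[[5, 37, 80]] = [1.0, -2.0, 1.5]
x_hat = ista(A, A @ x_true, lam=0.02)
print(np.flatnonzero(np.abs(x_hat) > 0.1))        # typically [5, 37, 80]
```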
We recover jump-sparse and sparse signals from blurred incomplete data corrupted by (possibly non-Gaussian) noise using inverse Potts energy functionals. We obtain analytical results (existence of minimizers, complexity) on inverse Potts functionals and provide relations to sparsity problems. We then propose a new optimization method for these functionals which is based on dynamic programming and the alternating direction method of multipliers (ADMM). A series of experiments shows that the proposed method yields very satisfactory jump-sparse and sparse reconstructions, respectively. We highlight the capability of the method by comparing it with classical and recent approaches such as TV minimization (jump-sparse signals), orthogonal matching pursuit, iterative hard thresholding, and iteratively reweighted $\ell^1$ minimization (sparse signals).
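The dynamic-programming ingredient mentioned above can be illustrated on the simplest instance: the direct (un-blurred, complete-data) 1D Potts problem, which the classical $O(n^2)$ dynamic program solves exactly; the paper's method couples such a solver with ADMM for the blurred, incomplete case. The test signal and jump penalty below are invented for the sketch.

```python
import numpy as np

def potts_dp(y, gamma):
    """Exact 1D Potts / jump-sparse fit:
    minimize gamma * (#jumps of x) + ||x - y||_2^2 by the classical
    O(n^2) dynamic program over segment partitions."""
    n = len(y)
    B = np.empty(n + 1)
    B[0] = -gamma                       # so k segments cost (k-1) jumps
    jump = np.zeros(n + 1, dtype=int)   # best start of the last segment
    for i in range(1, n + 1):
        best, arg, s, ss = np.inf, i, 0.0, 0.0
        for j in range(i, 0, -1):       # candidate last segment [j..i]
            s += y[j - 1]
            ss += y[j - 1] ** 2
            eps = ss - s * s / (i - j + 1)      # within-segment SSE
            cand = B[j - 1] + gamma + eps
            if cand < best:
                best, arg = cand, j
        B[i], jump[i] = best, arg
    x, i = np.empty(n), n               # backtrack; fill segment means
    while i > 0:
        j = jump[i]
        x[j - 1:i] = y[j - 1:i].mean()
        i = j - 1
    return x

y = np.concatenate([np.full(30, 1.0), np.full(30, 3.0)])
y += 0.2 * np.random.default_rng(2).normal(size=60)
print(np.round(potts_dp(y, gamma=2.0), 2))   # two near-constant plateaus
```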