
Denoising of structured random processes

Posted by Shirin Jalali
Publication date: 2019
Research field: Information engineering
Paper language: English





Denoising a stationary process $(X_i)_{i\in\mathbb{Z}}$ corrupted by additive white Gaussian noise is a classic and fundamental problem in information theory and statistical signal processing. Despite considerable progress in designing efficient denoising algorithms, for general analog sources, theoretically founded, computationally efficient methods are yet to be found. For instance, in denoising $X^n$ corrupted by noise $Z^n$ as $Y^n=X^n+Z^n$, given the full distribution of $X^n$, a minimum mean square error (MMSE) denoiser needs to compute $\mathrm{E}[X^n|Y^n]$. However, for general sources, computing $\mathrm{E}[X^n|Y^n]$ is computationally very challenging, if not infeasible. In this paper, starting from a Bayesian setup in which the source distribution is fully known, a novel denoising method, the quantized maximum a posteriori (Q-MAP) denoiser, is proposed, and its asymptotic performance in the high signal-to-noise ratio regime is analyzed. Both for memoryless sources and for structured first-order Markov sources, it is shown that, asymptotically, as $\sigma$ converges to zero, $\frac{1}{\sigma^2}\mathrm{E}[(X_i-\hat{X}^{\rm Q-MAP}_i)^2]$ achieved by the Q-MAP denoiser converges to the information dimension of the source. For the studied memoryless sources, this limit is known to be optimal. A key advantage of the Q-MAP denoiser is that, unlike an MMSE denoiser, it highlights the key properties of the source distribution that are to be used in its denoising. This property dramatically reduces the computational complexity of approximating the solution of the Q-MAP denoiser. Additionally, it naturally leads to a learning-based denoiser. Using the ImageNet database for training, initial simulation results exploring the performance of such a learning-based denoiser in image denoising are presented.
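The core Q-MAP idea, searching over a quantized reconstruction grid for the point maximizing prior times Gaussian likelihood, can be sketched for a memoryless source. The grid, prior, and noise level below are illustrative assumptions for demonstration, not the construction analyzed in the paper.

```python
import numpy as np

def qmap_denoise(y, grid, prior, sigma):
    """For each noisy sample, pick the grid point maximizing
    log prior + Gaussian log-likelihood (per-symbol quantized MAP)."""
    loglik = -(y[:, None] - grid[None, :]) ** 2 / (2 * sigma ** 2)
    scores = np.log(prior)[None, :] + loglik   # shape (len(y), len(grid))
    return grid[np.argmax(scores, axis=1)]

rng = np.random.default_rng(0)
grid = np.array([0.0, 0.25, 0.5, 0.75, 1.0])   # quantized reconstruction levels
prior = np.array([0.6, 0.1, 0.1, 0.1, 0.1])    # spike at 0 models structure
x = rng.choice(grid, size=1000, p=prior)
sigma = 0.05                                    # high-SNR regime
y = x + sigma * rng.standard_normal(1000)
xhat = qmap_denoise(y, grid, prior, sigma)
print(np.mean((x - xhat) ** 2))  # small MSE at high SNR
```

At high SNR the grid points are well separated relative to the noise, so decision errors are rare, which mirrors the asymptotic regime studied in the abstract.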


Read also

We draw a random subset of $k$ rows from a frame with $n$ rows (vectors) and $m$ columns (dimensions), where $k$ and $m$ are proportional to $n$. For a variety of important deterministic equiangular tight frames (ETFs) and tight non-ETF frames, we consider the distribution of singular values of the $k$-subset matrix. We observe that for large $n$ they can be precisely described by a known probability distribution -- Wachter's MANOVA spectral distribution, a phenomenon that was previously known only for two types of random frames. In terms of convergence to this limit, the $k$-subset matrix from all these frames is shown to be empirically indistinguishable from the classical MANOVA (Jacobi) random matrix ensemble. Thus, empirically, the MANOVA ensemble offers a universal description of the spectra of randomly selected $k$-subframes, even those taken from deterministic frames. The same universality phenomenon is shown to hold for notable random frames as well. This description enables exact calculations of properties of solutions for systems of linear equations based on a random choice of $k$ frame vectors out of $n$ possible vectors, and has a variety of implications for erasure coding, compressed sensing, and sparse recovery. When the aspect ratio $m/n$ is small, the MANOVA spectrum tends to the well-known Marchenko-Pastur distribution of the singular values of a Gaussian matrix, in agreement with previous work on highly redundant frames. Our results are empirical, but they are exhaustive, precise and fully reproducible.
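The experiment described above, drawing $k$ rows from an $n\times m$ frame and computing the singular-value spectrum, is easy to reproduce in miniature. The random orthonormal-column frame below is a stand-in; the deterministic ETFs studied in the paper would be substituted here.

```python
import numpy as np

rng = np.random.default_rng(1)
n, m, k = 400, 100, 200

# An n x m frame with orthonormal columns (stand-in for a deterministic ETF).
F = np.linalg.qr(rng.standard_normal((n, m)))[0]

# Draw a random k-subset of rows and compute its singular values.
rows = rng.choice(n, size=k, replace=False)
S = np.linalg.svd(F[rows], compute_uv=False)
print(S.min(), S.max())  # the empirical spectrum one would compare to MANOVA
```

Since the subframe's rows come from a matrix with orthonormal columns, all singular values lie in $[0,1]$; a histogram of `S` over many draws is what the paper matches against Wachter's MANOVA law.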
While two hidden Markov process (HMP) resp. quantum random walk (QRW) parametrizations can differ from one another, the stochastic processes arising from them can be equivalent. Here a polynomial-time algorithm is presented which can determine equivalence of two HMP parametrizations $\mathcal{M}_1,\mathcal{M}_2$ resp. two QRW parametrizations $\mathcal{Q}_1,\mathcal{Q}_2$ in time $O(|S|\max(N_1,N_2)^{4})$, where $N_1,N_2$ are the number of hidden states in $\mathcal{M}_1,\mathcal{M}_2$ resp. the dimension of the state spaces associated with $\mathcal{Q}_1,\mathcal{Q}_2$, and $S$ is the set of output symbols. Previously available algorithms for testing equivalence of HMPs were exponential in the number of hidden states. In the case of QRWs, algorithms for testing equivalence had not yet been presented. The core subroutines of this algorithm can also be used to efficiently test hidden Markov processes and quantum random walks for ergodicity.
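What equivalence of two HMP parametrizations means can be illustrated by brute force: two parametrizations are equivalent iff they assign the same probability to every output sequence. The check below is exponential in sequence length and only builds intuition; the paper's contribution is doing this in polynomial time.

```python
import numpy as np
from itertools import product

def seq_prob(pi, A, B, seq):
    """Forward-algorithm probability of an output sequence.
    pi: initial distribution, A: transition matrix, B[s, o]: emission probs."""
    alpha = pi * B[:, seq[0]]
    for o in seq[1:]:
        alpha = (alpha @ A) * B[:, o]
    return alpha.sum()

pi = np.array([0.5, 0.5])
A = np.array([[0.9, 0.1], [0.2, 0.8]])
B = np.array([[0.7, 0.3], [0.4, 0.6]])

# Relabeling hidden states yields a different parametrization
# of the same output process.
perm = [1, 0]
pi2, A2, B2 = pi[perm], A[np.ix_(perm, perm)], B[perm]

same = all(
    np.isclose(seq_prob(pi, A, B, s), seq_prob(pi2, A2, B2, s))
    for L in range(1, 5) for s in product(range(2), repeat=L)
)
print(same)  # True: the two parametrizations are equivalent
```

The polynomial-time algorithm avoids enumerating sequences by working with a basis of the span of these forward vectors instead.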
Zhipeng Xue, Junjie Ma (2017)
Turbo compressed sensing (Turbo-CS) is an efficient iterative algorithm for sparse signal recovery with partial orthogonal sensing matrices. In this paper, we extend the Turbo-CS algorithm to solve compressed sensing problems involving more general signal structure, including compressive image recovery and low-rank matrix recovery. A main difficulty for such an extension is that the original Turbo-CS algorithm requires prior knowledge of the signal distribution, which is usually unavailable in practice. To overcome this difficulty, we propose to redesign the Turbo-CS algorithm by employing a generic denoiser that does not depend on the prior distribution, hence the name denoising-based Turbo-CS (D-Turbo-CS). We then derive the extrinsic information for a generic denoiser by following the Turbo-CS principle. Based on that, we optimize the parametric extrinsic denoisers to minimize the output mean-square error (MSE). Explicit expressions are derived for the extrinsic SURE-LET denoiser used in compressive image denoising and also for the singular value thresholding (SVT) denoiser used in low-rank matrix denoising. We find that the dynamics of D-Turbo-CS can be well described by a scalar recursion called MSE evolution, similar to the case for Turbo-CS. Numerical results demonstrate that D-Turbo-CS considerably outperforms the counterpart algorithms in both reconstruction quality and running time.
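The SVT denoiser mentioned above, in its basic (non-extrinsic) form, shrinks the singular values of a noisy matrix toward zero. The threshold `tau` below is a free illustrative parameter, not the optimized value derived in the paper.

```python
import numpy as np

def svt(Y, tau):
    """Singular value thresholding: soft-threshold the singular values of Y."""
    U, s, Vt = np.linalg.svd(Y, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

rng = np.random.default_rng(2)
X = rng.standard_normal((50, 3)) @ rng.standard_normal((3, 50))  # rank-3 signal
Y = X + 0.1 * rng.standard_normal((50, 50))                       # noisy observation
Xhat = svt(Y, tau=1.5)
print(np.linalg.norm(X - Xhat) < np.linalg.norm(X - Y))  # thresholding reduces error
```

Because the noise singular values concentrate below the threshold while the rank-3 signal's singular values are far above it, thresholding suppresses the noise subspace at the cost of a small shrinkage bias.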
In this study, we generalize the problem of sampling a scalar Gauss-Markov process, namely the Ornstein-Uhlenbeck (OU) process, where the samples are sent to a remote estimator that makes a causal estimate of the observed real-time signal. In recent years, this problem has been solved for stable OU processes. We present solutions for the optimal sampling policy, which achieves a smaller estimation error, for both the stable and unstable cases of the OU process, along with the special case in which the OU process reduces to a Wiener process. The obtained optimal sampling policy is a threshold policy; however, the thresholds differ across the three cases. We then consider additive noise on the samples when the sampling decision is made beforehand, so the estimator uses noisy samples to estimate the current signal value. The mean-square error (MSE) changes due to the noise; we solve for the additional term in the MSE, which provides a performance upper bound and leaves room for further investigation into optimal sampling strategies that minimize the estimation error when the observed samples are noisy. Numerical results show the performance degradation caused by the additive noise.
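The threshold-policy structure described above can be simulated for a stable OU process $dX_t = -\theta X_t\,dt + \sigma\,dW_t$: a sample is sent whenever the estimation error exceeds a threshold, and between samples the estimator decays the last sample toward the mean. The parameter values here are illustrative assumptions, not the optimal thresholds derived in the paper.

```python
import numpy as np

rng = np.random.default_rng(3)
theta, sigma, dt, T, beta = 1.0, 1.0, 0.01, 20.0, 0.5
n = int(T / dt)
x, last, t_last = 0.0, 0.0, 0.0
sq_err, samples = 0.0, 0

for i in range(n):
    t = i * dt
    # exact OU transition over one time step
    x = x * np.exp(-theta * dt) + sigma * np.sqrt(
        (1 - np.exp(-2 * theta * dt)) / (2 * theta)) * rng.standard_normal()
    # causal estimate between samples: decay the last sample toward the mean
    xhat = last * np.exp(-theta * (t - t_last))
    if abs(x - xhat) > beta:          # threshold policy: sample on large error
        last, t_last, xhat = x, t, x
        samples += 1
    sq_err += (x - xhat) ** 2 * dt

print(samples, sq_err / T)  # number of samples vs. time-averaged MSE
```

By construction, the instantaneous error never exceeds the threshold for long, so the time-averaged MSE stays below $\beta^2$; tuning $\beta$ trades sampling rate against estimation error, which is the trade-off the optimal policy resolves.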
Hassan Saber (2020)
We study a structured variant of the multi-armed bandit problem specified by a set of Bernoulli distributions $\nu=(\nu_{a,b})_{a\in\mathcal{A},b\in\mathcal{B}}$ with means $(\mu_{a,b})_{a\in\mathcal{A},b\in\mathcal{B}}\in[0,1]^{\mathcal{A}\times\mathcal{B}}$ and by a given weight matrix $\omega=(\omega_{b,b'})_{b,b'\in\mathcal{B}}$, where $\mathcal{A}$ is a finite set of arms and $\mathcal{B}$ is a finite set of users. The weight matrix $\omega$ is such that for any two users $b,b'\in\mathcal{B}$, $\max_{a\in\mathcal{A}}|\mu_{a,b}-\mu_{a,b'}|\leq\omega_{b,b'}$. This formulation is flexible enough to capture various situations, from highly structured scenarios ($\omega\in\{0,1\}^{\mathcal{B}\times\mathcal{B}}$) to fully unstructured setups ($\omega\equiv 1$). We consider two scenarios depending on whether the learner chooses only the actions to sample rewards from, or both users and actions. We first derive problem-dependent lower bounds on the regret for this generic graph structure that involve a structure-dependent linear programming problem. Second, we adapt to this setting the Indexed Minimum Empirical Divergence (IMED) algorithm introduced by Honda and Takemura (2015), and introduce the IMED-GS$^\star$ algorithm. Interestingly, IMED-GS$^\star$ does not require computing the solution of the linear programming problem more than about $\log(T)$ times after $T$ steps, while being provably asymptotically optimal. Also, unlike existing bandit strategies designed for other popular structures, IMED-GS$^\star$ does not resort to an explicit forced exploration scheme and only makes use of local counts of empirical events. We finally provide numerical illustrations of our results that confirm the performance of IMED-GS$^\star$.
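The IMED index of Honda and Takemura that IMED-GS$^\star$ builds on is, for each arm $a$, $N_a\,\mathrm{KL}(\hat{\mu}_a, \hat{\mu}_{\max}) + \log N_a$, and the arm with the smallest index is pulled. The sketch below computes it for plain Bernoulli bandits, without the user/graph structure of the paper.

```python
import numpy as np

def kl_bern(p, q, eps=1e-12):
    """KL divergence between Bernoulli(p) and Bernoulli(q)."""
    p, q = np.clip(p, eps, 1 - eps), np.clip(q, eps, 1 - eps)
    return p * np.log(p / q) + (1 - p) * np.log((1 - p) / (1 - q))

def imed_indices(muhat, counts):
    """IMED index per arm: N_a * KL(muhat_a, muhat_max) + log(N_a)."""
    mu_max = muhat.max()
    return counts * kl_bern(muhat, mu_max) + np.log(counts)

muhat = np.array([0.3, 0.5, 0.7])   # empirical means after some pulls
counts = np.array([10, 10, 10])     # pull counts N_a
idx = imed_indices(muhat, counts)
print(idx.argmin())  # 2: the empirically best arm has the smallest index
```

Empirically worse arms accumulate a KL penalty proportional to their pull count, so they are only revisited when their count, hence the $\log N_a$ term of the leader, makes further exploration worthwhile; no explicit forced-exploration schedule is needed.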