The sparsity and compressibility of finite-dimensional signals are of great interest in fields such as compressed sensing. The notion of compressibility has also been extended to infinite sequences of i.i.d. or ergodic random variables, based on the observed error in their nonlinear k-term approximation. In this work, we use the entropy measure to study the compressibility of continuous-domain innovation processes (also known as white noise). Specifically, we define such a measure as the entropy limit of the doubly quantized (in time and amplitude) process. This provides a tool for comparing the compressibility of various innovation processes. It also allows us to identify an analogue of the concept of entropy dimension, originally defined by Rényi for random variables. Particular attention is given to stable and impulsive Poisson innovation processes. Our results identify Poisson innovations as the more compressible of the two, with an entropy measure far below that of stable innovations. While this finding departs from previous results on the compressibility of fat-tailed distributions, our entropy measure still ranks stable innovations according to their tail decay.
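As a toy illustration of the entropy-dimension notion the abstract builds on (in the simpler random-variable setting of Rényi, not the continuous-domain process setting of the paper), the following sketch amplitude-quantizes samples of a Bernoulli-Gaussian mixture and checks that the quantized entropy grows in log2(1/Δ) with a slope close to the mass of the continuous part. All parameter values are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1_000_000
# Bernoulli-Gaussian mixture: exactly 0 w.p. 0.7, else standard normal.
mask = rng.random(n) < 0.3
x = np.where(mask, rng.standard_normal(n), 0.0)

def quantized_entropy(samples, delta):
    """Empirical entropy (bits) of the amplitude-quantized samples."""
    bins = np.floor(samples / delta).astype(np.int64)
    _, counts = np.unique(bins, return_counts=True)
    p = counts / counts.sum()
    return -np.sum(p * np.log2(p))

# The entropy grows roughly linearly in log2(1/delta); the slope estimates
# the Renyi entropy dimension (here ~0.3, the weight of the continuous part).
ks = [5, 6, 7, 8]
H = [quantized_entropy(x, 2.0 ** -k) for k in ks]
slopes = np.diff(H)   # each increment approximates the entropy dimension
print(H, slopes)
```

The same slope would be 1 for a purely continuous distribution and 0 for a purely discrete one, which is what makes the quantized-entropy limit a useful compressibility proxy.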
In this paper, we propose a reconfigurable intelligent surface (RIS) enhanced spectrum sensing system, in which the primary transmitter is equipped with a single antenna, the secondary transmitter is equipped with multiple antennas, and the RIS is employed to improve the detection performance. Without loss of generality, we adopt the maximum eigenvalue detection approach and propose a corresponding analytical framework, based on large-dimensional random matrix theory, to evaluate the detection probability in the asymptotic regime. Moreover, the phase shift matrix of the RIS is designed using only statistical channel state information (CSI), which is shown to be quite effective when the RIS-related channels are Rician or line-of-sight (LoS). With the designed phase shift matrix, the asymptotic distributions of the equivalent channel gains are derived. We then provide theoretical predictions of the number of reflecting elements (REs) required to achieve a detection probability close to 1. Finally, Monte-Carlo simulation results confirm the accuracy of the proposed asymptotic analytical framework and the validity of the predictions on the required number of REs, and show that the proposed RIS-enhanced spectrum sensing system can substantially improve the detection performance.
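The core of maximum eigenvalue detection can be sketched in a few lines: form the sample covariance of the received snapshots and use its largest eigenvalue as the test statistic, which concentrates near the noise level under H0 and inflates when a primary signal is present. Antenna count, snapshot count, and signal amplitude below are illustrative assumptions, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(1)
M, N = 4, 500    # receive antennas, snapshots (illustrative values)

def max_eig_statistic(Y):
    """Largest eigenvalue of the sample covariance (1/N) Y Y^H."""
    R = (Y @ Y.conj().T) / Y.shape[1]
    return np.linalg.eigvalsh(R)[-1]

# Unit-variance circularly symmetric complex Gaussian noise and channel.
noise = (rng.standard_normal((M, N)) + 1j * rng.standard_normal((M, N))) / np.sqrt(2)
h = (rng.standard_normal((M, 1)) + 1j * rng.standard_normal((M, 1))) / np.sqrt(2)
s = (rng.standard_normal((1, N)) + 1j * rng.standard_normal((1, N))) / np.sqrt(2)

t0 = max_eig_statistic(noise)                 # H0: noise only
t1 = max_eig_statistic(2.0 * h @ s + noise)   # H1: primary signal present
print(t0, t1)  # the statistic separates the two hypotheses
```

The asymptotic framework in the abstract replaces the empirical threshold one would set on this statistic with predictions from large-dimensional random matrix theory.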
Due to hardware limitations, the phase shifts of the reflecting elements of reconfigurable intelligent surfaces (RISs) need to be quantized into discrete values. This letter aims to unveil the minimum number of phase quantization levels $L$ required to achieve the full diversity order in RIS-assisted wireless communication systems. With the aid of an upper bound on the outage probability, we first prove that the full diversity order is achievable provided that $L$ is no less than three. If $L=2$, on the other hand, we prove with the aid of a lower bound on the outage probability that the achievable diversity order cannot exceed $(N+1)/2$, where $N$ is the number of reflecting elements. Therefore, the minimum value of $L$ required to achieve the full diversity order is $L=3$. Simulation results verify the theoretical analysis and the impact of the number of phase quantization levels on RIS-assisted communication systems.
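The quantization mechanism at issue can be sketched directly: each element's ideal co-phasing angle is rounded to the nearest of $L$ uniformly spaced levels, and the combined gain with quantized phases can never exceed the continuous-phase bound $\sum_n |g_n|$. This is only an illustration of the setup, not the outage/diversity analysis itself; the element count and channel model are assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)
N = 32  # number of reflecting elements (illustrative)

# Cascaded channel coefficients; ideal co-phasing aligns all their phases.
g = (rng.standard_normal(N) + 1j * rng.standard_normal(N)) / np.sqrt(2)

def quantized_gain(g, L):
    """Combined gain when each phase shift is rounded to the nearest of
    L uniformly spaced levels 2*pi*l/L."""
    ideal = -np.angle(g)  # phase that would perfectly co-phase element n
    levels = 2 * np.pi * np.round(ideal * L / (2 * np.pi)) / L
    return np.abs(np.sum(g * np.exp(1j * levels)))

g_cont = np.sum(np.abs(g))   # continuous-phase upper bound on the gain
g2, g3 = quantized_gain(g, 2), quantized_gain(g, 3)
print(g2, g3, g_cont)        # quantized gains never exceed the bound
```

With $L=2$ the residual phase error per element is up to $\pi/2$, versus $\pi/3$ for $L=3$; the letter shows this gap is exactly what separates the full diversity order from the $(N+1)/2$ ceiling.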
We develop a method for the accurate reconstruction of non-bandlimited finite rate of innovation signals on the sphere. For signals consisting of a finite number of Dirac functions on the sphere, we develop an annihilating filter based method for the accurate recovery of the parameters of the Dirac functions using a finite number of observations of the bandlimited signal. In comparison to existing techniques, the proposed method enables more accurate reconstruction, primarily due to better conditioning of the systems involved in the recovery of the parameters. For the recovery of $K$ Diracs on the sphere, the proposed method requires samples of the signal bandlimited in the spherical harmonic~(SH) domain at SH degree equal to or greater than $K + \sqrt{K + \frac{1}{4}} - \frac{1}{2}$. In comparison to the existing state-of-the-art technique, the required bandlimit, and consequently the number of samples, of the proposed method is the same or smaller. We also conduct numerical experiments demonstrating that the proposed technique is more accurate than existing methods by a factor of $10^{7}$ or more for $2 \le K \le 20$.
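The spherical construction is beyond a short snippet, but the classical 1-D annihilating-filter idea it builds on can be sketched with a noiseless toy example: the Fourier samples of a stream of Diracs are annihilated by a filter whose polynomial roots encode the Dirac locations. Locations, amplitudes, and sample counts below are illustrative assumptions.

```python
import numpy as np

# Ground truth: K Diracs on [0,1) with amplitudes a_k (toy example).
t_true = np.array([0.21, 0.63])
a_true = np.array([1.0, 0.5])
K = len(t_true)

# Fourier samples s[m] = sum_k a_k u_k^m with u_k = exp(-2j*pi*t_k).
m = np.arange(2 * K + 1)
u = np.exp(-2j * np.pi * t_true)
s = (u[None, :] ** m[:, None]) @ a_true

# Annihilating filter h of length K+1: sum_l h[l] s[m-l] = 0 for all m.
# Build the Toeplitz system and take its numerical null space via SVD.
T = np.array([[s[K + i - l] for l in range(K + 1)] for i in range(K + 1)])
_, _, Vh = np.linalg.svd(T)
h = Vh[-1].conj()   # null vector = filter coefficients (up to scale)

# Roots of the filter polynomial give u_k, hence the Dirac locations.
roots = np.roots(h)
t_rec = np.sort(np.mod(-np.angle(roots) / (2 * np.pi), 1.0))
print(t_rec)  # ~ [0.21, 0.63]
```

The paper's contribution is an analogous system in the spherical harmonic domain whose better conditioning drives the reported accuracy gains.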
Taylor's law quantifies the scaling properties of the fluctuations of the number of innovations occurring in open systems. Urn-based modelling schemes have already proven effective in modelling this complex behaviour. Here, we present analytical estimates of Taylor's law exponents in such models by leveraging their representation in terms of triangular urn models. We also highlight the correspondence of these models with Poisson-Dirichlet processes and demonstrate how a non-trivial Taylor's law exponent is a kind of universal feature of systems related to human activities. We base this result on the analysis of four collections of data generated by human activity: (i) written language (from a Gutenberg corpus); (ii) an online music website (Last.fm); (iii) Twitter hashtags; (iv) an online collaborative tagging system (Del.icio.us). While the Taylor's law observed in the last two datasets agrees with the plain model predictions, we need to introduce a generalization to fully characterize the behaviour of the first two datasets, where temporal correlations are possibly more relevant. We suggest that Taylor's law is a fundamental complement to Zipf's and Heaps' laws in unveiling the complex dynamical processes underlying the evolution of systems featuring innovation.
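A minimal sketch of the measurement behind Taylor's law, using the simplest innovation urn (a Hoppe-type urn, where a brand-new colour appears with probability $\theta/(\theta+t)$ at step $t$): simulate many realizations, and fit the variance-versus-mean scaling of the innovation count on a log-log scale. This plain urn yields an exponent near 1 at large times (somewhat above 1 at the finite times simulated here); the triangular urns analyzed in the paper are what produce genuinely non-trivial exponents. All parameters are illustrative.

```python
import numpy as np

rng = np.random.default_rng(3)
theta, T, R = 2.0, 2000, 300   # innovation rate, steps, realizations

# Hoppe urn: at step i a brand-new colour appears w.p. theta/(theta+i),
# so the number of innovations D(t) is a sum of independent Bernoullis.
p_new = theta / (theta + np.arange(T))
innov = rng.random((R, T)) < p_new   # R independent realizations
D = np.cumsum(innov, axis=1)         # D[r, t] = innovations up to time t

ts = np.array([100, 200, 500, 1000, 2000]) - 1
mean, var = D[:, ts].mean(axis=0), D[:, ts].var(axis=0)

# Taylor's law: var ~ mean^beta; fit beta on a log-log scale.
beta = np.polyfit(np.log(mean), np.log(var), 1)[0]
print(beta)
```

The same mean/variance fit, applied to the empirical innovation counts of the four datasets, is what the abstract compares against the analytical urn-model exponents.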
Denoising a stationary process $(X_i)_{i \in \mathbb{Z}}$ corrupted by additive white Gaussian noise is a classic and fundamental problem in information theory and statistical signal processing. Despite considerable progress in designing efficient denoising algorithms, for general analog sources, theoretically founded and computationally efficient methods are yet to be found. For instance, in denoising $X^n$ corrupted by noise $Z^n$ as $Y^n=X^n+Z^n$, given the full distribution of $X^n$, a minimum mean square error (MMSE) denoiser needs to compute $E[X^n|Y^n]$. However, for general sources, computing $E[X^n|Y^n]$ is computationally very challenging, if not infeasible. In this paper, starting from a Bayesian setup in which the source distribution is fully known, a novel denoising method, namely the quantized maximum a posteriori (Q-MAP) denoiser, is proposed, and its asymptotic performance in the high signal-to-noise ratio regime is analyzed. Both for memoryless sources and for structured first-order Markov sources, it is shown that, asymptotically, as $\sigma$ converges to zero, $\frac{1}{\sigma^2}E[(X_i-\hat{X}^{\text{Q-MAP}}_i)^2]$ achieved by the Q-MAP denoiser converges to the information dimension of the source. For the studied memoryless sources, this limit is known to be optimal. A key advantage of the Q-MAP denoiser is that, unlike an MMSE denoiser, it highlights the key properties of the source distribution that are to be used in denoising. This property dramatically reduces the computational complexity of approximating the Q-MAP solution. Additionally, it naturally leads to a learning-based denoiser. Initial simulation results, using the ImageNet database for training, explore the performance of such a learning-based denoiser in image denoising.
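A scalar caricature of the quantized-MAP idea (not the paper's construction, which handles sequences and Markov structure): for a memoryless Bernoulli-Gaussian source, maximize the log prior mass of each quantization bin minus the Gaussian fidelity term over a reconstruction grid. The normalized MSE should shrink toward the source's information dimension (here 0.1) as $\sigma \to 0$; at the finite $\sigma$ used below it remains somewhat above that limit. Grid spacing, noise level, and sparsity are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(4)
n, sigma, delta = 5_000, 0.05, 0.01
p0 = 0.9   # mass of the atom at zero -> information dimension 1 - p0 = 0.1

# Bernoulli-Gaussian source and its noisy observation.
x = np.where(rng.random(n) < p0, 0.0, rng.standard_normal(n))
y = x + sigma * rng.standard_normal(n)

# Reconstruction grid and the (approximate) prior mass of each bin:
# Gaussian density times bin width, plus the atom in the bin containing 0.
grid = np.arange(-5, 5 + delta, delta)
log_prior = np.log((1 - p0) * np.exp(-grid**2 / 2) / np.sqrt(2 * np.pi) * delta
                   + p0 * (np.abs(grid) < delta / 2))

# Quantized-MAP rule: argmax over the grid of log prior minus fidelity.
scores = log_prior[None, :] - (y[:, None] - grid[None, :])**2 / (2 * sigma**2)
x_hat = grid[np.argmax(scores, axis=1)]

nmse = np.mean((x - x_hat)**2) / sigma**2
print(nmse)
```

The point the abstract makes is visible even in this caricature: the denoiser only ever consults the quantized prior masses, a far simpler object than the full conditional expectation an MMSE denoiser would need.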