In this work, we present a unified framework for the performance analysis of dual-hop underwater wireless optical communication (UWOC) systems with amplify-and-forward fixed-gain relays in the presence of air bubbles and temperature gradients. Operating under either heterodyne detection or intensity modulation with direct detection, the UWOC channel is modeled by the unified mixture Exponential-Generalized Gamma distribution, which we proposed based on an experiment conducted in an indoor laboratory setup and which has been shown to provide an excellent fit to the measured data under the considered lab channel scenarios. More specifically, we derive the cumulative distribution function (CDF) and the probability density function of the end-to-end signal-to-noise ratio (SNR) in exact closed form in terms of the bivariate Fox's H function. Based on this CDF expression, we present novel results for fundamental performance metrics such as the outage probability, the average bit-error rate (BER) for various modulation schemes, and the ergodic capacity. Additionally, very tight asymptotic results for the outage probability and the average BER at high SNR are obtained in terms of simple functions. Furthermore, we demonstrate that the dual-hop UWOC system can effectively mitigate both the limited transmission range and the turbulence induced by temperature gradients and air bubbles, as compared to a single UWOC link. All the results are verified via computer-based Monte-Carlo simulations.
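As a rough illustration of the Monte-Carlo verification step, the sketch below draws fading samples from a mixture Exponential-Generalized Gamma (EGG) model and estimates the outage probability of a dual-hop fixed-gain AF link. The mixture weight `w`, the exponential scale `lam`, the Generalized Gamma parameters `(a, b, c)`, the fixed-gain constant `C` and the average SNRs are illustrative placeholders, not the experimentally fitted values from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def egg_samples(n, w=0.2, lam=0.5, a=2.0, b=1.0, c=1.5):
    """Draw n fading samples from a mixture Exponential-Generalized Gamma
    model: Exp(lam) with probability w, GG(a, b, c) otherwise."""
    mask = rng.random(n) < w
    exp_part = rng.exponential(lam, n)
    gg_part = b * rng.gamma(a, 1.0, n) ** (1.0 / c)  # GG as a powered gamma variate
    return np.where(mask, exp_part, gg_part)

def outage_dual_hop(gamma_th, mu1=20.0, mu2=20.0, C=1.0, r=2, n=10**6):
    """Outage probability of a dual-hop fixed-gain AF link; r = 1 for
    heterodyne detection, r = 2 for IM/DD; mu1, mu2 are average SNRs."""
    I1, I2 = egg_samples(n), egg_samples(n)
    g1 = mu1 * I1**r / np.mean(I1**r)   # per-hop SNR, scaled so E[g1] = mu1
    g2 = mu2 * I2**r / np.mean(I2**r)
    g_e2e = g1 * g2 / (g2 + C)          # end-to-end SNR of a fixed-gain relay
    return np.mean(g_e2e < gamma_th)

print(outage_dual_hop(gamma_th=10**0.5))  # outage at a 5 dB SNR threshold
```

Setting `r=1` switches the parameterization from IM/DD to heterodyne detection.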
The use of quadratic discriminant analysis (QDA) or its regularized version (R-QDA) for classification is often not recommended, due to its well-acknowledged high sensitivity to the estimation noise of the covariance matrix. This becomes all the more the case in unbalanced data settings, for which it has been found that R-QDA becomes equivalent to the classifier that assigns all observations to the same class. In this paper, we propose an improved R-QDA that is based on the use of two regularization parameters and a modified bias, properly chosen to avoid the inappropriate behavior of R-QDA in unbalanced settings and to ensure the best possible classification performance. The design of the proposed classifier builds on a refined asymptotic analysis of its performance when the number of samples and that of features grow large simultaneously, which makes it possible to cope efficiently with the high dimensionality frequently met within the big data paradigm. The performance of the proposed classifier is assessed on both real and synthetic data sets and is shown to be much better than what one would expect from a traditional R-QDA.
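A minimal sketch of the idea for a binary problem, assuming nothing beyond what the abstract states: each class covariance receives its own ridge parameter (`g0`, `g1`), and a scalar bias `omega` is added to the discriminant to counteract the collapse onto a single class under imbalance. The actual parameter and bias choices in the paper follow from the asymptotic analysis and are not reproduced here.

```python
import numpy as np

def rqda_fit(X0, X1, g0=0.1, g1=0.1):
    """Fit per-class means and doubly regularized covariances (binary R-QDA)."""
    p = X0.shape[1]
    stats = []
    for X, g in ((X0, g0), (X1, g1)):
        mu = X.mean(axis=0)
        S = np.cov(X, rowvar=False) + g * np.eye(p)  # class-specific ridge
        _, logdet = np.linalg.slogdet(S)
        stats.append((mu, np.linalg.inv(S), logdet))
    return stats

def rqda_predict(x, stats, omega=0.0):
    """Quadratic score difference with a corrective bias omega for imbalance."""
    (mu0, P0, ld0), (mu1, P1, ld1) = stats
    d0 = (x - mu0) @ P0 @ (x - mu0) + ld0
    d1 = (x - mu1) @ P1 @ (x - mu1) + ld1
    return int(d0 - d1 + omega > 0)  # 1 if class 1 wins
```

With `g0 = g1` and `omega = 0` this reduces to plain R-QDA.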
This paper carries out a large dimensional analysis of a variation of kernel ridge regression that we call centered kernel ridge regression (CKRR), also known in the literature as kernel ridge regression with offset. This modified technique is obtained by accounting for the bias in the regression problem, resulting in the classical kernel ridge regression but with centered kernels. The analysis is carried out under the assumption that the data are drawn from a Gaussian distribution and heavily relies on tools from random matrix theory (RMT). In the regime in which the data dimension and the training size grow infinitely large with fixed ratio, and under some mild assumptions controlling the data statistics, we show that both the empirical and the prediction risks converge to deterministic quantities that describe in closed form the performance of CKRR in terms of the data statistics and dimensions. Inspired by this theoretical result, we subsequently build a consistent estimator of the prediction risk based on the training data, which allows the design parameters to be optimally tuned. A key insight of the proposed analysis is the fact that asymptotically a large class of kernels achieve the same minimum prediction risk. This insight is validated with both synthetic and real data.
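Centering is the only change relative to standard kernel ridge regression, so CKRR is easy to sketch. The following minimal implementation assumes a Gaussian (RBF) kernel and a ridge penalty scaling as `n * lam`; both choices are illustrative, not prescribed by the paper.

```python
import numpy as np

def rbf(A, B, sigma=1.0):
    """Gaussian (RBF) kernel matrix between the rows of A and the rows of B."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * sigma**2))

def ckrr_fit(X, y, lam=1e-2, sigma=1.0):
    n = len(y)
    K = rbf(X, X, sigma)
    P = np.eye(n) - np.ones((n, n)) / n       # centering projector
    Kc = P @ K @ P                            # centered kernel matrix
    alpha = np.linalg.solve(Kc + n * lam * np.eye(n), y - y.mean())
    return X, K, alpha, y.mean(), sigma

def ckrr_predict(model, Xt):
    X, K, alpha, ybar, sigma = model
    kx = rbf(Xt, X, sigma)                    # test-train kernel evaluations
    kc = kx - kx.mean(axis=1, keepdims=True) - K.mean(axis=0) + K.mean()
    return ybar + kc @ alpha                  # intercept handled by centering
```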
This article carries out a large dimensional analysis of standard regularized discriminant analysis classifiers designed on the assumption that data arise from a Gaussian mixture model with different means and covariances. The analysis relies on fundamental results from random matrix theory (RMT) when both the number of features and the cardinality of the training data within each class grow large at the same pace. Under mild assumptions, we show that the asymptotic classification error approaches a deterministic quantity that depends only on the means and covariances associated with each class as well as the problem dimensions. Such a result permits a better understanding of the performance of regularized discriminant analysis, in practical large but finite dimensions, and can be used to determine and pre-estimate the optimal regularization parameter that minimizes the misclassification error probability. Despite being theoretically valid only for Gaussian data, our findings are shown to yield high accuracy in predicting the performance achieved with real data sets drawn from the popular USPS database, thereby making an interesting connection between theory and practice.
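The closed-form asymptotic error itself is paper-specific, but the tuning workflow it enables can be illustrated: sweep the regularization parameter of a regularized discriminant rule on Gaussian-mixture data and keep the minimizer. The sketch below uses a held-out Monte-Carlo error estimate as a stand-in for the paper's deterministic equivalent, and a simple regularized linear discriminant as the classifier; both are assumptions made for illustration only.

```python
import numpy as np

rng = np.random.default_rng(1)

def rda_error(gamma, mu0, mu1, S0, S1, n=500, trials=20):
    """Held-out misclassification rate of a regularized discriminant rule
    on Gaussian-mixture data (stand-in for the asymptotic formula)."""
    p = len(mu0)
    err = 0.0
    for _ in range(trials):
        X0 = rng.multivariate_normal(mu0, S0, n)
        X1 = rng.multivariate_normal(mu1, S1, n)
        C = (np.cov(X0, rowvar=False) + np.cov(X1, rowvar=False)) / 2
        P = np.linalg.inv(C + gamma * np.eye(p))    # regularized precision
        w, b = P @ (X1.mean(0) - X0.mean(0)), (X0.mean(0) + X1.mean(0)) / 2
        Xt0 = rng.multivariate_normal(mu0, S0, n)   # fresh test data
        Xt1 = rng.multivariate_normal(mu1, S1, n)
        err += np.mean((Xt0 - b) @ w > 0) + np.mean((Xt1 - b) @ w < 0)
    return err / (2 * trials)

p = 50
mu0, mu1 = np.zeros(p), 0.3 * np.ones(p)
S0, S1 = np.eye(p), 1.5 * np.eye(p)
gammas = [0.01, 0.1, 1.0, 10.0]
print("selected regularization:",
      min(gammas, key=lambda g: rda_error(g, mu0, mu1, S0, S1)))
```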
This letter focuses on the computation of the positive moments of one-sided correlated random Gram matrices. Closed-form expressions for the moments can be obtained easily, but their numerical evaluation is prone to numerical instability, especially in high-dimensional settings. This letter provides a numerically stable method that efficiently computes the positive moments in closed form. The developed expressions are more accurate and can lead to higher accuracy levels when fed to moment-based approaches. As an application, we show how the obtained moments can be used to approximate the marginal distribution of the eigenvalues of random Gram matrices.
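The closed-form expressions are the letter's contribution; the sketch below merely shows the object being computed, namely a Monte-Carlo estimate of the normalized moments (1/N) E[tr (W W^H)^p] of a Gram matrix with one-sided correlation, against which any closed-form evaluation can be sanity-checked. The exponential correlation profile is an illustrative assumption.

```python
import numpy as np

rng = np.random.default_rng(2)

def moments_mc(Sigma, n, p_max=4, trials=2000):
    """Monte-Carlo estimate of (1/N) E[tr (W W^H)^p] for p = 1..p_max, with
    W = L X, Sigma = L L^H, and X having i.i.d. CN(0, 1/n) entries, so the
    Gram matrix W W^H carries one-sided correlation Sigma."""
    N = Sigma.shape[0]
    L = np.linalg.cholesky(Sigma)
    acc = np.zeros(p_max)
    for _ in range(trials):
        X = rng.standard_normal((N, n)) + 1j * rng.standard_normal((N, n))
        W = L @ X / np.sqrt(2 * n)
        ev = np.linalg.eigvalsh(W @ W.conj().T)   # Gram-matrix eigenvalues
        acc += np.array([np.sum(ev**p) for p in range(1, p_max + 1)]) / N
    return acc / trials

N, n = 16, 32
Sigma = np.array([[0.6 ** abs(i - j) for j in range(N)] for i in range(N)])
print(moments_mc(Sigma, n))
```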
This paper considers the problem of selecting a set of $k$ measurements from $n$ available sensor observations. The selected measurements should minimize a certain error function assessing the error in estimating a certain $m$-dimensional parameter vector. The exhaustive search inspecting each of the $\binom{n}{k}$ possible choices would require a very high computational complexity and as such is not practical for large $n$ and $k$. Alternative methods with low complexity have recently been investigated, but their main drawbacks are that 1) they require perfect knowledge of the measurement matrix and 2) they need to be applied at the pace of change of the measurement matrix. To overcome these issues, we consider the asymptotic regime in which $k$, $n$ and $m$ grow large at the same pace. Tools from random matrix theory are then used to approximate in closed form the most commonly used error measures. The asymptotic approximations are then leveraged to properly select $k$ measurements exhibiting low values for the asymptotic error measures. Two heuristic algorithms are proposed: the first one merely consists in applying a convex optimization relaxation to the asymptotic error measure; the second is a low-complexity greedy algorithm that attempts to find a sufficiently good solution to the original minimization problem. The greedy algorithm can be applied to both the exact and the asymptotic error measures and can thus be implemented in blind and channel-aware fashions. We present two potential applications where the proposed algorithms can be used, namely antenna selection for uplink transmissions in large-scale multi-user systems and sensor selection for wireless sensor networks. Numerical results are also presented and confirm the efficiency of the proposed blind methods in approaching the performance of channel-aware algorithms.
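A minimal sketch of the channel-aware greedy variant: starting from the empty set, repeatedly add the measurement row that most reduces tr((H_S^T H_S)^{-1}), i.e. the BLUE mean-square error under white noise. The small diagonal loading `eps` (needed while fewer than $m$ rows are selected) is an implementation assumption; the blind variant would replace the exact criterion with its random-matrix approximation, which is not reproduced here.

```python
import numpy as np

def greedy_select(H, k, eps=1e-6):
    """Greedily pick k of the n rows of the n x m measurement matrix H,
    each time adding the row minimizing tr((H_S^T H_S + eps I)^{-1})."""
    n, m = H.shape
    chosen, rest = [], list(range(n))
    for _ in range(k):
        def cost(i):
            Hs = H[chosen + [i]]
            return np.trace(np.linalg.inv(Hs.T @ Hs + eps * np.eye(m)))
        best = min(rest, key=cost)
        chosen.append(best)
        rest.remove(best)
    return chosen

rng = np.random.default_rng(3)
H = rng.standard_normal((60, 8))   # 60 candidate sensors, 8-dim parameter
print("selected sensors:", greedy_select(H, k=12))
```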
This paper analyzes the statistical properties of the signal-to-interference-plus-noise ratio (SINR) at the output of Capon's minimum variance distortionless response (MVDR) beamformer when operating over impulsive noise. In particular, we consider the supervised case in which the receiver employs the regularized Tyler estimator (RTE) in order to estimate the covariance matrix of the interference-plus-noise process using $n$ observations of size $N \times 1$. The choice of the RTE is motivated by its resilience to the presence of outliers and by its regularization parameter, which guarantees a good conditioning of the covariance estimate. Of particular interest in this paper is the derivation of the second-order statistics of the SINR. To achieve this goal, we consider two different approaches. The first one is based on the classical regime, referred to as the $n$-large regime, in which $N$ is assumed to be fixed while $n$ grows to infinity. The second approach is built upon recent results developed within the framework of random matrix theory and assumes that $N$ and $n$ grow large together. Numerical results are provided in order to compare the accuracy of the two regimes under different settings.
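For concreteness, the sketch below implements one common fixed-point form of the regularized Tyler estimator (shrinkage toward the identity with trace normalization) and the resulting MVDR weights; the specific normalization, fixed iteration count and shrinkage value are illustrative assumptions rather than the paper's exact construction.

```python
import numpy as np

def rte(X, rho=0.3, n_iter=50):
    """Regularized Tyler estimator via a standard fixed-point iteration;
    X is N x n (n snapshots) and rho in (0, 1] shrinks toward the identity."""
    N, n = X.shape
    S = np.eye(N, dtype=complex)
    for _ in range(n_iter):
        # quadratic forms x_i^H S^{-1} x_i for every snapshot
        q = np.real(np.einsum('in,ij,jn->n', X.conj(), np.linalg.inv(S), X))
        S = (1 - rho) * (N / n) * (X / q) @ X.conj().T + rho * np.eye(N)
        S = N * S / np.trace(S).real   # normalize so that tr(S) = N
    return S

def mvdr_weights(S, s):
    """Capon/MVDR weights for steering vector s given covariance estimate S."""
    w = np.linalg.solve(S, s)
    return w / (s.conj() @ w)
```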
This paper addresses the development of analytical tools for the computation of the moments of random Gram matrices with one-sided correlation. Such a question is mainly driven by applications in signal processing and wireless communications wherein such matrices naturally arise. In particular, we derive closed-form expressions for the inverse moments and show that the obtained results can help approximate several performance metrics, such as the average estimation error corresponding to the best linear unbiased estimator (BLUE) and the linear minimum mean square error (LMMSE) estimator, as well as other loss functions used to measure the accuracy of covariance matrix estimates.
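As a point of reference for such closed-form results, the sketch below estimates by Monte Carlo the quantity they replace in the BLUE application: the average estimation error E[tr((H^H H)^{-1})] of a channel with one-sided column correlation, i.e. N times the first inverse moment of the Gram matrix H^H H. The exponential correlation profile and unit noise variance are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(4)

def blue_error_mc(Sigma, n, trials=2000):
    """Monte-Carlo estimate of E[tr((H^H H)^{-1})] for an n x N channel
    H = X L^H with Sigma = L L^H and i.i.d. CN(0, 1) entries in X, so that
    H^H H is a one-sided correlated Gram matrix (n > N keeps it invertible)."""
    N = Sigma.shape[0]
    L = np.linalg.cholesky(Sigma)
    total = 0.0
    for _ in range(trials):
        X = (rng.standard_normal((n, N))
             + 1j * rng.standard_normal((n, N))) / np.sqrt(2)
        H = X @ L.conj().T
        total += np.trace(np.linalg.inv(H.conj().T @ H)).real
    return total / trials

N, n = 8, 32
Sigma = np.array([[0.5 ** abs(i - j) for j in range(N)] for i in range(N)])
print(blue_error_mc(Sigma, n))   # N times the first inverse moment
```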