
Classification and Reconstruction of High-Dimensional Signals from Low-Dimensional Features in the Presence of Side Information

Posted by: Francesco Renna
Publication date: 2014
Research field: Information Engineering
Language: English





This paper offers a characterization of fundamental limits on the classification and reconstruction of high-dimensional signals from low-dimensional features, in the presence of side information. We consider a scenario where a decoder has access both to linear features of the signal of interest and to linear features of the side information signal; while the side information may be in a compressed form, the objective is recovery or classification of the primary signal, not the side information. The signal of interest and the side information are each assumed to have (distinct) latent discrete labels; conditioned on these two labels, the signal of interest and side information are drawn from a multivariate Gaussian distribution. With joint probabilities on the latent labels, the overall signal-(side information) representation is defined by a Gaussian mixture model. We then provide sharp sufficient and/or necessary conditions for the misclassification probability and the reconstruction error to approach zero when the covariance matrices of the Gaussians are nearly low-rank. These conditions, which are reminiscent of the well-known Slepian-Wolf and Wyner-Ziv conditions, are a function of the number of linear features extracted from the signal of interest, the number of linear features extracted from the side information signal, and the geometry of these signals and their interplay. Moreover, assuming that the signal of interest and the side information obey such an approximately low-rank model, we derive expansions of the reconstruction error as a function of the deviation from an exactly low-rank model; such expansions also allow identification of operational regimes where the impact of side information on signal reconstruction is most relevant. Our framework, which offers a principled mechanism for integrating side information into high-dimensional data problems, is also tested in the context of imaging applications.
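As a rough illustration of the measurement model, the sketch below draws a signal/side-information pair from a label-conditioned Gaussian with a near-low-rank covariance, observes linear features of each, picks the label by MAP over the Gaussian likelihoods of the features, and reconstructs the signal by the conditional mean. All dimensions, feature maps, and the collapsing of the two latent labels into a single joint label are illustrative assumptions, not the paper's construction.

```python
import numpy as np

rng = np.random.default_rng(0)
n, m, ma, mb = 32, 32, 8, 4   # signal dim, side-info dim, feature counts (illustrative)

# Per label, a joint near-low-rank covariance on the stacked vector (x, y).
def near_low_rank_cov(rank, dim):
    U = rng.standard_normal((dim, rank))
    return U @ U.T + 1e-2 * np.eye(dim)   # low-rank part plus a small full-rank deviation

labels, priors = [0, 1], np.array([0.5, 0.5])
covs = [near_low_rank_cov(4, n + m) for _ in labels]

A = rng.standard_normal((ma, n)) / np.sqrt(ma)   # linear features of the signal
B = rng.standard_normal((mb, m)) / np.sqrt(mb)   # linear features of the side information
F = np.block([[A, np.zeros((ma, m))],
              [np.zeros((mb, n)), B]])           # decoder sees z = (A x, B y)

# Draw one signal/side-information pair and measure it.
c = rng.choice(labels, p=priors)
s = rng.multivariate_normal(np.zeros(n + m), covs[c])
z = F @ s

# MAP label from the Gaussian likelihood of z under each label.
def loglik(z, cov):
    S = F @ cov @ F.T
    _, logdet = np.linalg.slogdet(S)
    return -0.5 * (z @ np.linalg.solve(S, z) + logdet)

c_hat = max(labels, key=lambda k: loglik(z, covs[k]) + np.log(priors[k]))

# Conditional-mean reconstruction of x given z and the decoded label.
cov = covs[c_hat]
x_hat = (cov @ F.T @ np.linalg.solve(F @ cov @ F.T, z))[:n]
print("label:", c, "->", c_hat, " MSE:", np.mean((x_hat - s[:n]) ** 2))
```

Sweeping the feature counts ma and mb in such a simulation is one empirical way to see the phase-transition behaviour that the paper characterizes analytically.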


Read also

In standard rounding, we want to map each value $X$ in a large continuous space (e.g., $R$) to a nearby point $P$ from a discrete subset (e.g., $Z$). This process seems to be inherently discontinuous in the sense that two consecutive noisy measurements $X_1$ and $X_2$ of the same value may be extremely close to each other and yet they can be rounded to different points $P_1 \neq P_2$, which is undesirable in many applications. In this paper we show how to make the rounding process perfectly continuous in the sense that it maps any pair of sufficiently close measurements to the same point. We call such a process consistent rounding, and make it possible by allowing a small amount of information about the first measurement $X_1$ to be unidirectionally communicated to and used by the rounding process of $X_2$. The fault tolerance of a consistent rounding scheme is defined by the maximum distance between pairs of measurements which guarantees that they are always rounded to the same point, and our goal is to study the possible tradeoffs between the amount of information provided and the achievable fault tolerance for various types of spaces. When the measurements $X_i$ are arbitrary vectors in $R^d$, we show that communicating $\log_2(d+1)$ bits of information is both sufficient and necessary (in the worst case) in order to achieve consistent rounding for some positive fault tolerance, and when $d=3$ we obtain a tight upper and lower asymptotic bound of $(0.561+o(1))k^{1/3}$ on the achievable fault tolerance when we reveal $\log_2(k)$ bits of information about how $X_1$ was rounded. We analyze the problem by considering the possible colored tilings of the space with $k$ available colors, and obtain our upper and lower bounds with a variety of mathematical techniques including isoperimetric inequalities, the Brunn-Minkowski theorem, sphere packing bounds, and Čech cohomology.
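To make the idea concrete, here is a minimal sketch of the one-dimensional case, where $\log_2(d+1) = 1$ bit suffices: the first rounder communicates the parity of the integer it rounded to, and the second rounds to the nearest integer of that parity. This is an illustrative scheme in the spirit of the abstract, not necessarily the paper's construction.

```python
import math

def round_first(x1):
    """Round X1 to the nearest integer; the communicated bit is its parity."""
    p1 = math.floor(x1 + 0.5)
    return p1, p1 % 2                  # (rounded point, 1 bit of side information)

def round_second(x2, parity):
    """Round X2 to the nearest integer having the communicated parity."""
    lo = math.floor(x2) - 1
    candidates = [v for v in range(lo, lo + 4) if v % 2 == parity]
    return min(candidates, key=lambda v: abs(v - x2))

# Two measurements within 1/2 of each other always land on the same point:
p1, bit = round_first(3.6)   # rounds to 4, communicates parity 0
p2 = round_second(3.2, bit)  # alone, 3.2 would round to 3; the parity bit forces 4
assert p1 == p2 == 4
```

Same-parity integers are 2 apart, so whenever $|X_1 - X_2| < 1/2$ the point that $X_1$ was rounded to is the unique integer of the right parity within distance 1 of $X_2$, giving a positive fault tolerance with a single communicated bit.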
Zhongxing Sun, Wei Cui, 2021
This paper is concerned with the problem of recovering a structured signal from a relatively small number of corrupted random measurements. Sharp phase transitions have been numerically observed in practice when different convex programming procedures are used to solve this problem. This paper is devoted to presenting theoretical explanations for these phenomena by employing some basic tools from Gaussian process theory. Specifically, we identify the precise locations of the phase transitions for both constrained and penalized recovery procedures. Our theoretical results show that these phase transitions are determined by some geometric measures of structure, e.g., the spherical Gaussian width of a tangent cone and the Gaussian (squared) distance to a scaled subdifferential. By utilizing the established phase transition theory, we further investigate the relationship between these two kinds of recovery procedures, which also reveals an optimal strategy (in the sense of Lagrange theory) for choosing the tradeoff parameter in the penalized recovery procedure. Numerical experiments are provided to verify our theoretical results.
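The geometric measures named in the abstract can be estimated numerically. The hedged sketch below (the sparse-recovery setting and all parameters are our illustrative choices) estimates the phase-transition location for constrained $\ell_1$ recovery of a $k$-sparse vector by Monte Carlo, using the standard statistical-dimension proxy: the minimum over scalings $t$ of the expected squared Gaussian distance to $t$ times the subdifferential of the $\ell_1$ norm.

```python
import numpy as np

def statistical_dimension_l1(n, k, trials=2000, seed=0):
    """Monte Carlo estimate of min_t E[dist^2(g, t * subdiff ||x||_1)] at a k-sparse x.

    This quantity marks roughly where the phase transition for constrained
    l1 recovery from Gaussian measurements occurs.
    """
    rng = np.random.default_rng(seed)
    G = rng.standard_normal((trials, n))       # i.i.d. Gaussian probes
    best = np.inf
    for t in np.linspace(0.0, 5.0, 101):       # scan the subdifferential scaling
        # On the support (WLOG positive signs) the scaled subdifferential is the
        # single point t; off the support it is the box [-t, t], so the distance
        # there is a soft-thresholding of |g|.
        d2 = (np.sum((G[:, :k] - t) ** 2, axis=1)
              + np.sum(np.maximum(np.abs(G[:, k:]) - t, 0.0) ** 2, axis=1))
        best = min(best, d2.mean())
    return best

n, k = 200, 20
print(f"transition at roughly {statistical_dimension_l1(n, k):.0f} of {n} measurements")
```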
Consider the problem of learning the drift coefficient of a $p$-dimensional stochastic differential equation from a sample path of length $T$. We assume that the drift is parametrized by a high-dimensional vector, and study the support recovery problem when both $p$ and $T$ can tend to infinity. In particular, we prove a general lower bound on the sample-complexity $T$ by using a characterization of mutual information as a time integral of conditional variance, due to Kadota, Zakai, and Ziv. For linear stochastic differential equations, the drift coefficient is parametrized by a $p \times p$ matrix which describes which degrees of freedom interact under the dynamics. In this case, we analyze an $\ell_1$-regularized least squares estimator and prove an upper bound on $T$ that nearly matches the lower bound on specific classes of sparse matrices.
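A minimal simulation of this linear setup, with an Euler-Maruyama path and a hand-rolled ISTA solver standing in for the paper's estimator; the drift matrix, discretization step, and regularization level are all illustrative choices, not the paper's.

```python
import numpy as np

rng = np.random.default_rng(1)
p, T, dt = 20, 200.0, 0.01
steps = int(T / dt)

# Sparse, stable drift: self-damping plus one coupling per coordinate.
A = -np.eye(p)
for i in range(p):
    A[i, (i + 1) % p] = 0.3

# Euler-Maruyama sample path of dX = A X dt + dW.
X = np.zeros((steps + 1, p))
for t in range(steps):
    X[t + 1] = X[t] + A @ X[t] * dt + np.sqrt(dt) * rng.standard_normal(p)

# l1-regularized least squares, one row of A at a time, via ISTA.
Phi, dY = X[:-1] * dt, np.diff(X, axis=0)        # dX_t ~ A X_t dt + dW_t
Gram, corr = Phi.T @ Phi, Phi.T @ dY
lam = 0.5 * dt * np.sqrt(T * np.log(p))          # illustrative level, not tuned
L = np.linalg.eigvalsh(Gram)[-1]                 # Lipschitz constant of the gradient
A_hat = np.zeros((p, p))
for i in range(p):
    a = np.zeros(p)
    for _ in range(300):
        a = a - (Gram @ a - corr[:, i]) / L      # gradient step
        a = np.sign(a) * np.maximum(np.abs(a) - lam / L, 0.0)   # soft threshold
    A_hat[i] = a

print("support accuracy:", np.mean((np.abs(A) > 1e-9) == (np.abs(A_hat) > 0.05)))
```

Growing $T$ (or $p$) in this toy and tracking the support accuracy gives an empirical counterpart to the sample-complexity scaling the abstract studies.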
We consider the problem of identifying universal low-dimensional features from high-dimensional data for inference tasks in settings involving learning. For such problems, we introduce natural notions of universality and we show a local equivalence among them. Our analysis is naturally expressed via information geometry, and represents a conceptually and computationally useful analysis. The development reveals the complementary roles of the singular value decomposition, Hirschfeld-Gebelein-Rényi maximal correlation, the canonical correlation and principal component analyses of Hotelling and Pearson, Tishby's information bottleneck, Wyner's common information, Ky Fan $k$-norms, and Breiman and Friedman's alternating conditional expectations algorithm. We further illustrate how this framework facilitates understanding and optimizing aspects of learning systems, including multinomial logistic (softmax) regression and the associated neural network architecture, matrix factorization methods for collaborative filtering and other applications, rank-constrained multivariate linear regression, and forms of semi-supervised learning.
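One concrete instance of the SVD connection mentioned above: for a discrete pair, the Hirschfeld-Gebelein-Rényi maximal correlation is the second singular value of the canonical dependence matrix. A short sketch (the example distribution is ours):

```python
import numpy as np

def maximal_correlation(P):
    """HGR maximal correlation of a discrete joint pmf P[x, y]: the second
    singular value of B = diag(Px)^(-1/2) @ P @ diag(Py)^(-1/2)."""
    Px, Py = P.sum(axis=1), P.sum(axis=0)
    B = P / np.sqrt(np.outer(Px, Py))
    s = np.linalg.svd(B, compute_uv=False)
    return s[1]               # s[0] = 1 corresponds to the constant functions

# A binary symmetric pair with crossover eps and uniform marginals.
eps = 0.1
P = np.array([[1 - eps, eps], [eps, 1 - eps]]) / 2.0
print(maximal_correlation(P))  # equals 1 - 2*eps = 0.8 in this case
```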
Arthur Cunha, Minh Do, 2009
The plenoptic function (Adelson and Bergen, 91) describes the visual information available to an observer at any point in space and time. Samples of the plenoptic function (POF) are seen in video and in general visual content, and represent large amounts of information. In this paper we propose a stochastic model to study the compression limits of the plenoptic function. In the proposed framework, we isolate the two fundamental sources of information in the POF: the one representing the camera motion and the other representing the information complexity of the reality being acquired and transmitted. The sources of information are combined, generating a stochastic process that we study in detail. We first propose a model for ensembles of realities that do not change over time. The proposed model is simple in that it enables us to derive precise coding bounds in the information-theoretic sense that are sharp in a number of cases of practical interest. For this simple case of static realities and camera motion, our results indicate that coding practice is in accordance with optimal coding from an information-theoretic standpoint. The model is further extended to account for visual realities that change over time. We derive bounds on the lossless and lossy information rates for this dynamic reality model, stating conditions under which the bounds are tight. Examples with synthetic sources suggest that in the presence of scene dynamics, simple hybrid coding using motion/displacement estimation with DPCM performs considerably suboptimally relative to the true rate-distortion bound.
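As a toy version of the hybrid motion/DPCM coding the abstract compares against, the sketch below models a static 1-D "reality" viewed through a sliding window: the encoder estimates the camera displacement against its previous reconstruction, predicts by shifting, and codes the quantized residual in closed loop. All parameters are illustrative, and np.roll wraps at the window edges, so the small edge mismatch is simply absorbed by the residual.

```python
import numpy as np

rng = np.random.default_rng(2)
world = rng.standard_normal(4096)          # a static 1-D "reality"
W, n_frames = 256, 50
pos = np.cumsum(rng.integers(-3, 4, size=n_frames))
pos -= pos.min()                           # keep the sliding window in bounds
frames = [world[q:q + W] for q in pos]     # what the moving camera sees

def encode(prev, cur, max_shift=8, q=0.05):
    """Estimate displacement against the previous reconstruction, predict by
    shifting, then quantize the prediction residual (closed-loop DPCM)."""
    shift = min(range(-max_shift, max_shift + 1),
                key=lambda d: np.sum((np.roll(prev, -d) - cur) ** 2))
    symbols = np.round((cur - np.roll(prev, -shift)) / q).astype(int)
    return shift, symbols

def decode(prev, shift, symbols, q=0.05):
    return np.roll(prev, -shift) + symbols * q

rec, nonzero = frames[0].copy(), 0         # assume the first frame is sent intact
for f in frames[1:]:
    shift, sym = encode(rec, f)
    rec = decode(rec, shift, sym)
    nonzero += np.count_nonzero(sym)       # crude proxy for the residual rate
print("nonzero residual symbols per frame:", nonzero / (n_frames - 1))
```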