Pairwise comparison matrices have received substantial attention in a variety of applications, especially in rank aggregation, the task of flattening items into a one-dimensional (and thus transitive) ranking. However, non-transitive preference cycles can arise in practice because making a decision often requires a complex evaluation of multiple factors. In some applications, it may be important to identify and preserve information about the inherent non-transitivity, either in the pairwise comparison data itself or in the latent feature space. In this work, we develop structured models for non-transitive pairwise comparison matrices that can be exploited to recover such matrices from incomplete noisy data and thus allow the detection of non-transitivity. Since individuals' tastes and items' latent features may change over time, we formulate time-varying pairwise comparison matrix recovery as a dynamic skew-symmetric matrix recovery problem by modeling changes in the low-rank factors of the pairwise comparison matrix. We provide theoretical guarantees for the recovery and numerically test the proposed theory on both synthetic and real-world data.
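To make the skew-symmetric low-rank structure concrete, the following sketch (in Python/NumPy) fits factors of a matrix P = UV^T - VU^T to noisy, partially observed entries by gradient descent; the dimensions, noise level, step size, and estimator are illustrative choices of ours, not the paper's actual algorithm or guarantees.

```python
# A minimal sketch (not the paper's algorithm): recovering a low-rank
# skew-symmetric pairwise comparison matrix from noisy, incomplete entries
# by gradient descent on the factors of P = U V^T - V U^T.
import numpy as np

rng = np.random.default_rng(0)
n, r = 60, 2                       # items, half the target rank (rank of P is 2r)

# Ground-truth skew-symmetric matrix built from latent factors.
U_true, V_true = rng.normal(size=(n, r)), rng.normal(size=(n, r))
P_true = U_true @ V_true.T - V_true @ U_true.T

# Observe a random subset of entries with additive Gaussian noise.
mask = rng.random((n, n)) < 0.4
Y = P_true + 0.05 * rng.normal(size=(n, n))

# Factored gradient descent on the observed squared error.
U, V = rng.normal(size=(n, r)), rng.normal(size=(n, r))
step = 0.005
for _ in range(3000):
    P = U @ V.T - V @ U.T
    R = mask * (P - Y)             # residual on observed entries only
    gU = R @ V - R.T @ V           # d/dU of 0.5 * ||R||_F^2
    gV = R.T @ U - R @ U           # d/dV of 0.5 * ||R||_F^2
    U, V = U - step * gU, V - step * gV

P_hat = U @ V.T - V @ U.T
print("relative error:", np.linalg.norm(P_hat - P_true) / np.linalg.norm(P_true))
```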
Compressed Sensing aims to capture attributes of a sparse signal using very few measurements. Candès and Tao showed that sparse reconstruction is possible if the sensing matrix acts as a near isometry on all $\boldsymbol{k}$-sparse signals. This property holds with overwhelming probability if the entries of the matrix are generated by an iid Gaussian or Bernoulli process. There has been significant recent interest in an alternative signal processing framework: exploiting deterministic sensing matrices that with overwhelming probability act as a near isometry on $\boldsymbol{k}$-sparse vectors with uniformly random support, a geometric condition called the Statistical Restricted Isometry Property, or StRIP. This paper considers a family of deterministic sensing matrices satisfying the StRIP that are based on SRM codes (binary chirps), together with a $\boldsymbol{k}$-sparse reconstruction algorithm with sublinear complexity. In the presence of stochastic noise in the data domain, this paper derives bounds on the $\boldsymbol{\ell_2}$ accuracy of approximation in terms of the $\boldsymbol{\ell_2}$ norm of the measurement noise and the accuracy of the best $\boldsymbol{k}$-sparse approximation, also measured in the $\boldsymbol{\ell_2}$ norm. This type of $\boldsymbol{\ell_2/\ell_2}$ bound is tighter than the standard $\boldsymbol{\ell_2/\ell_1}$ or $\boldsymbol{\ell_1/\ell_1}$ bounds.
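As a rough illustration of the StRIP condition (not the paper's construction, decoder, or parameters), the sketch below builds deterministic ±1 columns from randomly chosen quadratic Boolean functions, a simplified real-valued stand-in for binary chirps, and empirically checks how close ||As||_2 / ||s||_2 stays to 1 over k-sparse vectors with uniformly random support.

```python
# A minimal sketch (not the paper's construction or decoder): chirp-like sensing
# columns of the form (-1)^(quadratic Boolean function), and an empirical check
# of the near-isometry on k-sparse vectors with uniformly random support.
import numpy as np
from itertools import product

rng = np.random.default_rng(1)
m = 6
N = 2 ** m                         # number of measurements (rows)
num_cols = 512                     # number of chirp-like atoms, chosen for illustration
k = 5                              # sparsity

X = np.array(list(product([0, 1], repeat=m)))   # all points of F_2^m, one per row

def chirp_column(P_upper, b):
    """(-1)^(x^T P x + b.x mod 2) / sqrt(N), evaluated at every x in F_2^m."""
    quad = np.einsum('ni,ij,nj->n', X, P_upper, X) % 2
    lin = (X @ b) % 2
    return ((-1.0) ** ((quad + lin) % 2)) / np.sqrt(N)

cols = []
for _ in range(num_cols):
    P = np.triu(rng.integers(0, 2, size=(m, m)))    # random binary quadratic form
    b = rng.integers(0, 2, size=m)
    cols.append(chirp_column(P, b))
A = np.stack(cols, axis=1)                          # shape (N, num_cols)

# Empirical StRIP-style check: ||A s|| / ||s|| over k-sparse s with random support.
ratios = []
for _ in range(2000):
    supp = rng.choice(num_cols, size=k, replace=False)
    s = np.zeros(num_cols)
    s[supp] = rng.normal(size=k)
    ratios.append(np.linalg.norm(A @ s) / np.linalg.norm(s))
print("mean/min/max of ||As||/||s||:", np.mean(ratios), np.min(ratios), np.max(ratios))
```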
We present improved sampling complexity bounds for stable and robust sparse recovery in compressed sensing. Our unified analysis, based on $\ell^1$ minimization, encompasses the case where (i) the measurements are block-structured samples, in order to reflect the structured acquisition that is often encountered in applications; and (ii) the signal has an arbitrary structured sparsity, with results depending on its support $S$. Within this framework, and under a random sign assumption, the number of measurements needed by $\ell^1$ minimization can be shown to be of the same order as that required by an oracle least-squares estimator. Moreover, these bounds can be minimized by adapting the variable density sampling to a given prior on the signal support and to the coherence of the measurements. We illustrate both numerically and analytically that our results can be successfully applied to recover Haar wavelet coefficients that are sparse in levels from random Fourier measurements in dimensions one and two, which is of particular interest in imaging problems. Finally, a preliminary numerical investigation shows the potential of this theory for devising adaptive sampling strategies in sparse polynomial approximation.
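The following sketch conveys the flavour of the one-dimensional Fourier-Haar experiment under simplifying assumptions of ours: the sampling density, signal size, and the use of ISTA on a LASSO surrogate in place of exact $\ell^1$ minimization are illustrative choices, not the paper's.

```python
# A minimal sketch (simplified from the paper's setting): a signal with sparse
# Haar wavelet coefficients is sampled at random Fourier frequencies drawn from
# a variable density concentrated at low frequencies, and reconstructed by an
# l1-regularized least-squares surrogate (ISTA) standing in for l1 minimization.
import numpy as np

rng = np.random.default_rng(2)
n, m_meas, k = 256, 80, 10

def inv_haar(c):
    """Orthonormal 1D inverse Haar transform, coefficients ordered coarse to fine."""
    x = c.astype(float).copy()
    size = 1
    while size < len(x):
        a, d = x[:size].copy(), x[size:2 * size].copy()
        x[0:2 * size:2] = (a + d) / np.sqrt(2)
        x[1:2 * size:2] = (a - d) / np.sqrt(2)
        size *= 2
    return x

W = np.column_stack([inv_haar(e) for e in np.eye(n)])    # synthesis matrix (orthonormal)

# Sparse Haar coefficients and the corresponding signal.
c_true = np.zeros(n)
c_true[rng.choice(n, size=k, replace=False)] = rng.normal(size=k)
x_true = W @ c_true

# Variable density sampling of DFT frequencies: low frequencies favoured.
freqs = np.arange(n)
density = 1.0 / (1.0 + np.minimum(freqs, n - freqs))     # decays with distance to DC
density /= density.sum()
omega = rng.choice(n, size=m_meas, replace=False, p=density)
F = np.exp(-2j * np.pi * np.outer(omega, np.arange(n))) / np.sqrt(n)
y = F @ x_true

# ISTA on 0.5*||F W c - y||^2 + lam*||c||_1 (a LASSO surrogate for l1 minimization).
M = F @ W
t, lam = 1.0 / np.linalg.norm(M, 2) ** 2, 1e-3
c = np.zeros(n)
for _ in range(3000):
    z = c - t * np.real(M.conj().T @ (M @ c - y))
    c = np.sign(z) * np.maximum(np.abs(z) - t * lam, 0.0)

print("relative coefficient error:", np.linalg.norm(c - c_true) / np.linalg.norm(c_true))
```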
The problem of estimating a sparse signal from low-dimensional noisy observations arises in many applications, including super-resolution, signal deconvolution, and radar imaging. In this paper, we consider a sparse signal model with non-stationary modulations, in which each dictionary atom contributing to the observations undergoes an unknown, distinct modulation. By applying the lifting technique, under the assumption that the modulating signals live in a common subspace, we recast this sparse recovery and non-stationary blind demodulation problem as the recovery of a column-wise sparse matrix from structured linear observations, and propose to solve it via block $\ell_1$-norm regularized quadratic minimization. Due to observation noise, the sparse signal and modulation process cannot be recovered exactly. Instead, we aim to recover the sparse support of the ground truth signal and bound the recovery errors of the signal's non-zero components and the modulation process. In particular, we derive sufficient conditions on the sample complexity and regularization parameter for exact support recovery and bound the recovery error on the support. Numerical simulations verify and support our theoretical findings, and we demonstrate the effectiveness of our model in the application of single-molecule imaging.
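A toy instance of the lifted formulation is sketched below, assuming (purely as an illustration) a Gaussian dictionary, a Gaussian basis for the modulation subspace, and a proximal-gradient solver for the block $\ell_1$-regularized least-squares problem; the paper's sample-complexity conditions and analysis are not reproduced here.

```python
# A minimal sketch (an assumed instance of the lifted model): a few dictionary
# atoms, each modulated by an unknown signal in a common K-dimensional subspace,
# are recovered as a column-sparse matrix via block l1 (group lasso) regularized
# least squares solved with proximal gradient descent.
import numpy as np

rng = np.random.default_rng(3)
T, N, K, k = 128, 40, 3, 3                 # samples, atoms, subspace dim, active atoms

A = rng.normal(size=(T, N))                # dictionary atoms (columns)
B = rng.normal(size=(T, K)) / np.sqrt(K)   # basis of the modulation subspace

# Ground truth: k active atoms; amplitude x_j is absorbed into h_j after lifting.
support = rng.choice(N, size=k, replace=False)
W_true = np.zeros((K, N))
for j in support:
    W_true[:, j] = rng.normal(size=K)

# Observations y_t = sum_j A[t, j] * (B[t, :] @ W[:, j]), plus noise.
y = np.einsum('tj,tk,kj->t', A, B, W_true) + 0.01 * rng.normal(size=T)

# Explicit measurement matrix acting on vec(W), for a simple proximal gradient loop.
Phi = np.einsum('tj,tk->tkj', A, B).reshape(T, K * N)
step, lam = 1.0 / np.linalg.norm(Phi, 2) ** 2, 0.5

W = np.zeros((K, N))
for _ in range(2000):
    grad = (Phi.T @ (Phi @ W.reshape(-1) - y)).reshape(K, N)
    Z = W - step * grad
    norms = np.linalg.norm(Z, axis=0)                       # column (block) norms
    shrink = np.maximum(1 - step * lam / np.maximum(norms, 1e-12), 0.0)
    W = Z * shrink                                          # block soft-thresholding

recovered = np.where(np.linalg.norm(W, axis=0) > 1e-3)[0]
print("true support:", np.sort(support), "recovered support:", recovered)
```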
The sparsity in levels model recently inspired a new generation of effective acquisition and reconstruction modalities for compressive imaging. Moreover, it naturally arises in various areas of signal processing such as parallel acquisition, radar, and the sparse corruptions problem. Reconstruction strategies for sparse in levels signals usually rely on a suitable convex optimization program. However, although iterative and greedy algorithms can outperform convex optimization in terms of computational efficiency and have been studied extensively in the case of standard sparsity, little is known about their generalizations to the sparse in levels setting. In this paper, we bridge this gap by showing new stable and robust uniform recovery guarantees for sparse in levels variants of the iterative hard thresholding and CoSaMP algorithms. Our theoretical analysis generalizes recovery guarantees currently available in the case of standard sparsity and compares favorably with sparse in levels guarantees for weighted $\ell^1$ minimization. In addition, we propose and numerically test an extension of the orthogonal matching pursuit algorithm for sparse in levels signals.
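As an illustration of the sparse-in-levels projection that drives such algorithms, the sketch below runs a plain IHT iteration whose thresholding step enforces a per-level sparsity budget; the level structure, budgets, and step size are arbitrary choices of ours and not those analyzed in the paper.

```python
# A minimal sketch (not the paper's exact algorithm or constants): iterative hard
# thresholding where the thresholding step keeps, within each level, only the
# entries allowed by that level's sparsity budget (sparsity in levels).
import numpy as np

rng = np.random.default_rng(4)
n, m = 200, 100
levels = [range(0, 50), range(50, 120), range(120, 200)]   # a partition into 3 levels
budgets = [4, 6, 2]                                        # per-level sparsities (k_1, k_2, k_3)

def project_levels(x):
    """Keep the k_l largest-magnitude entries inside each level, zero out the rest."""
    z = np.zeros_like(x)
    for idx, k_l in zip(levels, budgets):
        idx = np.fromiter(idx, dtype=int)
        keep = idx[np.argsort(-np.abs(x[idx]))[:k_l]]
        z[keep] = x[keep]
    return z

# Ground truth that is sparse in levels, Gaussian measurements with mild noise.
x_true = project_levels(rng.normal(size=n) * 3.0)
A = rng.normal(size=(m, n)) / np.sqrt(m)
y = A @ x_true + 0.01 * rng.normal(size=m)

# IHT in levels: gradient step on 0.5*||Ax - y||^2 followed by the level projection.
x = np.zeros(n)
for _ in range(300):
    x = project_levels(x + A.T @ (y - A @ x))

print("relative error:", np.linalg.norm(x - x_true) / np.linalg.norm(x_true))
```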
Non-malleable codes protect against an adversary who can tamper with the coded message using a tampering function from a specified function family, guaranteeing that the result of tampering depends only on the chosen function and not on the coded message. These codes were motivated by the need to protect hardware that stores secret cryptographic keys against tampering, and have received significant attention in cryptography. The traditional Shannon model of communication systems assumes that the communication channel is perfectly known to the transmitter and the receiver. Arbitrarily varying channels (AVCs) remove this assumption and have been used to model adversarially controlled channels. Transmission over these channels was originally studied with the goal of recovering the sent message, and more recently with the goal of detecting tampering with the sent messages. In this paper we introduce non-malleability as the protection goal of message transmission over these channels, and study binary (discrete memoryless) AVCs in which the possible tampering is modelled by the set of channel states. Our main result is that non-malleability for these channels is achievable at a rate asymptotically approaching 1. We also consider the setting of an AVC with a special state s*, together with the additional requirement that the message must be recoverable if s* is applied to all the transmitted bits. We outline a message encoding scheme that, in addition to non-malleability, provides recovery when s* is applied to all transmitted bits.
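Purely to illustrate the channel model (not the paper's encoding scheme), the sketch below simulates a binary AVC in which the adversary picks a state for every transmitted bit from a small, hypothetical set of deterministic tampering maps, with s* as the identity state.

```python
# A minimal illustrative sketch (not the paper's coding scheme): a binary AVC in
# which the adversary chooses, for every transmitted bit, a channel state from a
# fixed set; the state s_star leaves the bit untouched, while the other states
# tamper with it deterministically.
import numpy as np

rng = np.random.default_rng(5)

# Channel states: each maps an input bit to an output bit.
states = {
    "s_star": lambda b: b,          # the special identity state s*
    "flip":   lambda b: 1 - b,      # bit-flip tampering
    "zero":   lambda b: 0,          # stuck-at-0 tampering
    "one":    lambda b: 1,          # stuck-at-1 tampering
}

def avc_transmit(codeword, state_sequence):
    """Apply the adversarially chosen state to each transmitted bit."""
    return [states[s](b) for b, s in zip(codeword, state_sequence)]

codeword = rng.integers(0, 2, size=16).tolist()

# Adversary applies s* everywhere: the codeword passes through unchanged,
# so the message must remain decodable in this case.
print(avc_transmit(codeword, ["s_star"] * 16) == codeword)      # True

# Arbitrary per-bit tampering chosen by the adversary.
tampering = rng.choice(list(states), size=16).tolist()
print(codeword)
print(avc_transmit(codeword, tampering))
```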