
Reduce and Boost: Recovering Arbitrary Sets of Jointly Sparse Vectors

Posted by: Moshe Mishali
Publication date: 2008
Research field: Physics
Paper language: English

The rapidly developing area of compressed sensing suggests that a sparse vector lying in an arbitrary high-dimensional space can be accurately recovered from only a small set of non-adaptive linear measurements. Under appropriate conditions on the measurement matrix, the entire information about the original sparse vector is captured in the measurements and can be recovered using efficient polynomial-time methods. The vector model has been extended to a finite set of sparse vectors sharing a common non-zero location set. In this paper, we treat a broader framework in which the goal is to recover a possibly infinite set of jointly sparse vectors. Extending existing recovery methods to this model is difficult due to the infinite structure of the sparse vector set. Instead, we prove that the entire infinite set of sparse vectors can be recovered by solving a single, reduced-size finite-dimensional problem, corresponding to recovery of a finite set of sparse vectors. We then show that the problem can be further reduced to the basic recovery of a single sparse vector by randomly combining the measurement vectors. Our approach results in exact recovery of both countable and uncountable sets, as it does not rely on discretization or heuristic techniques. To efficiently recover the single sparse vector produced by the last reduction step, we suggest an empirical boosting strategy that improves the recovery ability of any given sub-optimal method for recovering a sparse vector. Numerical experiments on random data demonstrate that, when applied to infinite sets, our strategy outperforms discretization techniques in terms of both run time and empirical recovery rate. In the finite model, our boosting algorithm is characterized by fast run time and a recovery rate superior to those of known popular methods.
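To make the reduction and boosting steps concrete, the following is a minimal sketch rather than the paper's reference implementation: it assumes a finite jointly sparse model Y = A X, uses a plain orthogonal matching pursuit routine as the sub-optimal single-vector solver, and repeatedly draws random combinations of the measurement columns until the recovered support explains all of Y. All function names, parameters, and defaults are illustrative.

```python
import numpy as np

def omp(A, y, k):
    """Greedy orthogonal matching pursuit: estimate the support of a k-sparse x with y = A x."""
    residual, support = y.astype(float).copy(), []
    for _ in range(k):
        j = int(np.argmax(np.abs(A.T @ residual)))   # column most correlated with the residual
        if j not in support:
            support.append(j)
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coef
    return sorted(support)

def reduce_and_boost(A, Y, k, trials=20, tol=1e-8, seed=0):
    """Reduce the finite jointly sparse problem Y = A X to single-vector recoveries by
    randomly combining the columns of Y, and boost the sub-optimal solver by retrying
    until the recovered support explains every measurement column."""
    rng = np.random.default_rng(seed)
    for _ in range(trials):
        a = rng.standard_normal(Y.shape[1])          # random combination weights
        support = omp(A, Y @ a, k)                   # recover one combined sparse vector
        X_s, *_ = np.linalg.lstsq(A[:, support], Y, rcond=None)
        if np.linalg.norm(Y - A[:, support] @ X_s) < tol:   # support consistent with all of Y
            X = np.zeros((A.shape[1], Y.shape[1]))
            X[support] = X_s
            return support, X
    return None, None                                # no trial produced a consistent support
```

In the infinite setting, the abstract's first reduction step would replace Y with a finite basis (or frame) for the span of the measurement vectors before running this finite-dimensional recovery.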


Read also

We study a problem of fundamental importance to ICNs, namely, minimizing routing costs by jointly optimizing caching and routing decisions over an arbitrary network topology. We consider both source routing and hop-by-hop routing settings. The respective offline problems are NP-hard. Nevertheless, we show that there exist polynomial-time approximation algorithms producing solutions within a constant factor of the optimal. We also produce distributed, adaptive algorithms with the same approximation guarantees. We simulate our adaptive algorithms over a broad array of different topologies. Our algorithms reduce routing costs by several orders of magnitude compared to prior art, including algorithms that optimize caching under fixed routing.
We consider a similarity measure between two sets $A$ and $B$ of vectors that balances the average and maximum cosine distance between pairs of vectors, one from set $A$ and one from set $B$. As a motivation for this measure, we present lineage tracking in a database. To practically realize this measure, we need an approximate search algorithm that, given a set of vectors $A$ and sets of vectors $B_1,...,B_n$, quickly locates the set $B_i$ that maximizes the similarity measure. For the case where all sets are singleton sets, essentially each being a single vector, there are known efficient approximate search algorithms, e.g., approximat
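The snippet above does not spell out how the average and maximum cosine distances are balanced, so the sketch below only illustrates one plausible form, a convex combination of the two; the weight lam and both function names are assumptions made for illustration.

```python
import numpy as np

def pairwise_cosine_distance(A, B):
    """Cosine distance between every pair (a, b), with a a row of A and b a row of B."""
    A_n = A / np.linalg.norm(A, axis=1, keepdims=True)
    B_n = B / np.linalg.norm(B, axis=1, keepdims=True)
    return 1.0 - A_n @ B_n.T

def set_similarity(A, B, lam=0.5):
    """Illustrative set-to-set measure: a convex combination of the maximum and the
    average pairwise cosine distance (lam is a hypothetical balancing parameter)."""
    D = pairwise_cosine_distance(A, B)
    return lam * D.max() + (1.0 - lam) * D.mean()
```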
Margaret M. Bayer, 1999
The closed cone of flag vectors of Eulerian partially ordered sets is studied. It is completely determined up through rank seven. Half-Eulerian posets are defined. Certain limit posets of Billera and Hetyei are half-Eulerian; they give rise to extreme rays of the cone for Eulerian posets. A new family of linear inequalities valid for flag vectors of Eulerian posets is given.
This paper proposes several algorithms and their Cellular Automata Machine (CAM) for drawing the State Transition Diagram (STD) of an arbitrary Cellular Automaton (CA) rule (any neighborhood, uniform/hybrid, and null/periodic boundary) and any CA length n. It also discusses the novelty, hardware cost, and complexities of these algorithms.
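As a toy stand-in for the paper's CAM-based algorithms, the sketch below enumerates the state transition diagram of an elementary (radius-1, two-state) CA for a given Wolfram rule, length n, and periodic or null boundary; it is only an illustration, and every name in it is assumed rather than taken from the paper.

```python
from itertools import product

def eca_step(state, rule, periodic=True):
    """One synchronous update of an elementary CA under the given Wolfram rule number;
    a null boundary pads the missing neighbours with zeros."""
    n = len(state)
    nxt = []
    for i in range(n):
        left = state[i - 1] if (i > 0 or periodic) else 0
        centre = state[i]
        right = state[(i + 1) % n] if (i < n - 1 or periodic) else 0
        nxt.append((rule >> (4 * left + 2 * centre + right)) & 1)
    return tuple(nxt)

def state_transition_diagram(rule, n, periodic=True):
    """Edges (configuration -> successor) of the STD over all 2**n configurations."""
    return {cfg: eca_step(cfg, rule, periodic) for cfg in product((0, 1), repeat=n)}

# Example: the STD of rule 90 on 4 cells with a periodic boundary.
std = state_transition_diagram(90, 4)
```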
We consider the problem of recovering $n$ i.i.d. samples from a zero-mean multivariate Gaussian distribution with an unknown covariance matrix, from their modulo-wrapped measurements, i.e., measurements where each coordinate is reduced modulo $\Delta$, for some $\Delta>0$. For this setup, which is motivated by quantization and analog-to-digital conversion, we develop a low-complexity iterative decoding algorithm. We show that if a benchmark informed decoder that knows the covariance matrix can recover each sample with small error probability, and $n$ is large enough, the performance of the proposed blind recovery algorithm closely follows that of the informed one. We complement the analysis with numerical results that show that the algorithm performs well even in non-asymptotic conditions.
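Only the measurement model in this snippet is concrete enough to illustrate; the sketch below draws correlated zero-mean Gaussian samples and reduces each coordinate modulo Δ, producing the wrapped observations that the paper's blind iterative decoder would start from. The covariance matrix and the value of Δ are arbitrary examples, not values from the paper.

```python
import numpy as np

def wrap_mod(x, delta):
    """Reduce every coordinate modulo delta, mapping it into [0, delta)."""
    return np.mod(x, delta)

# Toy measurement model (not the paper's decoder): i.i.d. samples from a zero-mean
# Gaussian whose covariance is unknown to the decoder, observed only after wrapping.
rng = np.random.default_rng(0)
Sigma = np.array([[1.0, 0.8],
                  [0.8, 1.0]])              # example covariance matrix
samples = rng.multivariate_normal(np.zeros(2), Sigma, size=1000)
wrapped = wrap_mod(samples, delta=0.5)      # what the blind recovery algorithm sees
```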