
Sublinear-Time Algorithms for Compressive Phase Retrieval

Published by: Vasileios Nakos
Publication date: 2017
Research field: Informatics Engineering
Paper language: English





In the compressive phase retrieval problem, also known as phaseless compressed sensing or compressed sensing from intensity-only measurements, the goal is to reconstruct a sparse or approximately $k$-sparse vector $x \in \mathbb{R}^n$ given access to $y = |\Phi x|$, where $|v|$ denotes the vector obtained by taking the absolute value of $v \in \mathbb{R}^n$ coordinate-wise. In this paper we present sublinear-time algorithms for different variants of the compressive phase retrieval problem, akin to the variants considered for the classical compressive sensing problem in theoretical computer science. Our algorithms use purely combinatorial techniques and a near-optimal number of measurements.
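As an illustration of the measurement model $y = |\Phi x|$, the short sketch below generates a $k$-sparse signal, takes magnitude-only measurements, and checks that sign information is lost. The dimensions and the Gaussian $\Phi$ are assumptions for illustration only; the paper constructs combinatorial measurement matrices instead.

```python
import numpy as np

# Minimal sketch of the measurement model y = |Phi x| described above.
# Dimensions and the Gaussian Phi are assumptions for illustration; the
# paper uses combinatorial (sublinear-time decodable) measurement matrices.
rng = np.random.default_rng(0)
n, m, k = 1000, 60, 5

x = np.zeros(n)
support = rng.choice(n, size=k, replace=False)
x[support] = rng.standard_normal(k)          # a k-sparse signal

Phi = rng.standard_normal((m, n))
y = np.abs(Phi @ x)                          # coordinate-wise absolute value

# The measurements carry no sign/phase information: x and -x are
# indistinguishable from y alone.
print(np.allclose(y, np.abs(Phi @ (-x))))    # True
```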




Read also

Soheil Behnezhad (2021)
We present a near-tight analysis of the average query complexity -- à la Nguyen and Onak [FOCS08] -- of the randomized greedy maximal matching algorithm, improving over the bound of Yoshida, Yamamoto and Ito [STOC09]. For any $n$-vertex graph of average degree $\bar{d}$, this leads to the following sublinear-time algorithms for estimating the size of maximum matching and minimum vertex cover, all of which are provably time-optimal up to logarithmic factors:

$\bullet$ A multiplicative $(2+\epsilon)$-approximation in $\widetilde{O}(n/\epsilon^2)$ time using adjacency list queries. This (nearly) matches an $\Omega(n)$ time lower bound for any multiplicative approximation and is, notably, the first $O(1)$-approximation that runs in $o(n^{1.5})$ time.

$\bullet$ A $(2, \epsilon n)$-approximation in $\widetilde{O}((\bar{d} + 1)/\epsilon^2)$ time using adjacency list queries. This (nearly) matches an $\Omega(\bar{d}+1)$ lower bound of Parnas and Ron [TCS07] which holds for any $(O(1), \epsilon n)$-approximation, and improves over the bounds of [Yoshida et al. STOC09; Onak et al. SODA12] and [Kapralov et al. SODA20]: the former two take at least quadratic time in the degree, which can be as large as $\Omega(n^2)$, and the latter obtains a much larger approximation.

$\bullet$ A $(2, \epsilon n)$-approximation in $\widetilde{O}(n/\epsilon^3)$ time using adjacency matrix queries. This (nearly) matches an $\Omega(n)$ time lower bound in this model and improves over the $\widetilde{O}(n\sqrt{n})$-time $(2, \epsilon n)$-approximate algorithm of [Chen, Kannan, and Khanna ICALP20]. It also turns out that any non-trivial multiplicative approximation in the adjacency matrix model requires $\Omega(n^2)$ time, so the additive $\epsilon n$ error is necessary too.

As immediate corollaries, we get improved sublinear-time estimators for (variants of) TSP and an improved AMPC algorithm for maximal matching.
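For intuition, here is a minimal sketch of the local-simulation idea behind such estimators (in the spirit of Nguyen and Onak): assign random ranks to edges, decide locally whether an edge belongs to the randomized greedy maximal matching by recursing on lower-rank incident edges, and sample vertices to estimate the matching size. The graph, sample size, and rank assignment below are assumptions for illustration; the paper's contribution is a near-tight bound on the query complexity of this kind of oracle, not this toy implementation.

```python
import random

def make_oracle(adj, seed=0):
    """Local oracle for the greedy maximal matching under random edge ranks."""
    rng = random.Random(seed)
    rank, memo = {}, {}

    def edge_rank(u, v):
        e = (min(u, v), max(u, v))
        if e not in rank:
            rank[e] = rng.random()
        return rank[e]

    def in_matching(u, v):
        """Is edge (u, v) in the greedy maximal matching w.r.t. the ranks?"""
        e = (min(u, v), max(u, v))
        if e in memo:
            return memo[e]
        r = edge_rank(u, v)
        # (u, v) is matched iff no lower-rank incident edge is matched.
        for x in (u, v):
            for y in adj[x]:
                if {x, y} != {u, v} and edge_rank(x, y) < r and in_matching(x, y):
                    memo[e] = False
                    return False
        memo[e] = True
        return True

    return in_matching

def estimate_matching_size(adj, samples=500, seed=0):
    """Estimate the size of the greedy maximal matching by sampling vertices."""
    oracle = make_oracle(adj, seed)
    vertices = list(adj)
    rng = random.Random(seed + 1)
    matched = 0
    for _ in range(samples):
        u = rng.choice(vertices)
        if any(oracle(u, v) for v in adj[u]):
            matched += 1
    # Each matched edge covers two vertices.
    return len(vertices) * matched / (2 * samples)

# Example: a path on 6 vertices; any maximal matching has size 2 or 3.
adj = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2, 4], 4: [3, 5], 5: [4]}
print(estimate_matching_size(adj))
```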
In the problem of adaptive compressed sensing, one wants to estimate an approximately $k$-sparse vector $x \in \mathbb{R}^n$ from $m$ linear measurements $A_1 x, A_2 x, \ldots, A_m x$, where $A_i$ can be chosen based on the outcomes $A_1 x, \ldots, A_{i-1} x$ of previous measurements. The goal is to output a vector $\hat{x}$ for which $$\|x-\hat{x}\|_p \le C \cdot \min_{k\text{-sparse } x'} \|x-x'\|_q,$$ with probability at least $2/3$, where $C > 0$ is an approximation factor. Indyk, Price and Woodruff (FOCS11) gave an algorithm for $p=q=2$ and $C = 1+\epsilon$ with $O((k/\epsilon) \log\log (n/k))$ measurements and $O(\log^*(k) \log\log (n))$ rounds of adaptivity. We first improve their bounds, obtaining a scheme with $O(k \cdot \log\log (n/k) + (k/\epsilon) \cdot \log\log(1/\epsilon))$ measurements and $O(\log^*(k) \log\log (n))$ rounds, as well as a scheme with $O((k/\epsilon) \cdot \log\log (n\log (n/k)))$ measurements and an optimal $O(\log\log (n))$ rounds. We then provide novel adaptive compressed sensing schemes with improved bounds for $(p,p)$ for every $0 < p < 2$. We show that the improvement from $O(k \log(n/k))$ measurements to $O(k \log\log (n/k))$ measurements in the adaptive setting can persist with a better $\epsilon$-dependence for other values of $p$ and $q$. For example, when $(p,q) = (1,1)$, we obtain $O(\frac{k}{\sqrt{\epsilon}} \cdot \log\log n \cdot \log^3 (\frac{1}{\epsilon}))$ measurements.
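The guarantee above compares the recovery error in the $\ell_p$ norm against the best $k$-term approximation error in the $\ell_q$ norm. A small numerical check of this inequality, under illustrative choices of $x$, $\hat{x}$, $k$, $C$, $p$, $q$ (none of which come from the paper), might look like:

```python
import numpy as np

# Numerical check of the (p, q) recovery guarantee quoted above:
# ||x - xhat||_p <= C * min_{k-sparse x'} ||x - x'||_q.
# The vectors, k, C and norms here are illustrative assumptions.

def best_k_sparse_error(x, k, q):
    """q-norm of the tail of x after keeping its k largest-magnitude entries."""
    tail = np.sort(np.abs(x))[:-k] if k > 0 else np.abs(x)
    return np.linalg.norm(tail, ord=q)

def satisfies_guarantee(x, xhat, k, C, p, q):
    lhs = np.linalg.norm(x - xhat, ord=p)
    rhs = C * best_k_sparse_error(x, k, q)
    return lhs <= rhs

# Example: a vector with 3 large coordinates plus small noise, and a
# recovery that finds the large coordinates exactly.
rng = np.random.default_rng(0)
x = np.zeros(100)
x[:3] = [10.0, -7.0, 5.0]
x += 0.01 * rng.standard_normal(100)
xhat = np.zeros(100)
xhat[:3] = x[:3]
print(satisfies_guarantee(x, xhat, k=3, C=1.1, p=2, q=2))   # True
```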
Maximal independent set (MIS), maximal matching (MM), and $(\Delta+1)$-coloring in graphs of maximum degree $\Delta$ are among the most prominent algorithmic graph theory problems. They are all solvable by a simple linear-time greedy algorithm, and up until very recently this constituted the state-of-the-art. In SODA 2019, Assadi, Chen, and Khanna gave a randomized algorithm for $(\Delta+1)$-coloring that runs in $\widetilde{O}(n\sqrt{n})$ time, which even for moderately dense graphs is sublinear in the input size. The work of Assadi et al., however, contained a spoiler for MIS and MM: neither problem provably admits a sublinear-time algorithm in general graphs. In this work, we dig deeper into the possibility of achieving sublinear-time algorithms for MIS and MM. The neighborhood independence number of a graph $G$, denoted by $\beta(G)$, is the size of the largest independent set in the neighborhood of any vertex. We identify $\beta(G)$ as the "right" parameter to measure the runtime of MIS and MM algorithms: although graphs of bounded neighborhood independence may be very dense (a clique is one example), we prove that carefully chosen variants of greedy algorithms for MIS and MM run in $O(n\beta(G))$ and $O(n\log{n}\cdot\beta(G))$ time respectively on any $n$-vertex graph $G$. We complement this positive result by observing that a simple extension of the lower bound of Assadi et al. implies that $\Omega(n\beta(G))$ time is also necessary for any algorithm for either problem, for all values of $\beta(G)$ from $1$ to $\Theta(n)$. We note that our algorithm for MIS is deterministic, while for MM we use randomization, which we prove is unavoidable: any deterministic algorithm for MM requires $\Omega(n^2)$ time even for $\beta(G) = 2$.
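For reference, the simple linear-time greedy algorithm mentioned above can be sketched as follows (here for MIS). The adjacency-list input format is an assumption; the paper's $O(n\beta(G))$-time variants are more careful about which neighbors they inspect.

```python
# Minimal sketch of the baseline linear-time greedy MIS algorithm; the
# sublinear-time variants analyzed in the paper refine this idea.

def greedy_mis(adj):
    """Return a maximal independent set of the graph given as {v: [neighbors]}."""
    in_mis = set()
    blocked = set()            # vertices adjacent to something already in the MIS
    for v in adj:              # any fixed scan order works
        if v not in blocked:
            in_mis.add(v)
            blocked.update(adj[v])
    return in_mis

# Example: the 4-cycle 0-1-2-3-0; {0, 2} is a maximal independent set.
adj = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [2, 0]}
print(greedy_mis(adj))
```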
Phase retrieval deals with the estimation of complex-valued signals solely from the magnitudes of linear measurements. While there has been a recent explosion in the development of phase retrieval algorithms, the lack of a common interface has made it difficult to compare new methods against the state-of-the-art. The purpose of PhasePack is to create a common software interface for a wide range of phase retrieval algorithms and to provide a common testbed using both synthetic data and empirical imaging datasets. PhasePack is able to benchmark a large number of recent phase retrieval methods against one another to generate comparisons using a range of different performance metrics. The software package handles single-method testing as well as multiple-method comparisons. The algorithm implementations in PhasePack differ slightly from their original descriptions in the literature in order to achieve faster speed and improved robustness. In particular, PhasePack uses adaptive stepsizes, line-search methods, and fast eigensolvers to speed up and automate convergence.
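To make the setting concrete, the sketch below minimizes $\sum_i (|a_i^* x| - b_i)^2$ by plain gradient descent from a warm start. This is not PhasePack's interface and not any particular benchmarked method; it is only a generic illustration of the kind of gradient-based solver such packages compare, with dimensions, step size, and warm start chosen as assumptions so the demo converges.

```python
import numpy as np

# Generic "amplitude flow"-style gradient descent for phase retrieval,
# minimizing sum_i (|a_i^* x| - b_i)^2. NOT PhasePack's API; dimensions,
# step size, iteration count and the warm start are illustrative assumptions.
rng = np.random.default_rng(1)
n, m = 32, 256
A = (rng.standard_normal((m, n)) + 1j * rng.standard_normal((m, n))) / np.sqrt(2)
x_true = rng.standard_normal(n) + 1j * rng.standard_normal(n)
b = np.abs(A @ x_true)                        # magnitude-only measurements

x = x_true + 0.3 * (rng.standard_normal(n) + 1j * rng.standard_normal(n))  # warm start
step = 0.1 / m
for _ in range(2000):
    z = A @ x
    # Wirtinger gradient of sum (|z_i| - b_i)^2 with respect to x.
    grad = A.conj().T @ ((np.abs(z) - b) * z / np.maximum(np.abs(z), 1e-12))
    x = x - step * grad

# Recovery is only defined up to a global phase; align before comparing.
phase = np.vdot(x, x_true) / abs(np.vdot(x, x_true))
print(np.linalg.norm(x * phase - x_true) / np.linalg.norm(x_true))
```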
We consider the problem of designing sublinear-time algorithms for estimating the cost of a minimum metric traveling salesman (TSP) tour. Specifically, given access to an $n \times n$ distance matrix $D$ that specifies pairwise distances between $n$ points, the goal is to estimate the TSP cost by performing only sublinear (in the size of $D$) queries. For the closely related problem of estimating the weight of a metric minimum spanning tree (MST), it is known that for any $\varepsilon > 0$, there exists an $\tilde{O}(n/\varepsilon^{O(1)})$ time algorithm that returns a $(1 + \varepsilon)$-approximate estimate of the MST cost. This result immediately implies an $\tilde{O}(n/\varepsilon^{O(1)})$ time algorithm to estimate the TSP cost to within a $(2 + \varepsilon)$ factor for any $\varepsilon > 0$. However, no $o(n^2)$ time algorithms are known to approximate metric TSP to a factor that is strictly better than $2$. On the other hand, there were also no known barriers that rule out the existence of $(1 + \varepsilon)$-approximate estimation algorithms for metric TSP with $\tilde{O}(n)$ time for any fixed $\varepsilon > 0$. In this paper, we make progress on both algorithms and lower bounds for estimating metric TSP cost. We also show that the problem of estimating metric TSP cost is closely connected to the problem of estimating the size of a maximum matching in a graph.
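The $(2+\varepsilon)$ factor quoted above comes from the classical sandwich MST cost $\le$ TSP cost $\le 2 \cdot$ MST cost for metric instances (shortcutting a doubled MST walk). The sketch below checks that sandwich on a small random Euclidean instance; the instance and the exact Prim/DFS computations are assumptions for the demo, whereas the sublinear-time algorithms only estimate the MST weight from a small number of queries to $D$.

```python
import numpy as np

# Why a (1+eps)-approximate MST estimate yields a (2+O(eps)) estimate of the
# metric TSP cost: a preorder shortcut of the MST is a tour of length at most
# 2 * MST, while every tour costs at least MST.
rng = np.random.default_rng(0)
pts = rng.random((50, 2))
D = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=-1)  # metric matrix
n = len(D)

# Prim's algorithm, recording parent pointers to rebuild the tree.
visited = np.zeros(n, dtype=bool)
visited[0] = True
dist = D[0].copy()                     # cheapest edge from the tree to each vertex
parent = np.zeros(n, dtype=int)
children = {i: [] for i in range(n)}
mst_cost = 0.0
for _ in range(n - 1):
    dist[visited] = np.inf
    v = int(np.argmin(dist))
    mst_cost += dist[v]
    visited[v] = True
    children[parent[v]].append(v)
    closer = D[v] < dist               # vertices now cheaper to reach via v
    parent[closer] = v
    dist = np.minimum(dist, D[v])

# Shortcut tour: visit vertices in DFS preorder of the MST, then return home.
order, stack = [], [0]
while stack:
    u = stack.pop()
    order.append(u)
    stack.extend(reversed(children[u]))
tour_cost = sum(D[order[i], order[(i + 1) % n]] for i in range(n))

print(mst_cost <= tour_cost <= 2 * mst_cost)   # True
```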
