  For-all Sparse Recovery in Near-optimal Time

Gilbert, A. C., Li, Y., Porat, E., & Strauss, M. J. (2014). For-all Sparse Recovery in Near-optimal Time. Retrieved from http://arxiv.org/abs/1402.1726.


Files

arXiv:1402.1726.pdf (Preprint), 236 KB
Name: arXiv:1402.1726.pdf
Description: File downloaded from arXiv at 2014-11-26 13:58
OA-Status:
Visibility: Public
MIME-Type / Checksum: application/pdf / [MD5]
Technical Metadata:
Copyright Date: -
Copyright Info: -


Creators

Creators:
Gilbert, Anna C. [1], Author
Li, Yi [2], Author
Porat, Ely [1], Author
Strauss, Martin J. [1], Author
Affiliations:
[1] External Organizations, ou_persistent22
[2] Algorithms and Complexity, MPI for Informatics, Max Planck Society, ou_24019

Content

Free keywords: Computer Science, Data Structures and Algorithms, cs.DS; Computer Science, Information Theory, cs.IT; Mathematics, Information Theory, math.IT
Abstract: An approximate sparse recovery system in the $\ell_1$ norm consists of parameters $k$, $\epsilon$, $N$, an $m$-by-$N$ measurement matrix $\Phi$, and a recovery algorithm $\mathcal{R}$. Given a vector $\mathbf{x}$, the system approximates $\mathbf{x}$ by $\widehat{\mathbf{x}} = \mathcal{R}(\Phi\mathbf{x})$, which must satisfy $\|\widehat{\mathbf{x}}-\mathbf{x}\|_1 \leq (1+\epsilon)\|\mathbf{x}-\mathbf{x}_k\|_1$, where $\mathbf{x}_k$ is the best $k$-term approximation of $\mathbf{x}$. We consider the 'for all' model, in which a single matrix $\Phi$, possibly 'constructed' non-explicitly using the probabilistic method, is used for all signals $\mathbf{x}$. The best existing sublinear algorithm by Porat and Strauss (SODA'12) uses $O(\epsilon^{-3} k\log(N/k))$ measurements and runs in time $O(k^{1-\alpha}N^\alpha)$ for any constant $\alpha > 0$. In this paper, we improve the number of measurements to $O(\epsilon^{-2} k \log(N/k))$, matching the best existing upper bound (attained by super-linear algorithms), and the runtime to $O(k^{1+\beta}\textrm{poly}(\log N,1/\epsilon))$, with a modest restriction that $\epsilon \leq (\log k/\log N)^{\gamma}$, for any constants $\beta,\gamma > 0$. When $k\leq \log^c N$ for some $c>0$, the runtime is reduced to $O(k\,\textrm{poly}(\log N,1/\epsilon))$. With no restrictions on $\epsilon$, we obtain an approximate sparse recovery system with $m = O(\epsilon^{-1} k \log(N/k)((\log N/\log k)^{\gamma} + 1/\epsilon))$ measurements.
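To make the notation above concrete, the following minimal Python/NumPy sketch sets up the objects named in the abstract: a measurement matrix $\Phi$, a sketch $\Phi\mathbf{x}$, a recovery map $\mathcal{R}$, and the $\ell_1/\ell_1$ error criterion. It is only an illustration under assumed demo parameters, using a dense Gaussian $\Phi$ and a naive correlate-then-least-squares recovery; it is not the paper's sublinear-time algorithm or measurement construction.

import numpy as np

def best_k_term(x, k):
    # Best k-term approximation x_k: keep only the k largest-magnitude entries.
    xk = np.zeros_like(x)
    idx = np.argsort(np.abs(x))[-k:]
    xk[idx] = x[idx]
    return xk

def recover(Phi, y, k):
    # Toy recovery map R: pick k coordinates by correlation with the sketch,
    # then solve least squares on that support. NOT the paper's algorithm;
    # it only illustrates the interface x_hat = R(Phi x).
    proxy = Phi.T @ y
    support = np.argsort(np.abs(proxy))[-k:]
    x_hat = np.zeros(Phi.shape[1])
    coef, *_ = np.linalg.lstsq(Phi[:, support], y, rcond=None)
    x_hat[support] = coef
    return x_hat

rng = np.random.default_rng(0)
N, k, eps = 1000, 10, 0.5                       # assumed demo parameters
m = int(4 * k * np.log(N / k) / eps**2)         # same order as O(eps^-2 k log(N/k))
Phi = rng.standard_normal((m, N)) / np.sqrt(m)  # dense Gaussian Phi (illustrative only)

x = 0.01 * rng.standard_normal(N)                                # small dense tail
head = rng.choice(N, k, replace=False)
x[head] = rng.uniform(1.0, 2.0, k) * rng.choice([-1.0, 1.0], k)  # k dominant entries

x_hat = recover(Phi, Phi @ x, k)
xk = best_k_term(x, k)
lhs = np.linalg.norm(x_hat - x, 1)
rhs = (1 + eps) * np.linalg.norm(x - xk, 1)
print(f"||x_hat - x||_1 = {lhs:.3f}   (1+eps)*||x - x_k||_1 = {rhs:.3f}")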

Details

Language(s): eng - English
 Dates: 2014-02-07
 Publication Status: Published online
 Pages: 22 p.
 Publishing info: -
 Table of Contents: -
 Rev. Type: -
 Identifiers: arXiv: 1402.1726
URI: http://arxiv.org/abs/1402.1726
BibTex Citekey: Li_arXiv2014
 Degree: -
