
Released

Paper

Sparse Nonnegative Convolution Is Equivalent to Dense Nonnegative Convolution

MPS-Authors

Bringmann, Karl
Algorithms and Complexity, MPI for Informatics, Max Planck Society

Fischer, Nick
Algorithms and Complexity, MPI for Informatics, Max Planck Society

Nakos, Vasileios
Algorithms and Complexity, MPI for Informatics, Max Planck Society

Fulltext (public)

arXiv:2105.05984.pdf
(Preprint), 711KB

Citation

Bringmann, K., Fischer, N., & Nakos, V. (2021). Sparse Nonnegative Convolution Is Equivalent to Dense Nonnegative Convolution. Retrieved from https://arxiv.org/abs/2105.05984.


Cite as: https://hdl.handle.net/21.11116/0000-0008-E263-9
Abstract
Computing the convolution $A\star B$ of two length-$n$ vectors $A,B$ is a
ubiquitous computational primitive. Applications range from string problems to
Knapsack-type problems, and from 3SUM to All-Pairs Shortest Paths. These
applications often come in the form of nonnegative convolution, where the
entries of $A,B$ are nonnegative integers. The classical algorithm to compute
$A\star B$ uses the Fast Fourier Transform and runs in time $O(n\log n)$.
However, often $A$ and $B$ satisfy sparsity conditions, and hence one could
hope for significant improvements. The ideal goal is an $O(k\log k)$-time
algorithm, where $k$ is the number of non-zero elements in the output, i.e.,
the size of the support of $A\star B$. This problem is referred to as sparse
nonnegative convolution, and has received considerable attention in the
literature; the fastest algorithms to date run in time $O(k\log^2 n)$.
The main result of this paper is the first $O(k\log k)$-time algorithm for
sparse nonnegative convolution. Our algorithm is randomized and assumes that
the length $n$ and the largest entry of $A$ and $B$ are subexponential in $k$.
Surprisingly, we can phrase our algorithm as a reduction from the sparse case
to the dense case of nonnegative convolution, showing that, under some mild
assumptions, sparse nonnegative convolution is equivalent to dense nonnegative
convolution for constant-error randomized algorithms. Specifically, if $D(n)$
is the time to convolve two nonnegative length-$n$ vectors with success
probability $2/3$, and $S(k)$ is the time to convolve two nonnegative vectors
with output size $k$ with success probability $2/3$, then
$S(k)=O(D(k)+k(\log\log k)^2)$.
Our approach uses a variety of new techniques in combination with some old
machinery from linear sketching and structured linear algebra, as well as new
insights on linear hashing, the most classical hash function.
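As a concrete illustration of the dense baseline the abstract refers to, the classical $O(n\log n)$ nonnegative convolution via the Fast Fourier Transform can be sketched as follows. This is only a minimal NumPy sketch of the textbook FFT method, not the paper's reduction; the function name and the power-of-two padding are illustrative choices:

```python
import numpy as np

def dense_convolution(A, B):
    """Classical O(n log n) dense convolution via FFT.

    (A * B)[k] = sum_i A[i] * B[k - i]. Since the entries are
    nonnegative integers, rounding the inverse FFT recovers the
    exact integer result for moderately sized inputs.
    """
    n = len(A) + len(B) - 1          # length of the output vector
    size = 1 << (n - 1).bit_length() # pad to the next power of two
    fa = np.fft.rfft(A, size)
    fb = np.fft.rfft(B, size)
    out = np.fft.irfft(fa * fb, size)[:n]
    return np.rint(out).astype(np.int64)

# Two sparse nonnegative vectors: the output support size k here
# is far smaller than the dense length n, which is the gap the
# paper's O(k log k) algorithm exploits.
A = [1, 0, 2, 0]   # support {0, 2}
B = [0, 3, 0, 1]   # support {1, 3}
print(dense_convolution(A, B).tolist())
```

The dense method always pays for the full output length $n$; the paper's result shows the cost can instead be tied to the output support size $k$.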