Yinzhan Xu (Theory Seminar)

Shaving Logs via Large Sieve Inequality: Faster Algorithms for Sparse Convolution and More

Yinzhan Xu (UCSD)
Monday, February 10th, 2025, 2-3pm


Abstract:

In sparse convolution-type problems, a common technique is to hash the input integers modulo a random prime p ∈ [Q/2, Q] for some parameter Q, which reduces the range of the input integers while preserving their additive structure modulo p. However, this hash family suffers from two drawbacks, which led to bottlenecks in many state-of-the-art algorithms: (1) The collision probability of two elements from [N] is O(log N / Q) rather than O(1/Q); (2) It is difficult to derandomize the choice of p; known derandomization techniques lead to super-logarithmic overhead [Chan, Lewenstein STOC'15].
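To make the hash family concrete, here is a minimal Python sketch (illustrative only, not the algorithms from the talk); the helper names and the demo parameters N and Q are assumptions made for this example.

```python
# Illustrative sketch of the hash family described above: reduce integers
# modulo a random prime p drawn from [Q/2, Q].
import random


def is_prime(n: int) -> bool:
    """Trial-division primality test; adequate for demo-sized Q."""
    if n < 2:
        return False
    d = 2
    while d * d <= n:
        if n % d == 0:
            return False
        d += 1
    return True


def random_prime(lo: int, hi: int) -> int:
    """Sample a uniformly random prime from [lo, hi] by rejection sampling."""
    while True:
        p = random.randint(lo, hi)
        if is_prime(p):
            return p


def hash_mod_random_prime(xs, Q):
    """Map each integer x to x mod p for a random prime p in [Q/2, Q]."""
    p = random_prime(Q // 2, Q)
    return p, [x % p for x in xs]


if __name__ == "__main__":
    N, Q = 10**6, 10**4          # demo sizes only
    xs = random.sample(range(N), 4)
    p, residues = hash_mod_random_prime(xs, Q)
    print(f"p = {p}, residues = {residues}")

    # Additive structure is preserved modulo p:
    x, y = xs[0], xs[1]
    assert (x + y) % p == ((x % p) + (y % p)) % p

    # Collisions: distinct x, y in [N] collide iff p divides x - y. Since
    # |x - y| < N has at most O(log N / log Q) prime factors in [Q/2, Q],
    # while that interval contains ~Q/log Q primes, the collision
    # probability is O(log N / Q) rather than O(1/Q) -- drawback (1) above.
```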

We partially overcome these drawbacks in certain scenarios, via novel applications of the large sieve inequality from analytic number theory. Consequently, we obtain the following improved algorithms for various problems (in the standard word RAM model):

- Sparse Nonnegative Convolution: We obtain a Las Vegas algorithm that computes the convolution A ∗ B of two nonnegative integer vectors A, B in O(t log t) expected time, where t = |A ∗ B|_0 is the output sparsity (the number of nonzero entries of A ∗ B). Moreover, our algorithm terminates in O(t log t) time with probability 1 − 1/poly(t).

- Text-to-Pattern Hamming Distances: Given a length-m pattern P and a length-n text T, we obtain a deterministic O(n sqrt(m log log m))-time algorithm that exactly computes the Hamming distance between P and every length-m substring of T.

- Sparse General Convolution: We also give a Monte Carlo O(t log t)-time algorithm for sparse convolution with possibly negative entries, in the restricted case where the length N of the input vectors satisfies N ≤ t^{1.99}.
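For context, the large sieve inequality mentioned above is, in its classical analytic form, the following bound (stated here for orientation only; the talk applies it in a form suited to hashing modulo primes):

```latex
% Classical large sieve inequality (standard textbook form, shown for context).
% For any complex numbers a_n and any points \alpha_1, \dots, \alpha_R in
% \mathbb{R}/\mathbb{Z} that are pairwise \delta-separated,
\[
  \sum_{r=1}^{R} \Bigl| \sum_{n=M+1}^{M+N} a_n \, e(\alpha_r n) \Bigr|^2
  \;\le\; \bigl(N + \delta^{-1}\bigr) \sum_{n=M+1}^{M+N} |a_n|^2,
  \qquad e(x) := e^{2\pi i x}.
\]
% Specializing to the fractions a/p with p prime, p \le Q, and 1 \le a \le p-1
% (which are Q^{-2}-separated modulo 1) gives the arithmetic form
\[
  \sum_{p \le Q} \; \sum_{a=1}^{p-1}
  \Bigl| \sum_{n \le N} a_n \, e(a n / p) \Bigr|^2
  \;\le\; \bigl(N + Q^2\bigr) \sum_{n \le N} |a_n|^2 .
\]
```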

Joint work with Ce Jin.