Russell Impagliazzo (UCSD)
Monday, April 24, 2017, 2:00pm
Title: Algorithmic Dense Model Theorems, Generative Adversarial Networks and Regularity
Green and Tao ([GT04]) used the existence of a dense set indistinguishable from the primes under tests from a certain class to prove that the primes contain arbitrarily long arithmetic progressions. Tao and Ziegler ([TZ06]) gave general conditions under which such a model exists. In [RTTV08], a quantitatively improved characterization was obtained using an argument based on Nisan's proof of the Impagliazzo hard-core set theorem ([I95]) from computational complexity. Gowers ([Gow08]) independently obtained a similar improvement. Also independently, Stefan Dziembowski and Krzysztof Pietrzak (FOCS 2007) proved a similar result in a different context.
We show that the existence of dense models can be reduced directly to the improved hard-core distribution results of Holenstein ([H05]). Using Holenstein's uniform proof of an optimal-density hard-core set theorem, we show that the dense models one derives have a canonical form: each model is (sampleable from) a function defined in terms of tests from the original class.
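To fix ideas, here is the rough shape of a dense model theorem (an illustrative paraphrase, not the exact statement from any of the papers above). Let $\mathcal{F}$ be a class of tests $f : X \to \{0,1\}$. Suppose a distribution $R$ is $\delta$-dense in a distribution $D$ (that is, $D = \delta R + (1-\delta) R'$ for some $R'$), and $D$ is $\varepsilon$-indistinguishable from the uniform distribution $U$ under $\mathcal{F}$. Then there is a "model" $M$ that is $\delta$-dense in $U$ and satisfies

\[
\Bigl| \Pr_{x \sim R}[f(x) = 1] - \Pr_{x \sim M}[f(x) = 1] \Bigr| \le \varepsilon' \quad \text{for all } f \in \mathcal{F},
\]

where $\varepsilon'$ depends on $\varepsilon$ and $\delta$, and $\mathcal{F}$ must be closed under suitable small combinations of its tests. The quantitative question is how $\varepsilon'$ and the required closure of $\mathcal{F}$ grow with $\varepsilon$ and $\delta$.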
One can also view this reduction in the context of learning theory. Here, we get a generic way to transform a boosting algorithm for binary classifiers into an algorithm that learns to produce samples from a distribution, in the style of generative adversarial networks. We can use this transformation to get formal guarantees for GAN-style algorithms, such as finding the maximum-entropy indistinguishable distribution.
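As an illustrative sketch of such a boosting-to-sampler transformation (the function `mw_model` and all its parameters are my own naming, and the talk's actual construction may differ), the following multiplicative-weights loop over a small finite domain captures the idea: at each round, a test that distinguishes the current model from the target plays the role of the weak learner (or GAN discriminator), and an exponential reweighting of the model plays the role of the boosting update. The model that results is of Gibbs (exponential) form in the tests used, which is the shape a maximum-entropy indistinguishable distribution takes.

```python
import math

def mw_model(domain, target_p, tests, eps=0.05, eta=0.05, max_rounds=5000):
    """Learn a distribution over `domain` that is eps-indistinguishable from
    `target_p` under every test in `tests`, via multiplicative weights."""
    w = [1.0] * len(domain)  # uniform start; final model is a Gibbs distribution
    for _ in range(max_rounds):
        z = sum(w)
        q = [wi / z for wi in w]
        # "discriminator" step: find the test with the largest advantage > eps
        best = None
        for f in tests:
            adv = sum((target_p[i] - q[i]) * f(x) for i, x in enumerate(domain))
            if abs(adv) > eps and (best is None or abs(adv) > abs(best[1])):
                best = (f, adv)
        if best is None:
            return q  # no test distinguishes model from target: done
        f, adv = best
        sign = 1.0 if adv > 0 else -1.0
        # boosting-style update: exponentially reweight toward the target on f
        w = [wi * math.exp(sign * eta * f(x)) for wi, x in zip(w, domain)]
    z = sum(w)
    return [wi / z for wi in w]  # may still be distinguishable if rounds ran out
```

With point-indicator tests the model converges to the target itself; with a coarser test class it converges to a distribution matching only the statistics the tests can see, which is where the maximum-entropy interpretation comes from.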
We can also use this connection in the context of regularity lemmas, such as the Szemerédi regularity lemma for graphs, the Gowers algebraic regularity lemma, and the Frieze-Kannan (cut norm) regularity lemma, as well as their generalizations. Using recursive applications of the dense model theorem, we can give quantitatively improved regularity lemmas.
This talk will give an overview and will avoid most technical details.