Russell Impagliazzo (Theory Seminar)

"A Unified Approach to Smooth Boosting"
Russell Impagliazzo (UC San Diego)
Monday, October 21, 2019, 2:00 pm
EBU3B, Room 4258


Abstract: Boosting is a technique for leveraging a weak learner, one that returns a hypothesis $h$ with just a small correlation with the target function $f$, to create a strong learner, one whose final hypothesis has small error probability. The first boosting algorithm is due to Schapire (\cite{Schapire90}). Since then, a wide variety of boosting algorithms have been introduced for many application domains, with Freund and Schapire's AdaBoost algorithm (\cite{FreundSchapire97}) among the most prominent, and also optimal in round complexity. Smoothness is an additional property that a boosting algorithm can have, introduced by Servedio (\cite{Servedio03}) to create learning algorithms that work even in the face of malicious noise. While AdaBoost is not smooth, Servedio gave related boosting algorithms that are.
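Concretely, smoothness constrains the distributions the booster feeds to the weak learner: relative to the underlying distribution $D$ on a sample of $m$ examples, a reweighted distribution $D'$ is $\kappa$-smooth if, roughly,
\[ D'(x) \le \kappa \cdot D(x) \quad \text{for all } x, \]
so no single example is ever overweighted; a smooth booster keeps $\kappa$ bounded by about $1/\epsilon$, where $\epsilon$ is the target error (the exact normalization varies between papers).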

Klivans and Servedio (\cite{KS03}) showed that there is a tight connection between smooth boosting algorithms and the hard-core distribution lemma from complexity theory (\cite{I95}). They observed that the constructive proof of the hard-core distribution lemma implicitly gives a smooth boosting algorithm. Holenstein's tight version of the hard-core distribution lemma (\cite{Hol05}) also implicitly contains a smooth boosting algorithm (which we will make explicit here). Holenstein's algorithm is optimal in terms of the error vs. smoothness relationship, but not in terms of round complexity. Barak, Hardt, and Kale (\cite{BHK09}) give a more complicated boosting algorithm, using Bregman projections, that is optimal with respect to both smoothness and round complexity.
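For context, the hard-core distribution lemma says, roughly (parameters indicative only): if every circuit of size $s$ errs on at least a $\delta$ fraction of inputs when computing $f$, then there is a measure $M$ of density at least $\delta$ (essentially $2\delta$ in Holenstein's tight version) such that
\[ \Pr_{x \sim M}[C(x) = f(x)] \le \frac{1}{2} + \epsilon \]
for every circuit $C$ of size $s' = s \cdot \mathrm{poly}(\epsilon, \delta)$. In the correspondence of \cite{KS03}, the density of the measure corresponds, roughly, to the smoothness of the booster's distributions, and the advantage $\epsilon$ to the weak learner's correlation.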

We give a simplified algorithm, combining the approaches of \cite{Hol05, BHK09}, that also achieves the optimal trade-off between all three parameters: error, smoothness, and round complexity. We also give a sample complexity analysis of our algorithm. Another advantage is that the new algorithm can easily be made adaptive, taking advantage of better-than-guaranteed weak learner performance.