A Bayesian Approach for Online Classifier Ensemble

We propose a Bayesian approach for recursively estimating the classifier weights in online learning of a classifier ensemble. In contrast with past methods, such as stochastic gradient descent or online boosting, our approach estimates the weights by recursively updating their posterior distribution. For a specified class of loss functions, we show that it is possible to formulate a suitably defined likelihood function and hence use the posterior distribution as an approximation to the global empirical loss minimizer. If the stream of training data is sampled from a stationary process, we can also show that our approach attains a faster rate of convergence to the expected loss minimizer than is achievable with standard stochastic gradient descent.

For more details, see the ICML'14 paper, the full version, and the Slides.
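As a rough illustration of the general idea (not the exact scheme from the paper), the sketch below maintains a particle approximation to the posterior over ensemble weights and reweights each particle by a pseudo-likelihood proportional to exp(-loss) on every new example. The class name, the hinge loss, the Dirichlet prior, and the resampling rule are all assumptions made for this sketch.

```python
import numpy as np

def hinge_loss(w, preds, y):
    # preds: base-classifier outputs in {-1, +1}; y: true label in {-1, +1}
    margin = y * np.dot(w, preds)
    return max(0.0, 1.0 - margin)

class BayesianEnsemble:
    """Particle approximation to a posterior over ensemble weights.

    At step t, every particle w is reweighted by a pseudo-likelihood
    proportional to exp(-loss(w; x_t, y_t)), so the particle cloud
    concentrates on weight vectors with low empirical loss.
    """
    def __init__(self, n_classifiers, n_particles=500, seed=0):
        self.rng = np.random.default_rng(seed)
        # Particles: candidate weight vectors on the simplex (Dirichlet prior).
        self.particles = self.rng.dirichlet(np.ones(n_classifiers), size=n_particles)
        self.log_w = np.zeros(n_particles)            # log particle weights

    def update(self, preds, y):
        # Pseudo-likelihood update: penalize particles whose weighted
        # ensemble does poorly on the new example (preds, y).
        losses = np.array([hinge_loss(p, preds, y) for p in self.particles])
        self.log_w -= losses
        self.log_w -= self.log_w.max()                # numerical stability
        w = np.exp(self.log_w)
        w /= w.sum()
        # Resample when the effective sample size degenerates.
        if 1.0 / np.sum(w ** 2) < 0.5 * len(w):
            idx = self.rng.choice(len(w), size=len(w), p=w)
            self.particles = self.particles[idx]
            self.log_w = np.zeros(len(w))

    def weights(self):
        # Posterior-mean estimate of the classifier weights.
        w = np.exp(self.log_w)
        w /= w.sum()
        return w @ self.particles

# Tiny usage example with three hypothetical base classifiers:
# ens = BayesianEnsemble(n_classifiers=3)
# ens.update(preds=np.array([1, -1, 1]), y=1)   # one labeled example
# print(ens.weights())
```

The particle filter is used here only because it keeps the recursive posterior update explicit and self-contained; the update actually analyzed in the paper may take a different form.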
Downloads
[Code*] [Dataset* + Result*]
[BibTex]

Online Ensemble Learning
A Stochastic Optimization Problem
Motivation: Two classical results in Bayesian statistics
Main results: A Bayesian scheme for online classifier ensemble