
## Adaptive Online Prediction by Following the Perturbed Leader

- Authors: Marcus Hutter and Jan Poland (2004-2005)
- Comments: 25 pages
- Subj-class: Artificial Intelligence; Learning
- Reference: Journal of Machine Learning Research, 6 (2005) 639--660
- Report-no: IDSIA-10-05 and cs.AI/0504078
- Paper: LaTeX - PostScript - PDF - Html/Gif
- Slides: PostScript - PDF

Keywords: Prediction with Expert Advice, Follow the Perturbed Leader, general weights, adaptive learning rate, adaptive adversary, hierarchy of experts, expected and high probability bounds, general alphabet and loss, online sequential prediction.

Abstract: When applying aggregating strategies to Prediction with Expert Advice, the learning rate must be adaptively tuned. The natural choice of sqrt(complexity/current loss) renders the analysis of Weighted Majority derivatives quite complicated. In particular, for arbitrary weights there have been no results proven so far. The analysis of the alternative "Follow the Perturbed Leader" (FPL) algorithm from Kalai & Vempala (2003) (based on Hannan's algorithm) is easier. We derive loss bounds for adaptive learning rate and both finite expert classes with uniform weights and countable expert classes with arbitrary weights. For the former setup, our loss bounds match the best known results so far, while for the latter our results are new.
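To illustrate the FPL idea from the abstract, here is a minimal sketch: at each round the learner picks the expert whose cumulative past loss, minus a random exponential perturbation, is smallest. The perturbation distribution and the `sqrt(ln n / t)` learning-rate schedule below are common textbook choices assumed for illustration, not the paper's exact tuning (the paper adapts the rate to complexity and current loss).

```python
import math
import random

def fpl(loss_seq, n_experts, seed=0):
    """Follow the Perturbed Leader over a sequence of loss vectors.

    loss_seq: list of per-round loss vectors, one loss in [0, 1] per expert.
    Returns (learner's total loss, best single expert's total loss).
    """
    rng = random.Random(seed)
    cum = [0.0] * n_experts      # cumulative loss of each expert so far
    learner_loss = 0.0
    for t, losses in enumerate(loss_seq, start=1):
        # Adaptive rate eta_t ~ sqrt(ln n / t); an assumed schedule here,
        # standing in for the paper's sqrt(complexity / current loss) choice.
        eta = math.sqrt(math.log(n_experts) / t)
        # Subtract an exponential perturbation scaled by 1/eta, then
        # follow the (perturbed) leader: the expert with minimal score.
        scores = [cum[i] - rng.expovariate(1.0) / eta
                  for i in range(n_experts)]
        leader = min(range(n_experts), key=lambda i: scores[i])
        learner_loss += losses[leader]
        for i in range(n_experts):
            cum[i] += losses[i]
    return learner_loss, min(cum)
```

A small learning rate means large perturbations (heavy exploration); as t grows, eta grows relative to the noise and the learner commits to the empirical leader, which is the mechanism behind the regret bounds discussed above.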


```bibtex
@Article{Hutter:05expertx,
  author    = "M. Hutter and J. Poland",
  title     = "Adaptive Online Prediction by Following the Perturbed Leader",
  volume    = "6",
  month     = apr,
  year      = "2005",
  pages     = "639--660",
  journal   = "Journal of Machine Learning Research",
  publisher = "Microtome",
  http      = "http://www.hutter1.net/ai/expertx.htm",
  url       = "http://arxiv.org/abs/cs.AI/0504078",
  url2      = "http://www.jmlr.org/papers/v6/hutter05a.html",
  ftp       = "ftp://ftp.idsia.ch/pub/techrep/IDSIA-10-05.pdf",
  keywords  = "Prediction with Expert Advice, Follow the Perturbed Leader,
               general weights, adaptive learning rate, adaptive adversary,
               hierarchy of experts, expected and high probability bounds,
               general alphabet and loss, online sequential prediction.",
  abstract  = "When applying aggregating strategies to Prediction with
               Expert Advice, the learning rate must be adaptively tuned.
               The natural choice of sqrt(complexity/current loss) renders
               the analysis of Weighted Majority derivatives quite
               complicated. In particular, for arbitrary weights there have
               been no results proven so far. The analysis of the
               alternative ``Follow the Perturbed Leader'' (FPL) algorithm
               from Kalai & Vempala (2003) (based on Hannan's algorithm) is
               easier. We derive loss bounds for adaptive learning rate and
               both finite expert classes with uniform weights and countable
               expert classes with arbitrary weights. For the former setup,
               our loss bounds match the best known results so far, while
               for the latter our results are new.",
}
```
