Empirical Comparisons of Online Boosting Algorithms
- DOI
- 10.2991/icmra-15.2015.74
- Keywords
- boosting; ensemble learning; online learning; accuracy; running time
- Abstract
Boosting is an effective classifier combination method that can improve the classification performance of an unstable learning algorithm, and it comes with theoretical performance guarantees and strong experimental results. However, the algorithm has been used mainly in batch mode, i.e., it requires the entire training set to be available at once and, in some cases, requires random access to the data. Recently, Nikunj C. Oza (2001) presented some preliminary theoretical results and empirical comparisons of the classification accuracies of online algorithms with their corresponding batch algorithms on many datasets. In this paper, we present online versions of some boosting methods that require only one pass through the training data. Specifically, we discuss how our online algorithms mirror the techniques that boosting uses to generate multiple distinct base models. We also present theoretical and experimental evidence that our online algorithms succeed in this mirroring. Our online algorithms are shown to be more practical with larger datasets. We also compare the online and batch algorithms experimentally in terms of accuracy.
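As a rough illustration of how an online booster can mirror batch boosting's reweighting in a single pass over the data, the Python sketch below follows the Oza-Russell style referenced in the abstract: each incoming example is presented to each base model k ~ Poisson(λ) times, and λ is then lowered or raised depending on whether that model classifies the example correctly. The `partial_fit`/`predict` base-learner interface and all identifiers are illustrative assumptions, not code from the paper.

```python
import math
import random


def poisson(lam):
    """Draw a sample from Poisson(lam) using Knuth's method."""
    threshold = math.exp(-lam)
    k, p = 0, 1.0
    while True:
        p *= random.random()
        if p <= threshold:
            return k
        k += 1


class OnlineBooster:
    """Single-pass (online) boosting in the style of Oza and Russell.

    Each base model is assumed to expose a `partial_fit(x, y)` incremental
    update method and a `predict(x)` method; this interface is an
    illustration, not the paper's code.
    """

    def __init__(self, base_models):
        self.models = base_models
        self.lam_sc = [0.0] * len(base_models)  # accumulated weight of correctly classified examples
        self.lam_sw = [0.0] * len(base_models)  # accumulated weight of misclassified examples

    def update(self, x, y):
        """Process one training example; no earlier data is revisited."""
        lam = 1.0
        for m, model in enumerate(self.models):
            # Mimic boosting's reweighted resampling by presenting the
            # example k ~ Poisson(lam) times to this base model.
            for _ in range(poisson(lam)):
                model.partial_fit(x, y)
            if model.predict(x) == y:
                self.lam_sc[m] += lam
                total = self.lam_sc[m] + self.lam_sw[m]
                lam *= total / (2.0 * self.lam_sc[m])  # shrink the weight of easy examples
            else:
                self.lam_sw[m] += lam
                total = self.lam_sc[m] + self.lam_sw[m]
                lam *= total / (2.0 * self.lam_sw[m])  # grow the weight of hard examples

    def predict(self, x):
        """Combine base models with AdaBoost-style log-odds weights."""
        votes = {}
        for m, model in enumerate(self.models):
            total = self.lam_sc[m] + self.lam_sw[m]
            if total == 0.0:
                continue
            eps = max(self.lam_sw[m] / total, 1e-10)  # estimated error of model m
            if eps >= 0.5:
                continue
            weight = math.log((1.0 - eps) / eps)
            label = model.predict(x)
            votes[label] = votes.get(label, 0.0) + weight
        return max(votes, key=votes.get) if votes else None
```

The Poisson draw approximates sampling with replacement from a weighted training set, which is what lets the batch resampling step be replaced by a one-pass, example-at-a-time update.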
- Copyright
- © 2015, the Authors. Published by Atlantis Press.
- Open Access
- This is an open access article distributed under the CC BY-NC license (http://creativecommons.org/licenses/by-nc/4.0/).
Cite this article
TY - CONF
AU - Xiaowei Sun
PY - 2015/04
DA - 2015/04
TI - Empirical Comparisons of Online Boosting Algorithms
BT - Proceedings of the 3rd International Conference on Mechatronics, Robotics and Automation
PB - Atlantis Press
SP - 375
EP - 380
SN - 2352-538X
UR - https://doi.org/10.2991/icmra-15.2015.74
DO - 10.2991/icmra-15.2015.74
ID - Sun2015/04
ER -