An Improved Caching Strategy for Training
- DOI
- 10.2991/iske.2007.107
- Keywords
- support vector machine, working set selection, shrinking, caching, kernel evaluation, sequential minimal optimization
- Abstract
Computational complexity is one of the most important issues in training Support Vector Machines (SVMs), which amounts to solving linearly constrained convex quadratic programming problems. State-of-the-art SVM training employs iterative decomposition strategies that focus on working-set selection to solve these quadratic programming problems. Shrinking and caching are two indispensable techniques for reducing the complexity of the decomposition process. Yet most existing caching strategies consider only the usage records of samples, ignoring the probabilities of samples being selected into working sets, even though these probabilities may determine the efficiency of caching. This paper proposes an improved caching strategy that takes these selection probabilities into account, in order to reduce the computational cost of kernel evaluations in SVM training. Experiments on several benchmark data sets show that our caching strategy is more efficient than existing ones.
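The abstract does not spell out how the selection probabilities are estimated or how they drive eviction, so the sketch below is only one plausible reading: a kernel-row cache that, instead of evicting the least-recently-used row, evicts the row of the sample that currently looks least likely to re-enter the working set, using a per-sample selection counter as a crude probability proxy. The class `ProbabilityAwareKernelCache`, its `select_count` statistic, and the `rbf_row` helper are all hypothetical names for illustration, not from the paper.

```python
import numpy as np

class ProbabilityAwareKernelCache:
    """Cache of kernel-matrix rows for SVM decomposition methods.

    Hypothetical sketch: eviction prefers samples with the lowest
    estimated probability of being selected into the working set
    again, approximated here by a running selection count. The
    paper's actual probability estimate may differ.
    """

    def __init__(self, capacity, kernel_fn, X):
        self.capacity = capacity              # max number of cached rows
        self.kernel_fn = kernel_fn            # kernel_fn(x_i, X) -> full row i
        self.X = X                            # training samples
        self.rows = {}                        # sample index -> cached kernel row
        self.select_count = np.zeros(len(X))  # working-set selection statistics

    def note_selected(self, i):
        """Record that sample i entered the working set this iteration."""
        self.select_count[i] += 1

    def get_row(self, i):
        """Return kernel row i, computing and caching it on a miss."""
        if i not in self.rows:
            if len(self.rows) >= self.capacity:
                # Evict the cached sample with the lowest estimated
                # probability of re-selection (lowest selection count).
                victim = min(self.rows, key=lambda j: self.select_count[j])
                del self.rows[victim]
            self.rows[i] = self.kernel_fn(self.X[i], self.X)
        return self.rows[i]


# Illustrative usage with an RBF kernel row:
def rbf_row(x, X, gamma=0.5):
    return np.exp(-gamma * np.sum((X - x) ** 2, axis=1))

X = np.random.default_rng(0).normal(size=(100, 5))
cache = ProbabilityAwareKernelCache(capacity=10, kernel_fn=rbf_row, X=X)
cache.note_selected(3)
row = cache.get_row(3)  # kernel evaluations happen only on a cache miss
```

Under this reading, a purely LRU cache would correspond to ordering evictions by last access time; the point of the strategy is that recency alone can keep rows for samples that shrinking has effectively ruled out, while a selection-probability score retains the rows most likely to be needed again.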
- Copyright
- © 2007, the Authors. Published by Atlantis Press.
- Open Access
- This is an open access article distributed under the CC BY-NC license (http://creativecommons.org/licenses/by-nc/4.0/).
- Cite this article
TY  - CONF
AU  - Liang Zhou
AU  - Fen Xia
AU  - Yanwu Yang
PY  - 2007/10
DA  - 2007/10
TI  - An Improved Caching Strategy for Training
BT  - Proceedings of the 2007 International Conference on Intelligent Systems and Knowledge Engineering (ISKE 2007)
PB  - Atlantis Press
SP  - 623
EP  - 629
SN  - 1951-6851
UR  - https://doi.org/10.2991/iske.2007.107
DO  - 10.2991/iske.2007.107
ID  - Zhou2007/10
ER  -