Improving Trained LS-SVM Performance with New Available Data
- DOI
- 10.2991/iske.2007.114
- Keywords
- LS-SVM, concept updating, learning
- Abstract
Learning is the process of obtaining an underlying rule from training data sampled from the environment. In many practical situations with inductive learning algorithms, it is desirable to further improve generalization capability after the learning process has been completed, if new data become available. A common approach is to add the new training data and retrain the algorithm, but retraining for each new data point or data set can be very expensive. In view of how human beings learn, it seems natural to build posterior learning results upon prior ones. In this paper, we first propose an updating procedure for the least squares support vector machine (LS-SVM): if the initial concept is built by the LS-SVM inductive algorithm, the updated concept is the normal solution corresponding to the initially learned concept. Second, we discuss a general framework for updating a learned concept. Finally, we illustrate the updating method and evaluate it on toy and real data; the results show that performance after updating is improved and is almost equal to that of an LS-SVM retrained on the whole data set.
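The updating idea described in the abstract can be made concrete with a small sketch. The snippet below is a minimal illustration, assuming a standard LS-SVM regression dual (a single linear system in the bias b and dual variables alpha) and a block-matrix (Schur complement) update of the stored inverse when new points arrive; the RBF kernel, the regularization constant gamma, and the class name IncrementalLSSVM are illustrative assumptions, not the paper's exact procedure.

```python
import numpy as np

def rbf_kernel(A, B, sigma=1.0):
    """RBF kernel matrix between the rows of A and the rows of B."""
    d2 = np.sum(A**2, 1)[:, None] + np.sum(B**2, 1)[None, :] - 2 * A @ B.T
    return np.exp(-d2 / (2 * sigma**2))

class IncrementalLSSVM:
    """LS-SVM regression trained by solving the dual linear system once;
    new data are absorbed by updating the stored inverse instead of retraining.
    NOTE: illustrative sketch only; the paper's updating formula may differ."""

    def __init__(self, gamma=10.0, sigma=1.0):
        self.gamma, self.sigma = gamma, sigma

    def fit(self, X, y):
        n = len(y)
        K = rbf_kernel(X, X, self.sigma)
        # Dual system matrix: [[0, 1^T], [1, K + I/gamma]]
        A = np.zeros((n + 1, n + 1))
        A[0, 1:] = 1.0
        A[1:, 0] = 1.0
        A[1:, 1:] = K + np.eye(n) / self.gamma
        self.A_inv = np.linalg.inv(A)
        self.X, self.rhs = X, np.concatenate(([0.0], y))
        self._solve()
        return self

    def update(self, X_new, y_new):
        """Absorb new points via the block-matrix (Schur complement) inverse update."""
        m = len(y_new)
        # Coupling block: ones for the bias row, kernel values against old points below.
        B = np.vstack([np.ones((1, m)), rbf_kernel(self.X, X_new, self.sigma)])
        D = rbf_kernel(X_new, X_new, self.sigma) + np.eye(m) / self.gamma
        AiB = self.A_inv @ B
        S_inv = np.linalg.inv(D - B.T @ AiB)        # inverse of the Schur complement
        top_left = self.A_inv + AiB @ S_inv @ AiB.T
        top_right = -AiB @ S_inv
        self.A_inv = np.block([[top_left, top_right],
                               [top_right.T, S_inv]])
        self.X = np.vstack([self.X, X_new])
        self.rhs = np.concatenate([self.rhs, y_new])
        self._solve()
        return self

    def _solve(self):
        sol = self.A_inv @ self.rhs
        self.b, self.alpha = sol[0], sol[1:]

    def predict(self, X):
        return rbf_kernel(X, self.X, self.sigma) @ self.alpha + self.b
```

On a small toy problem, calling update() and then predict() should give results very close to a model refitted on the pooled old and new data, which is the kind of comparison with full retraining reported in the abstract.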
- Copyright
- © 2007, the Authors. Published by Atlantis Press.
- Open Access
- This is an open access article distributed under the CC BY-NC license (http://creativecommons.org/licenses/by-nc/4.0/).
Cite this article
TY - CONF
AU - Yangguang Liu
AU - Bin Xu
AU - Jun Liu
PY - 2007/10
DA - 2007/10
TI - Improving Trained LS-SVM Performance with New Available Data
BT - Proceedings of the 2007 International Conference on Intelligent Systems and Knowledge Engineering (ISKE 2007)
PB - Atlantis Press
SP - 667
EP - 671
SN - 1951-6851
UR - https://doi.org/10.2991/iske.2007.114
DO - 10.2991/iske.2007.114
ID - Liu2007/10
ER -