Multi-pretraining Deep Neural Network by DBN and SDA
- DOI
- 10.2991/ceis-16.2016.11
- Keywords
- pretrain; deep belief network; stacked de-noising auto-encoder
- Abstract
Pretraining is widely used in deep neural networks, and one of the best-known pretraining models is the Deep Belief Network (DBN), built by stacking Restricted Boltzmann Machines (RBMs). Different pretraining models optimize different objectives during the pretraining process. In this paper, we pretrain a deep neural network with different pretraining models and thereby investigate the difference between the DBN and the Stacked De-noising Auto-encoder (SDA), built by stacking De-noising Auto-encoders (DAs), when both are used for pretraining. The experimental results show that the DBN yields a better initial model, but after finetuning this model converges to a comparatively worse solution. In contrast, when the network is pretrained a second time by an SDA, it converges to a better model after finetuning.
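To make the SDA side of the comparison concrete, below is a minimal NumPy sketch of greedy layer-wise pretraining with one de-noising auto-encoder layer: corrupt the input, encode, reconstruct, and update tied weights by gradient descent on the reconstruction error. The function name `pretrain_da_layer` and all hyperparameter values are illustrative assumptions, not the paper's actual configuration.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def pretrain_da_layer(X, n_hidden, noise=0.3, lr=0.1, epochs=10):
    """Pretrain one de-noising auto-encoder layer (hypothetical sketch).

    X: (n_samples, n_visible) inputs scaled to [0, 1].
    Returns learned weights/bias and the hidden representation,
    which serves as input when pretraining the next layer.
    """
    n_visible = X.shape[1]
    W = rng.normal(0.0, 0.01, size=(n_visible, n_hidden))
    b_h = np.zeros(n_hidden)
    b_v = np.zeros(n_visible)
    for _ in range(epochs):
        # Corrupt the input by randomly zeroing a fraction of entries.
        X_tilde = X * (rng.random(X.shape) > noise)
        H = sigmoid(X_tilde @ W + b_h)      # encode corrupted input
        X_hat = sigmoid(H @ W.T + b_v)      # decode with tied weights
        # Gradients of the cross-entropy reconstruction loss
        # against the *clean* input X.
        d_out = X_hat - X
        d_hid = (d_out @ W) * H * (1.0 - H)
        W -= lr * (X_tilde.T @ d_hid + d_out.T @ H) / len(X)
        b_v -= lr * d_out.mean(axis=0)
        b_h -= lr * d_hid.mean(axis=0)
    return W, b_h, sigmoid(X @ W + b_h)

# Stacking two layers greedily, as in an SDA:
X = rng.random((100, 64))
W1, b1, H1 = pretrain_da_layer(X, 32)
W2, b2, H2 = pretrain_da_layer(H1, 16)
```

The learned weights would then initialize the corresponding layers of the feed-forward network before supervised finetuning; a DBN-based run differs only in replacing this reconstruction objective with RBM training (e.g., contrastive divergence) for each layer.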
- Copyright
- © 2017, the Authors. Published by Atlantis Press.
- Open Access
- This is an open access article distributed under the CC BY-NC license (http://creativecommons.org/licenses/by-nc/4.0/).
Cite this article
TY  - CONF
AU  - Zhen Hu
AU  - Zhu-Yin Xue
AU  - Tong Cui
AU  - Shi-Qiang Zong
AU  - Cheng-Long He
PY  - 2016/11
DA  - 2016/11
TI  - Multi-pretraining Deep Neural Network by DBN and SDA
BT  - Proceedings of the 2016 International Conference on Computer Engineering and Information Systems
PB  - Atlantis Press
SP  - 52
EP  - 55
SN  - 2352-538X
UR  - https://doi.org/10.2991/ceis-16.2016.11
DO  - 10.2991/ceis-16.2016.11
ID  - Hu2016/11
ER  -