Training Backpropagation Neural Network in MapReduce
- DOI
- 10.2991/ccit-14.2014.7
- Keywords
- MapReduce, backpropagation, neural network, intermediate data.
- Abstract
A BP neural network is usually trained serially on a single machine, but with massive training data the process becomes slow and consumes excessive system resources. One effective solution to this problem is distributed training on the MapReduce framework. Several such methods have been proposed, but they remain slow when the neural network has a complex structure. This paper presents a new MapReduce-based method for BP neural network training, MR-TMNN (MapReduce based Training in Mapper Neural Network). The method performs most of the training inside the Mappers and emits only the variations of the weights and thresholds to the Reducer, which applies them as a batch update. This effectively reduces the volume of intermediate data produced by the Mappers, lowering I/O cost and thereby accelerating training. Experimental results show that MR-TMNN achieves better convergence than the conventional training method without losing much accuracy, and it still performs well as the complexity of the neural network structure increases.
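The mapper-side accumulation the abstract describes can be pictured with a short Hadoop MapReduce sketch. Everything below is a hypothetical illustration, not the authors' implementation: the class names (DeltaMapper, DeltaReducer), the flat double[] parameter layout, the fixed parameter count, and the parse/backprop stubs are all assumptions. Only the overall split follows the abstract: each Mapper runs forward and backward passes locally and accumulates weight/threshold variations, emitting one record per parameter in cleanup() rather than one per training example, and the Reducer sums these variations for the batch update.

```java
import java.io.IOException;
import org.apache.hadoop.io.*;
import org.apache.hadoop.mapreduce.*;

public class MRTMNNSketch {

    // Mapper: train locally and accumulate the weight/threshold variations,
    // so the intermediate data volume is proportional to the number of
    // parameters, not the number of training examples.
    public static class DeltaMapper
            extends Mapper<LongWritable, Text, IntWritable, DoubleWritable> {
        private double[] deltaSum;  // accumulated variations, one slot per parameter

        @Override
        protected void setup(Context ctx) {
            // Assumed: topology and current weights would be broadcast to each
            // Mapper (e.g. via the distributed cache); size 100 is a placeholder.
            deltaSum = new double[100];
        }

        @Override
        protected void map(LongWritable key, Text value, Context ctx) {
            double[] sample = parse(value.toString());  // hypothetical parser stub
            double[] delta = backprop(sample);          // one forward+backward pass
            for (int i = 0; i < deltaSum.length; i++)
                deltaSum[i] += delta[i];                // local accumulation only
        }

        @Override
        protected void cleanup(Context ctx)
                throws IOException, InterruptedException {
            // Emit one (parameter index, accumulated variation) pair per parameter.
            for (int i = 0; i < deltaSum.length; i++)
                ctx.write(new IntWritable(i), new DoubleWritable(deltaSum[i]));
        }

        // Placeholder stubs; a real implementation would parse the record and
        // compute the backpropagation deltas for the current weights.
        private double[] parse(String line) { return new double[0]; }
        private double[] backprop(double[] x) { return new double[deltaSum.length]; }
    }

    // Reducer: sum the per-Mapper variations for each parameter; the driver
    // would then apply the batch update and launch the next iteration.
    public static class DeltaReducer
            extends Reducer<IntWritable, DoubleWritable, IntWritable, DoubleWritable> {
        @Override
        protected void reduce(IntWritable key, Iterable<DoubleWritable> deltas,
                              Context ctx) throws IOException, InterruptedException {
            double sum = 0.0;
            for (DoubleWritable d : deltas) sum += d.get();
            ctx.write(key, new DoubleWritable(sum));
        }
    }
}
```

Under these assumptions, each Mapper contributes only as many intermediate records as there are parameters, which is the I/O saving the abstract attributes to MR-TMNN.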
- Copyright
- © 2014, the Authors. Published by Atlantis Press.
- Open Access
- This is an open access article distributed under the CC BY-NC license (http://creativecommons.org/licenses/by-nc/4.0/).
Cite this article
TY  - CONF
AU  - Binhan Zhou
AU  - Wenjun Wang
AU  - Xiangfeng Zhang
PY  - 2014/01
DA  - 2014/01
TI  - Training Backpropagation Neural Network in MapReduce
BT  - Proceedings of the 2014 International Conference on Computer, Communications and Information Technology
PB  - Atlantis Press
SP  - 22
EP  - 25
SN  - 1951-6851
UR  - https://doi.org/10.2991/ccit-14.2014.7
DO  - 10.2991/ccit-14.2014.7
ID  - Zhou2014/01
ER  -