Convolutional neural networks (CNNs) are typical structures for deep learning and are widely used in image recognition and classification. However, the random initialization strategy tends to become stuck at local plateaus or even diverge, which results in rather unstable and ineffective solutions in real applications. To address this limitation, we propose a hybrid deep learning CNN-AdapDAE model, which applies the features learned by the AdapDAE algorithm to initialize CNN filters and then trains the improved CNN for classification tasks. In this model, AdapDAE serves as a CNN pre-training procedure that adaptively sets the noise level according to an annealing principle: training starts with a high level of noise and lowers it as training progresses. The features learned by AdapDAE therefore combine features at different levels of granularity. Extensive experimental results on the STL-10, CIFAR-10, and MNIST datasets demonstrate that the proposed algorithm performs favorably compared to CNN (random filters), CNNAE (filters pre-trained by an autoencoder), and several other unsupervised feature learning methods.
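To make the annealed pre-training idea concrete, the following is a minimal sketch, not the paper's actual implementation: a single-layer denoising autoencoder on image patches whose masking-noise probability is lowered from a high to a low value over the epochs, so that early epochs learn coarse, global structure and later epochs refine fine-grained features. The linear schedule in `noise_level`, the patch and filter sizes, and all names here are illustrative assumptions; the paper's exact annealing rule and architecture may differ. The learned encoder weights would then initialize the CNN's first-layer filters.

```python
import torch
import torch.nn as nn

# Hypothetical AdapDAE-style sketch: a denoising autoencoder on flattened
# 8x8 RGB patches, with the corruption level annealed as training proceeds.
patch_dim, hidden_dim = 8 * 8 * 3, 64            # 64 learned filters
encoder = nn.Linear(patch_dim, hidden_dim)
decoder = nn.Linear(hidden_dim, patch_dim)
opt = torch.optim.Adam(
    list(encoder.parameters()) + list(decoder.parameters()), lr=1e-3)
loss_fn = nn.MSELoss()

def noise_level(epoch, n_epochs, high=0.5, low=0.05):
    """Anneal the masking probability linearly from `high` down to `low`
    (an assumed schedule; the paper's adaptive rule may differ)."""
    return high - (high - low) * epoch / max(n_epochs - 1, 1)

n_epochs = 30
patches = torch.rand(10_000, patch_dim)          # stand-in for real patches

for epoch in range(n_epochs):
    p = noise_level(epoch, n_epochs)
    for batch in patches.split(256):
        # Masking corruption: zero out each input unit with probability p.
        mask = (torch.rand_like(batch) > p).float()
        recon = torch.sigmoid(decoder(torch.relu(encoder(batch * mask))))
        loss = loss_fn(recon, batch)             # reconstruct the clean input
        opt.zero_grad()
        loss.backward()
        opt.step()

# The encoder weights can then initialize the CNN's first convolutional
# layer: one 8x8x3 filter per hidden unit (assuming channel-first
# flattening of the patches).
conv_filters = encoder.weight.detach().view(hidden_dim, 3, 8, 8)
```

Because the reconstruction target is always the clean patch, a high early noise level forces the filters to capture broad structure, while the low final noise level lets them specialize, which is why the resulting filter bank mixes granularities.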