Deep Convolutional Neural Networks: Structure, Feature Extraction and Training
Abstract
Deep convolutional neural networks (CNNs) are designed to process data that have a known, grid-like topology. They are widely used to recognise objects in images, to detect patterns in time series, and to classify sensor data. The aim of this paper is to present the theoretical and practical aspects of deep CNNs: the convolution operation, the typical layers, and the basic methods used for training and learning. Practical applications to signal and image classification are included. Finally, the paper describes a proposed block structure of a CNN for classifying crucial features from 3D sensor data.
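To illustrate the convolution operation referred to above, the following is a minimal NumPy sketch of a valid (no-padding) 2-D convolution of the kind used for feature extraction in CNN layers. It is not code from the paper itself; the toy image, the edge-detection kernel, and the function name conv2d are illustrative assumptions.

```python
import numpy as np

def conv2d(image, kernel, stride=1):
    """Valid (no-padding) 2-D convolution over a single-channel image.

    As is common in deep-learning practice, the kernel is applied without
    flipping (i.e. cross-correlation); learned weights absorb the difference.
    """
    kh, kw = kernel.shape
    ih, iw = image.shape
    oh = (ih - kh) // stride + 1          # output height
    ow = (iw - kw) // stride + 1          # output width
    out = np.zeros((oh, ow))
    for i in range(oh):
        for j in range(ow):
            # Element-wise product of the kernel with the current image patch
            patch = image[i * stride:i * stride + kh,
                          j * stride:j * stride + kw]
            out[i, j] = np.sum(patch * kernel)
    return out

# Hypothetical example: a vertical-edge kernel applied to a small synthetic image
image = np.array([[0, 0, 1, 1],
                  [0, 0, 1, 1],
                  [0, 0, 1, 1],
                  [0, 0, 1, 1]], dtype=float)
kernel = np.array([[1, 0, -1],
                   [1, 0, -1],
                   [1, 0, -1]], dtype=float)
print(conv2d(image, kernel))  # 2x2 feature map of edge responses
```

In a full CNN, many such kernels are learned per layer, and the resulting feature maps are passed through a non-linearity and pooling before the next layer, which is the layer stacking the paper discusses.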
Keywords: Convolution layers; convolution operation; deep convolutional neural networks; feature extraction
Copyright (c) 2017 Ivars Namatēvs
This work is licensed under a Creative Commons Attribution 4.0 International License.