Matrix Neo-Fuzzy-System and its Online Learning in Image Recognition Task

Olha Chala, Yevgeniy Bodyanskiy

Abstract


The paper proposes a 2D hybrid system of computational intelligence based on the generalized neo-fuzzy neuron. The system is characterised by high approximation capabilities, simple computational implementation, and high learning speed. Its distinctive property is that the input signal is fed not in the traditional vector form but in the form of an image matrix. This approach makes it possible to dispense with the additional convolution-pooling layers that deep neural networks use as an encoder. The main elements of the proposed system are a fuzzified multidimensional bilinear model, an additional softmax layer, and a multidimensional generalized neo-fuzzy neuron tuned with the cross-entropy criterion. Compared to deep neural systems, the proposed matrix neo-fuzzy system contains considerably fewer tuning parameters (synaptic weights). The use of a time-optimal algorithm for tuning the synaptic weights makes it possible to implement learning in an online mode.
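
As a rough illustration of the architecture described above, the sketch below feeds every pixel of an input image into a nonlinear synapse built from triangular membership functions, sums the weighted membership degrees into class logits, passes them through a softmax layer, and updates the synaptic weights online with a cross-entropy gradient step. This is a minimal NumPy sketch under those assumptions: the class name `MatrixNeoFuzzyClassifier`, the number of membership functions, and the plain gradient update are illustrative and do not reproduce the paper's time-optimal tuning algorithm or the fuzzified bilinear model.

```python
# Minimal NumPy sketch of a neo-fuzzy classifier with matrix (image) input,
# triangular membership functions, a softmax output layer, and an online
# cross-entropy gradient step.  Names and hyperparameters are illustrative;
# the paper's time-optimal tuning procedure is not reproduced here.
import numpy as np


class MatrixNeoFuzzyClassifier:
    def __init__(self, image_shape, n_classes, n_mf=5, lr=0.05, seed=0):
        rng = np.random.default_rng(seed)
        self.centers = np.linspace(0.0, 1.0, n_mf)       # triangular MF centers on [0, 1]
        self.width = self.centers[1] - self.centers[0]   # uniform MF spacing
        # one synaptic weight per (pixel, membership function, class)
        n_pixels = image_shape[0] * image_shape[1]
        self.w = 0.01 * rng.standard_normal((n_pixels, n_mf, n_classes))
        self.lr = lr

    def _memberships(self, image):
        """Triangular membership degrees for every pixel, shape (n_pixels, n_mf)."""
        x = image.reshape(-1, 1)                         # pixel intensities in [0, 1]
        return np.maximum(0.0, 1.0 - np.abs(x - self.centers) / self.width)

    def forward(self, image):
        mu = self._memberships(image)                    # (n_pixels, n_mf)
        logits = np.einsum('pm,pmc->c', mu, self.w)      # sum of nonlinear synapses
        e = np.exp(logits - logits.max())
        return mu, e / e.sum()                           # softmax class probabilities

    def partial_fit(self, image, label):
        """One online step: cross-entropy gradient with respect to the weights."""
        mu, p = self.forward(image)
        err = p.copy()
        err[label] -= 1.0                                # d(cross-entropy)/d(logits)
        self.w -= self.lr * np.einsum('pm,c->pmc', mu, err)
        return p


# usage on a random 28x28 "image" (e.g. Fashion-MNIST scaled to [0, 1])
clf = MatrixNeoFuzzyClassifier(image_shape=(28, 28), n_classes=10)
img, y = np.random.default_rng(1).random((28, 28)), 3
probs = clf.partial_fit(img, y)
print(probs.argmax(), probs.sum())
```

In online operation, `partial_fit` would be called once per incoming labelled image, which corresponds to the data stream mining setting mentioned in the keywords.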

 


Keywords:

2D signals; generalized neo-fuzzy neuron; image recognition; membership functions; optimal learning algorithm; data stream mining






DOI: 10.7250/itms-2021-0006


Copyright (c) 2021 Olha Chala, Yevgeniy Bodyanskiy

This work is licensed under a Creative Commons Attribution 4.0 International License.