Reinforcement Learning algorithm for modelling software based on fuzzy cognitive maps

Authors

  • Iván Santana Ching, Universidad Central Marta Abreu de Las Villas (UCLV)
  • Ariel Barreiros, Universidad Central Marta Abreu de Las Villas
  • Richar Sosa, Universidad Central Marta Abreu de Las Villas

Keywords

Machine Learning; Reinforcement Learning; Fuzzy Cognitive Maps

Abstract

Fuzzy Cognitive Maps are a powerful tool for modelling complex systems with poorly determined dynamics while remaining interpretable. However, it is sometimes difficult to determine precisely the relationships between the concepts of a system. In previous research, a software library was designed and developed that is capable of creating this type of model and adjusting it with good precision. To achieve a good fit of a model's weight matrices with the available learning algorithm, the model must be initialized from a specific set of values. In this research, a new Machine Learning algorithm based on Reinforcement Learning techniques was added to the library. It allows the weight matrices to be adjusted more effectively, even when learning faces uncertainty in the initialization of the model values. The results show that a model obtained with the modified library correctly fits the behavior of the system it emulates in a greater number of situations. The quality of the model is directly related to the number of training iterations, with more iterations being favorable. The results were obtained from simulation data of an RLC circuit, to which a noise signal was added to better resemble data from a real process.
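The article itself gives the details of the learning algorithm and of the library's interface. Purely as an illustration of the idea summarized above, the sketch below combines a standard Kosko-style map update with a simple greedy, reward-driven search over the weight matrix (not the paper's algorithm): the matrix starts from random values, so no expert initialization is assumed, and it is refined over training iterations. Every name in the sketch (fcm_step, fit_weights_rl, the sigmoid threshold, the noisy reference trajectory standing in for the RLC-circuit data) is a hypothetical stand-in, not the library's actual API.

import numpy as np

def fcm_step(state, W):
    # One Kosko-style update: a(t+1) = f(a(t) @ W), with a sigmoid threshold.
    return 1.0 / (1.0 + np.exp(-np.dot(state, W)))

def simulate(W, a0, steps):
    # Roll the map forward and return the whole activation trajectory.
    states = [a0]
    for _ in range(steps):
        states.append(fcm_step(states[-1], W))
    return np.array(states)

def fit_weights_rl(target, a0, episodes=500, sigma=0.05, seed=0):
    # Reward-driven search over the weight matrix: perturb W and keep the
    # change only when the reward (negative fit error against the target
    # trajectory) improves. W starts from random values, so no expert
    # initialization is required.
    rng = np.random.default_rng(seed)
    n = target.shape[1]
    W = rng.uniform(-1.0, 1.0, size=(n, n))

    def reward(W):
        pred = simulate(W, a0, target.shape[0] - 1)
        return -np.mean((pred - target) ** 2)

    best = reward(W)
    for _ in range(episodes):
        candidate = np.clip(W + rng.normal(0.0, sigma, size=W.shape), -1.0, 1.0)
        r = reward(candidate)
        if r > best:
            W, best = candidate, r
    return W, -best

# Toy usage: recover a 3-concept map from a noisy reference trajectory,
# loosely mirroring the paper's use of noisy RLC-circuit simulation data.
rng = np.random.default_rng(1)
true_W = rng.uniform(-1.0, 1.0, size=(3, 3))
a0 = np.array([0.5, 0.2, 0.8])
target = simulate(true_W, a0, 20) + rng.normal(0.0, 0.01, size=(21, 3))
W_hat, error = fit_weights_rl(target, a0)
print("mean squared fit error:", error)

As the abstract notes, model quality improves with the number of training iterations; in this sketch that corresponds to the episodes parameter.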

References

Chen, R. Y. (2018). A traceability chain algorithm for artificial neural networks using T–S fuzzy cognitive maps in blockchain. Future Generation Computer Systems, 80, 198-210.

Cielen, D., Meysman, A. & Ali, M. (2016). Introducing data science: big data, machine learning, and more, using Python tools. Manning Publications Co.

Fang, M., Li, Y. & Cohn, T. (2017). Learning how to Active Learn: A Deep Reinforcement Learning Approach. Paper presented at the Conference on Empirical Methods in Natural Language Processing.

François-Lavet, V., Henderson, P., Islam, R., Bellemare, M. G. & Pineau, J. (2018). An introduction to deep reinforcement learning. Foundations and Trends® in Machine Learning, 11(3-4), 219-354.

George, G., Osinga, E. C., Lavie, D. & Scott, B. A. (2016). Big data and data science methods for management research. Academy of Management, Briarcliff Manor, New York.

Hirasawa, T., Aoyama, K., Tanimoto, T., Ishihara, S., Shichijo, S., Ozawa, T., . . . Fujisaki, J. (2018). Application of artificial intelligence using a convolutional neural network for detecting gastric cancer in endoscopic images. Gastric Cancer, 21(4), 653-660.

Jenitha, G. & Kumaravel, A. (2014). An Instance of Reinforcement Learning Based on Fuzzy Cognitive Maps. International Journal of Applied Engineering Research, 9(18), 3913-3920.

Kosko, B. (1986). Fuzzy cognitive maps. International journal of man-machine studies, 24(1), 65-75.

Lange, S., Gabel, T. & Riedmiller, M. (2012). Batch Reinforcement Learning. In M. Wiering & M. van Otterlo (Eds.), Reinforcement Learning: State-of-the-Art (pp. 45-73). Berlin, Heidelberg: Springer Berlin Heidelberg.

Madruga, A., Alvarado, Y., Sosa, R., Santana, I. & Mesa, J. R. (2019). Modelo de crecimiento y desarrollo de hortalizas en casas de cultivo mediante mapas cognitivos difusos [Growth and development model of vegetables in greenhouses using fuzzy cognitive maps]. Revista Cubana de Ciencias Informáticas, 13(2), 47-60.

Mendonça, M., Chrun, I. R., Neves Jr, F. & Arruda, L. V. (2017). A cooperative architecture for swarm robotic based on dynamic fuzzy cognitive maps. Engineering Applications of Artificial Intelligence, 59, 122-132.

Polydoros, A. S. & Nalpantidis, L. (2017). Survey of model-based reinforcement learning: Applications on robotics. Journal of Intelligent & Robotic Systems, 86(2), 153-173.

Sewak, M. (2019). Q-Learning in Code. In Deep Reinforcement Learning (pp. 65-74): Springer.

Sosa, R., Alfonso, A., Nápoles, G., Bello, R., Vanhoof, K. & Nowé, A. (2019). Synaptic Learning of Long-Term Cognitive Networks with Inputs. Paper presented at the 2019 International Joint Conference on Neural Networks (IJCNN).

Sutton, R. S. & Barto, A. G. (2018). Reinforcement learning: An introduction. MIT Press.

Topol, E. J. (2019). High-performance medicine: the convergence of human and artificial intelligence. Nature Medicine, 25(1), 44-56.

Venkatasubramanian, V. (2019). The promise of artificial intelligence in chemical engineering: Is it here, finally? AIChE Journal, 65(2), 466-478. doi:10.1002/aic.16489

Wu, L., Tian, F., Qin, T., Lai, J. & Liu, T.Y. (2018). A study of reinforcement learning for neural machine translation. arXiv preprint arXiv:.08866.

Yin, S., Li, X., Gao, H. & Kaynak, O. (2014). Data-based techniques focused on modern industry: An overview. IEEE Transactions on Industrial Electronics, 62(1), 657-667.

Yousefi, F. & Amoozandeh, Z. (2016). Statistical mechanics and artificial intelligence to model the thermodynamic properties of pure and mixture of ionic liquids. Chinese Journal of Chemical Engineering, 24(12), 1761-1771.

Zhang, D., Han, X. & Deng, C. (2018). Review on the research and practice of deep learning and reinforcement learning in smart grids. CSEE Journal of Power and Energy Systems, 4(3), 362-370.

Published

2021-03-07

How to Cite

Santana Ching, I., Barreiros, A., & Sosa, R. (2021). Reinforcement Learning algorithm for modelling software based on fuzzy cognitive maps. Revista Cubana De Transformación Digital, 2(1), 66–78. Retrieved from https://rctd.uic.cu/rctd/article/view/97

Issue

Section

Original paper