Evolving and Training a Neural Network to Play the Dama Board Game Using the NEAT Algorithm

Banaz Anwer Qader, Kamal H. Jihad, Mohammed Rashad Baker

Abstract


Neuroevolutionary algorithms such as NeuroEvolution of Augmenting Topologies (NEAT), a Machine Learning (ML) method, are increasingly used to train agents that play computer games, driven by growing research in Artificial Intelligence (AI). NEAT is a genetic algorithm that evolves both the topology and the weights of artificial neural networks. In this paper, a Dama board game is designed, and the NEAT algorithm is implemented to evolve and train populations of neural networks that play the game efficiently. Different input and output encodings are used for the network, and various network sizes are tried so that play can reach or surpass the human level. The aim is to obtain a neural network that plays Dama like a human, or close to it, by training different neural networks over many generations. The experimental results show that the neural networks were trained for several thousand generations and played more than one million games. It is concluded that providing the network with more input information improves the learning process, and that a particular set of NEAT parameter values works well for large neural networks such as those used in this paper.
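For orientation, the following minimal sketch shows how a NEAT training loop of this kind is typically organized. It is not the authors' implementation: it assumes the open-source neat-python library, a hypothetical DamaBoard class with placeholder move generation, a hypothetical neat_dama.cfg configuration file, one input per board square, and a toy fitness based on game outcomes; the paper itself compares several input/output encodings, network sizes, and parameter settings.

import random
import neat  # neat-python package (an assumption; the paper does not name its implementation)

class DamaBoard:
    """Hypothetical stand-in for an 8x8 Dama (Turkish draughts) board."""
    def __init__(self):
        # +1 for the evolving player's pieces, -1 for the opponent, 0 for empty squares.
        self.squares = [0] * 64

    def encode(self):
        # One network input per square; the paper experiments with different encodings.
        return self.squares

    def legal_moves(self):
        # Placeholder: a real implementation would generate legal Dama moves here.
        return list(range(10))

    def play(self, move):
        pass  # apply the chosen move to the board

    def result(self):
        # Placeholder outcome: win = 1, draw = 0, loss = -1.
        return random.choice([1, 0, -1])

def eval_genomes(genomes, config):
    # Fitness = accumulated game outcomes of the network built from each genome.
    for genome_id, genome in genomes:
        net = neat.nn.FeedForwardNetwork.create(genome, config)
        score = 0
        for _ in range(5):              # a few games per genome
            board = DamaBoard()
            for _ in range(50):         # cap the game length
                outputs = net.activate(board.encode())
                moves = board.legal_moves()
                # choose the legal move rated highest by the network
                move = max(moves, key=lambda m: outputs[m % len(outputs)])
                board.play(move)
            score += board.result()
        genome.fitness = score

config = neat.Config(neat.DefaultGenome, neat.DefaultReproduction,
                     neat.DefaultSpeciesSet, neat.DefaultStagnation,
                     "neat_dama.cfg")   # hypothetical configuration file
population = neat.Population(config)
population.add_reporter(neat.StdOutReporter(True))
winner = population.run(eval_genomes, 300)   # evolve for 300 generations

In a full implementation, legal_moves and play would encode the actual Dama rules and eval_genomes would pit networks against each other or progressively stronger opponents; repeated over the thousands of generations reported in the paper, such evaluation accounts for the more than one million games played.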







DOI: https://doi.org/10.31449/inf.v46i5.3897

This work is licensed under a Creative Commons Attribution 3.0 License.