Evolving recurrent neural network controllers by incremental fitness shaping

Akinci, Kaan and Philippides, Andrew (2019) Evolving recurrent neural network controllers by incremental fitness shaping. ALIFE 2019, Newcastle upon Tyne, UK, 29 July - 2nd August, 2019. Published in: Bacardit, Jaume, Fellermann, Harold, Goñi-Moreno, Angel and Füchslin, Rudolf Marcel, (eds.) Proceedings of the ALIFE Conference 2019 (ALIFE 2019): A Hybrid of the European Conference on Artificial Life (ECAL) and the International Conference on the Synthesis and Simulation of Living Systems (ALIFE). 416-423. The MIT Press



Time-varying artificial neural networks are commonly used for dynamic problems such as game controllers and robotics, as they give the controller a memory of previous states; this matters because actions taken in earlier states can influence the agent's final success. Because of this temporal dependence, methods such as back-propagation can be difficult to use to optimise network parameters, and so genetic algorithms (GAs) are often used instead. While recurrent neural networks (RNNs) are commonly evolved with GAs, long short-term memory (LSTM) networks have received less attention. Since LSTM networks have a wide range of temporal dynamics, in this paper we evolve an LSTM network as a controller for a lunar lander task with two evolutionary algorithms: a steady-state GA (SSGA) and an evolution strategy (ES). Due to the presence of a large local optimum in the fitness landscape, we added an incremental fitness-shaping scheme to both evolutionary algorithms. We also compare the behaviour and evolutionary progress of the LSTM with that of an RNN evolved via NEAT and ES with the same fitness function. LSTMs proved evolvable on such tasks, though the SSGA solution was outperformed by the RNN. However, despite using an incremental scheme, the ES developed solutions far better than both, showing that ES can be used both with incremental fitness shaping and for evolving LSTMs and RNNs on dynamic tasks.
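The combination the abstract describes, an evolution strategy whose fitness function is made incrementally harder as sub-goals are met, can be sketched in a few lines. The sketch below is illustrative only and is not the paper's implementation: the lander simulation is replaced by a toy objective over a flat weight vector (standing in for LSTM parameters), and all names, population sizes, and the stage-switch threshold are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
TARGET = rng.normal(size=8)  # toy stand-in for "good" controller weights


def shaped_fitness(w, stage):
    """Incremental fitness shaping (hypothetical staging): stage 0 scores
    only a coarse sub-goal, stage 1 the full objective."""
    if stage == 0:
        return -np.sum((w[:4] - TARGET[:4]) ** 2)  # coarse sub-goal
    return -np.sum((w - TARGET) ** 2)              # full objective


def evolve(dim=8, pop=40, elite=10, sigma=0.3, gens=200):
    """Simple (mu, lambda)-style ES with a staged fitness function."""
    mean, stage = np.zeros(dim), 0
    for _ in range(gens):
        # sample offspring around the current mean
        offspring = mean + sigma * rng.normal(size=(pop, dim))
        scores = np.array([shaped_fitness(w, stage) for w in offspring])
        # recombine the elite individuals into the new search mean
        mean = offspring[np.argsort(scores)[-elite:]].mean(axis=0)
        sigma *= 0.99  # gradually narrow the search
        if stage == 0 and scores.max() > -0.05:
            stage = 1  # sub-goal solved: switch to the full fitness
    return mean, stage
```

Once the coarse sub-goal is reliably met, the scheme swaps in the harder objective, which is the mechanism used here to escape the large local optimum in the fitness landscape.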

Item Type: Conference Proceedings
Schools and Departments: School of Engineering and Informatics > Informatics
Research Centres and Groups: Centre for Computational Neuroscience and Robotics
Related URLs:
Depositing User: Lucy Arnold
Date Deposited: 05 Jun 2019 14:12
Last Modified: 08 Aug 2019 10:57
URI: http://sro.sussex.ac.uk/id/eprint/84097
