Implementing NEAT algorithm in Simulink
I am trying to modify the walking robot example to use the NEAT algorithm, but I don't see any way of building an evolving neural network using the reinforcement learning toolbox. I was thinking that maybe somehow I could incorporate Python-NEAT? I am new so I'm not really sure what's possible.
Answers (1)
Tushar Sharma
19 March 2024
Hi Allegra,
I assume that you are referring to 'msra-walking-robot' - https://github.com/mathworks/msra-walking-robot.
The algorithm used in this case is Deep Deterministic Policy Gradient (DDPG), chosen for the following reasons:
- It can learn in environments with continuous action spaces
- Its deterministic policy tends to be faster to learn than a stochastic one
If you want to write your own environment and training algorithm, the documents listed below serve as a good starting point:
- https://www.mathworks.com/help/reinforcement-learning/matlab-environments.html
- https://www.mathworks.com/help/reinforcement-learning/training-and-validation.html
Also, if you want to incorporate NEAT-Python, you can leverage MATLAB's ability to call Python functions to interact with the NEAT-Python implementation directly. In this setup, MATLAB runs the Simulink simulation and drives the evolutionary loop: each time a genome's fitness needs to be evaluated, MATLAB calls into your NEAT-Python code, runs the corresponding simulation, and feeds the resulting reward back as that genome's fitness.
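The fitness-evaluation loop described above can be sketched in Python. This is a stdlib-only sketch under stated assumptions: the function names (`simulate`, `evaluate_population`) and the genome representation are hypothetical, and in a real setup neat-python's `Population.run` would drive the loop while `simulate` would instead trigger a rollout of the Simulink walking-robot model.

```python
# Hypothetical sketch of the fitness-evaluation loop described above.
# In a real setup, neat-python's Population.run would drive this, and
# simulate() would run the Simulink model rather than a toy function.

def simulate(weights):
    """Stand-in for one Simulink rollout; returns a scalar reward."""
    return sum(w * w for w in weights)  # placeholder fitness, not a real simulation

def evaluate_population(population):
    """Assign a fitness to every genome, as a NEAT eval function must."""
    fitnesses = {}
    for genome_id, weights in population.items():
        fitnesses[genome_id] = simulate(weights)
    return fitnesses

scores = evaluate_population({1: [0.5, -1.0], 2: [2.0, 0.0]})
print(scores)  # {1: 1.25, 2: 4.0}
```

The key design point is that fitness evaluation is the only place the two worlds meet, so it is the one function you need to bridge between MATLAB and Python.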
Here are the steps to follow:
- Ensure Python is installed.
- Next, install NEAT-Python by running `pip install neat-python`.
- Finally, set up MATLAB to utilize Python by configuring the correct Python environment with the `pyenv` function.
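Once `pyenv` points MATLAB at the interpreter where NEAT-Python was installed, MATLAB can call any module on that interpreter's path via the `py.` prefix (e.g. `py.neat_bridge.ping()`). A minimal, hypothetical bridge module for sanity-checking the setup might look like this; the module name `neat_bridge` and its functions are assumptions for illustration:

```python
# neat_bridge.py -- hypothetical module that MATLAB could call as
# py.neat_bridge.ping() after pyenv is configured to use this interpreter.
import sys

def ping():
    """Quick sanity check that MATLAB is talking to the intended interpreter."""
    return "Python %d.%d" % (sys.version_info[0], sys.version_info[1])

def neat_available():
    """Report whether the neat-python package is importable here."""
    try:
        import neat  # noqa: F401  (installed via `pip install neat-python`)
        return True
    except ImportError:
        return False

if __name__ == "__main__":
    print(ping(), "| neat installed:", neat_available())
```

Calling these two functions from the MATLAB command window is a quick way to confirm that `pyenv` resolved the right environment before wiring up the full training loop.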