How to add custom environment for Reinforcement learning toolbox?
shahin sarhan
on 25 Oct 2023
Commented: Emmanouil Tzorakoleftherakis
on 28 Oct 2023
I want to build a 3D environment representing a neighborhood, with some blocks as the buildings. I want to use this model as the environment in the toolbox for the agent to interact with and find the shortest path. How should I do that? I'm having difficulty defining this using classes.
0 comments
Answers (1)
Emmanouil Tzorakoleftherakis
on 27 Oct 2023
Why do you need a 3D world for this problem? Unless you actually use the z dimension (e.g. if you are planning for UAVs), you only need a 2D environment. Even if you were to build one for visualization, I wouldn't recommend training in the 3D world, since it would only make training slower. I would start with a grid world or an occupancy grid that you can tailor to match the 3D world you have in mind.
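For example, here is a minimal grid-world sketch, assuming Reinforcement Learning Toolbox; the grid size, obstacle cells ("buildings"), start/goal cells, and reward values are placeholders to adapt to your neighborhood layout, and the updateStateTranstionForObstacles call follows the grid-world documentation examples:
% Minimal sketch: obstacle states stand in for buildings,
% the terminal state is the goal location
GW = createGridWorld(10,10);                           % placeholder grid size
GW.CurrentState   = "[1,1]";                           % start cell
GW.TerminalStates = "[10,10]";                         % goal cell
GW.ObstacleStates = ["[3,3]";"[3,4]";"[7,6]";"[8,6]"]; % "buildings"
updateStateTranstionForObstacles(GW);                  % block moves into obstacles
% Small step penalty encourages short paths; large reward at the goal
nS = numel(GW.States);
nA = numel(GW.Actions);
GW.R = -1*ones(nS,nS,nA);
GW.R(:,state2idx(GW,GW.TerminalStates),:) = 10;
env = rlMDPEnv(GW);   % wrap the grid world as an RL environment
plot(env)             % visualize start, goal, and obstacles
You can then pass env to train with any discrete-action agent (e.g. Q-learning or DQN).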
2 comments
Emmanouil Tzorakoleftherakis
on 28 Oct 2023
You can also create 3D occupancy maps like this:
https://www.mathworks.com/help/uav/ug/generate-random-3-d-occupancy-map-for-uav-motion-planning.html
and turn them into RL training environments by following the example I shared earlier.
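As a rough sketch of that idea, assuming Navigation Toolbox for occupancyMap3D and Reinforcement Learning Toolbox for rlFunctionEnv; the building location, movement rules, and reward values below are illustrative placeholders, and localStep is a hypothetical helper name:
% Discretize the neighborhood as a 3-D occupancy map and expose it to the
% agent through a function-based environment (an alternative to subclassing)
map3D = occupancyMap3D(1);                    % 1 cell per meter
[bx,by,bz] = meshgrid(3:5, 3:5, 0:10);        % one "building" block of voxels
setOccupancy(map3D, [bx(:) by(:) bz(:)], 1);  % mark those voxels as occupied
start = [1 1 1];
goal  = [9 9 1];
% Observation: agent position; actions: unit moves along x/y/z
obsInfo = rlNumericSpec([3 1]);
actInfo = rlFiniteSetSpec({[1 0 0],[-1 0 0],[0 1 0],[0 -1 0],[0 0 1],[0 0 -1]});
resetFcn = @() deal(start', struct("Position",start));
stepFcn  = @(action,logged) localStep(action,logged,map3D,goal);
env = rlFunctionEnv(obsInfo,actInfo,stepFcn,resetFcn);
function [obs,reward,isDone,logged] = localStep(action,logged,map3D,goal)
    newPos = logged.Position + action;
    if checkOccupancy(map3D,newPos) > 0   % moved into a building voxel
        reward = -10;
        newPos = logged.Position;         % stay in place
    else
        reward = -1;                      % step cost encourages short paths
    end
    isDone = isequal(newPos,goal);
    if isDone
        reward = reward + 100;            % goal bonus
    end
    logged.Position = newPos;
    obs = newPos';
end
This is the function-based alternative to writing a class; if you prefer classes, the equivalent is to subclass rl.env.MATLABEnvironment and put the same logic in its reset and step methods.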