Using RL, how can I train multiple agents so that each agent navigates from its initial position to its goal position while avoiding collisions?
Let's assume there is a set of agents spread through 3D Cartesian space. A trajectory should be generated for each agent such that, if the agent follows its trajectory toward its goal waypoint, no collision occurs with the other agents. Any guidance on how to solve such a task would be highly appreciated.
0 comments
Answers (1)
Emmanouil Tzorakoleftherakis
on 5 Mar 2021
Edited: Emmanouil Tzorakoleftherakis on 5 Mar 2021
It's possible that the scenario you described can be solved by training a single agent and then "deploying" that trained agent to all UAVs/UUVs in your fleet. That would make the problem simpler and less expensive to train. For a 2D example, take a look at this.
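The train-once, deploy-to-all idea can be sketched as follows. This is a hypothetical illustration in Python, not MATLAB Reinforcement Learning Toolbox code: `policy` stands in for the trained agent (here a simple goal-seeking controller), and `step_fleet` shows the key point that the same policy is evaluated separately on each agent's own observation.

```python
import numpy as np

def policy(observation):
    # Stand-in for a trained RL policy. Here: a hypothetical proportional
    # controller that outputs a unit-velocity command toward the goal.
    to_goal = observation["goal"] - observation["position"]
    dist = np.linalg.norm(to_goal)
    return to_goal / dist if dist > 1e-9 else np.zeros(3)

def step_fleet(positions, goals, dt=0.1):
    """Advance every agent one step using the SAME shared policy."""
    new_positions = []
    for pos, goal in zip(positions, goals):
        obs = {"position": pos, "goal": goal}   # per-agent observation
        action = policy(obs)                    # shared, fleet-wide policy
        new_positions.append(pos + dt * action)
    return new_positions
```

Because the policy only sees egocentric observations, one trained network generalizes to any fleet size without retraining.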
3 comments
Emmanouil Tzorakoleftherakis
on 6 Mar 2021
I think it's a matter of what inputs you provide to the policy and which coordinate system you use (although I was assuming a scenario where each agent has its own sensors). If you only have odometry data from all agents, you could transform it into the distance to each nearby agent (probably also including heading/bearing) and feed all of this information into the policy.
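One way to build such egocentric observation features from fleet odometry is sketched below. This is an illustrative Python sketch, not code from the thread; the function name, the yaw-only heading model, and the `sensing_radius` cutoff are all assumptions.

```python
import numpy as np

def relative_observation(ego_pos, ego_yaw, other_positions, sensing_radius=10.0):
    """For each agent within sensing_radius of the ego agent, compute
    (distance, azimuth, elevation) in the ego agent's frame.

    Assumes ego heading is a yaw angle about the world z-axis."""
    features = []
    for p in other_positions:
        offset = np.asarray(p, dtype=float) - np.asarray(ego_pos, dtype=float)
        dist = np.linalg.norm(offset)
        if 0.0 < dist <= sensing_radius:
            # Azimuth relative to the ego heading, wrapped to [-pi, pi]
            azimuth = np.arctan2(offset[1], offset[0]) - ego_yaw
            azimuth = (azimuth + np.pi) % (2 * np.pi) - np.pi
            # Elevation above the ego's horizontal plane
            elevation = np.arcsin(offset[2] / dist)
            features.append((dist, azimuth, elevation))
    return features
```

Feeding distance/bearing rather than raw world-frame positions makes the observation translation- and rotation-invariant, which is what lets one policy work from any agent's point of view.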