How to establish a Reinforcement Learning setup between Simulink and ROS2 Gazebo?
Hi,
I am trying to establish communication between a Reinforcement Learning (RL) agent block defined in Simulink and the Gazebo simulator, which responds to ROS2 action calls. I am using a ROS2 Call Service block in Simulink, and a Python script in the middle calls the appropriate action for each service request.
In this setup, I want to ensure that every action of the RL agent is passed as a discrete service request, and that each request returns with a response.
However, as soon as I begin the simulation, service requests are placed continuously, even before the RL agent produces its next action.
I need help controlling the rate at which service requests are placed; in particular, I want the RL agent to dictate when each service request is made.
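For reference, here is a minimal sketch of what that Python bridge node could look like, assuming placeholder names (/rl_step, /gazebo_move, std_srvs/Trigger, control_msgs/FollowJointTrajectory) that you would replace with your own service and action interfaces. The service callback only returns once the action result arrives, so each request corresponds to exactly one completed action:

# Minimal sketch of the Python bridge node, assuming placeholder interface
# names: a '/rl_step' service (std_srvs/Trigger here; a real setup would use a
# custom type that carries the RL action) and a '/gazebo_move' action
# (control_msgs/FollowJointTrajectory here, purely as an example).
import threading

import rclpy
from rclpy.action import ActionClient
from rclpy.callback_groups import ReentrantCallbackGroup
from rclpy.executors import MultiThreadedExecutor
from rclpy.node import Node
from std_srvs.srv import Trigger
from control_msgs.action import FollowJointTrajectory


class RLServiceToActionBridge(Node):
    """Maps each incoming service request to exactly one action goal."""

    def __init__(self):
        super().__init__('rl_service_to_action_bridge')
        # Reentrant group + MultiThreadedExecutor let the action-client
        # callbacks run while the service callback is still blocked.
        self._group = ReentrantCallbackGroup()
        self._action_client = ActionClient(
            self, FollowJointTrajectory, '/gazebo_move',
            callback_group=self._group)
        self._srv = self.create_service(
            Trigger, '/rl_step', self.handle_step, callback_group=self._group)

    def handle_step(self, request, response):
        # The response is not returned until the action has finished, so the
        # Simulink side sees one completed environment step per request.
        goal = FollowJointTrajectory.Goal()
        # ...translate the RL action carried by `request` into goal fields...
        self._action_client.wait_for_server()
        goal_handle = self._wait(self._action_client.send_goal_async(goal))
        if not goal_handle.accepted:
            response.success = False
            response.message = 'goal rejected'
            return response
        result = self._wait(goal_handle.get_result_async())
        response.success = True
        response.message = f'action finished with status {result.status}'
        return response

    def _wait(self, future):
        # Block this callback thread until the future completes; the executor
        # keeps servicing the action client's callbacks on other threads.
        done = threading.Event()
        future.add_done_callback(lambda _: done.set())
        done.wait()
        return future.result()


def main():
    rclpy.init()
    node = RLServiceToActionBridge()
    executor = MultiThreadedExecutor()
    executor.add_node(node)
    try:
        executor.spin()
    finally:
        node.destroy_node()
        rclpy.shutdown()


if __name__ == '__main__':
    main()

The ReentrantCallbackGroup together with the MultiThreadedExecutor is what allows the service callback to block on the action result without deadlocking the node; the rate of requests still has to be limited on the Simulink side, for example by setting the sample time of the ROS2 Call Service block to match the RL agent's step time.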
Answers (1)
Gaurav Bhosale
27 Nov 2023
Hi,
It looks like this issue may be because ROS-Simulink and Gazebo are not synchronized.
We provide Gazebo-Simulink connectivity without ROS, which maintains synchronization between Simulink and Gazebo. You can check the Gazebo Co-Simulation example. Let me know if you need further details.
Thanks