Ego-motion compensation in the Grid-based Tracking in Urban Environment example

Hello support team,
I am currently working with the grid-based tracker (trackerGridRFS) and have already been exploring the Grid-based Tracking in Urban Environment example.
To understand in more detail what happens inside the tracking algorithm, I was asking myself where the ego-vehicle motion compensation takes place. As far as I understand, the grid has to be transformed before it can be updated with new measurements (e.g., translated and rotated according to the vehicle's movement). I also searched the support classes such as "MeasurementEvidenceMap" but could not find it.
Thanks in advance for any hints and help!
Best regards,
Steffen

Accepted Answer

Prashant Arora on 12 Nov 2020
Hi Steffen,
The grid-based tracker (trackerGridRFS) estimates a local or ego-centric dynamic occupancy grid map, i.e., the dynamic occupancy grid map is always aligned with the current position and orientation of the ego vehicle. To estimate the dynamic grid map from sensor-level measurements, the tracker mainly needs two transforms. The first transform accounts for the position and orientation of the sensor with respect to the ego vehicle or grid. The second transform accounts for the position and orientation of the ego vehicle with respect to the world or scenario frame.
The tracker allows you to supply both of these transforms at each step using the sensor configurations input. See the SensorConfigurations and HasSensorConfigurationsInput properties of the tracker.
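As a minimal sketch (the lidar limits, resolution, and grid dimensions below are illustrative placeholders, not the example's exact code), this is how the configurations input can be wired up:

sensorConfig = trackingSensorConfiguration(1, ...       % sensor index 1
    'IsValidTime',true, ...                              % configuration is refreshed every step
    'SensorLimits',[-180 180; 0 100], ...                % azimuth limits (deg); range limits (m)
    'SensorResolution',[1; 100], ...                     % azimuth resolution (deg); range resolution (m)
    'DetectionProbability',0.95);

tracker = trackerGridRFS('SensorConfigurations',{sensorConfig}, ...
    'HasSensorConfigurationsInput',true, ...             % accept updated configurations at each call
    'GridLength',120,'GridWidth',120,'GridResolution',2);

% At each time step the tracker is then called with the sensor data plus the
% configurations that carry the current sensor-to-ego and ego-to-scenario
% transforms, e.g. tracks = tracker(sensorData,{updatedConfig},time);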
In the example, this input is calculated by the helper function helperGetLidarConfig. The helper uses ground-truth information about the ego vehicle to compute the transforms. In real-world systems, this information about the ego position and orientation is typically obtained from INS filters.
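For illustration, here is a sketch of how the two transforms might be assembled into such a configuration, in the spirit of helperGetLidarConfig (the mounting position, ego pose values, and the string form of the Frame field are assumptions; on a real vehicle the ego pose would come from the INS solution rather than ground truth):

% Sensor mounting: sensor frame with respect to the ego frame (fixed over time).
senToEgo = struct('Frame','Rectangular', ...
    'OriginPosition',[3.7; 0; 1.1], ...                  % assumed mounting position (m)
    'Orientation',rotmat(quaternion([0 0 0],'eulerd','ZYX','frame'),'frame'), ...
    'IsParentToChild',true);

% Ego pose: ego frame with respect to the scenario/world frame (updated every step).
egoPosition = [10; 5; 0];                                % assumed ego position (m)
egoOrientation = [30 0 0];                               % assumed ego yaw/pitch/roll (deg)
egoToScenario = struct('Frame','Rectangular', ...
    'OriginPosition',egoPosition, ...
    'Orientation',rotmat(quaternion(egoOrientation,'eulerd','ZYX','frame'),'frame'), ...
    'IsParentToChild',true);

% Both transforms travel with the configuration passed to the tracker each step.
config = trackingSensorConfiguration(1, ...
    'IsValidTime',true, ...
    'SensorTransformParameters',[senToEgo; egoToScenario]);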
The tracker does the following for motion compensation (a small numeric sketch of the coordinate transform follows the list):
  1. The tracker estimates the grid using a particle filter. The states of the particles are represented in the world coordinate frame (thus allowing state estimation in a global sense).
  2. The tracker projects the particles to the local grid using the ego-to-scenario transformation.
  3. The tracker projects the sensor data to the local grid using the sensor-to-ego transformation.
  4. Both particle data and sensor data get fused at the local grid level.
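For intuition, here is a small 2-D sketch (not the tracker's internal code) of the ego-to-scenario relationship used in steps 2 and 3, expressing a world-frame point in the ego-aligned grid frame for an assumed ego pose:

egoPos = [10; 5];                        % assumed ego position in the scenario frame (m)
egoYaw = deg2rad(30);                    % assumed ego heading in the scenario frame (rad)
R = [cos(egoYaw) -sin(egoYaw); ...       % rotation from the ego frame to the scenario frame
     sin(egoYaw)  cos(egoYaw)];

pWorld = [20; 12];                       % e.g. a particle position kept in the scenario frame
pLocal = R.'*(pWorld - egoPos);          % the same point expressed in the ego-aligned grid frame

% As the ego vehicle moves, egoPos and egoYaw change, so the same world-fixed
% point maps to a different grid cell; this is the motion compensation of the grid.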
Hope this helps.
Thanks,
Prashant
  2 Comments
Steffen Keller on 13 Nov 2020
Hello Prashant,
thanks a lot for this detailed answer. That was exactly what I was looking for. So this tracker needs the absolute position of the ego vehicle with respect to the global coordinate system's origin rather than movement parameters (x-/y-speed and orientation), right?
Best regards,
Steffen
Prashant Arora on 13 Nov 2020
Edited: Prashant Arora on 13 Nov 2020
You are right, Steffen.
Any technique outside the tracker can be used to calculate this information, ranging from simple dead reckoning to INS filtering using IMU/GPS/cameras. The results will of course vary depending on the accuracy of this information.
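As a rough illustration of the simplest end of that range (assumed measurement values, not a recommendation), here is a dead-reckoning loop that integrates speed and yaw rate into the ego pose that would then feed the ego-to-scenario transform:

dt      = 0.1;                 % time step (s)
speed   = 10;                  % measured longitudinal speed (m/s), assumed constant here
yawRate = deg2rad(5);          % measured yaw rate (rad/s), assumed constant here

pos = [0; 0];                  % ego position in the scenario frame
yaw = 0;                       % ego heading in the scenario frame

for k = 1:50
    yaw = yaw + yawRate*dt;                       % propagate heading
    pos = pos + speed*dt*[cos(yaw); sin(yaw)];    % propagate position along the heading
end
% pos and yaw would be used to build the ego-to-scenario transform at each step;
% integration errors accumulate, which is why aided INS filtering is usually preferred.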
Thanks,
Prashant


More Answers (0)

Version

R2020b
