Problem in Visual SLAM algorithm

7 views (last 30 days)
Sheraz Shamim on 29 Jul 2023
Answered: Darshak on 23 Jun 2025
Hi,
I am using a stereo camera, GPS, IMU, and a laser scanner for pose estimation on a moving vehicle. When I run the example code on my custom data from a normal traffic road scene, some iterations do not complete because of the error 'max number of trials reached, decreasing the desired confidence.' I also created a bag of features for my dataset. I ran into the following problems:
  1. In some scenes, vSLAM fails because no features are found, so I discard the complete dataset and record a new scene.
  2. Because of moving cars and bikes near and far from the stereo camera, the complete travel distance and poses are not calculated for some images, and the algorithm terminates there.
  3. The graph does not show the complete trajectory (indicated with a red line), although the code is still running and the left/right images are still being compared.
  4. The scale factor is 1.2. Is it fixed for every test, or how should I choose a value for my test?
  5. Which parameters need to be tweaked for my data? In the example, numSkipFrames = 5 and numPointsKeyFrame = 80 are set.

Answers (1)

Darshak on 23 Jun 2025
That's a great setup for pose estimation: combining stereo cameras, IMU, GPS, and LiDAR gives you strong spatial understanding and reliable localization. The fusion of these sensors provides good potential for accuracy, but urban traffic scenes often introduce complexities due to dynamic objects, occlusions, and low-texture regions. It is not uncommon to encounter dropped frames or early termination in visual SLAM pipelines because of such issues.
The following points might be helpful to resolve the issue:
  • RANSAC failure: "max number of trials reached, decreasing the desired confidence" - This warning typically appears when the RANSAC process cannot find enough inliers at the current confidence level or "MaxDistance". Instead of ignoring the warning or discarding the scene, consider tweaking the RANSAC settings in your relative pose estimation function.
[E, inlierIdx] = estimateEssentialMatrix(matched1, matched2, cameraParams, ...
    "Confidence", 50, ...  % percentage of confidence (default is 99)
    "MaxDistance", 5);     % inlier distance threshold (default is 0.1)
This could help relax the constraints and avoid early termination of the pose estimation pipeline.
  • Scenes with insufficient features - In dynamic environments, moving vehicles, pedestrians, or poor textures can lead to few detectable and trackable features. It might help to:
Increase the number of ORB features per frame using "MaxNumPoints" in “stereovslam”.
Adjust the “scaleFactor” (e.g., from 1.2 to 1.4) and “numLevels” in your feature extraction function, if you are using a custom ORB or helper function setup. This helps make the detection more robust across different scales and motion.
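As a concrete starting point, here is a sketch of those settings on the "stereovslam" object (requires a release that ships "stereovslam"; the intrinsics and baseline below are placeholder values — substitute your own stereo calibration):

```matlab
% Placeholder calibration values -- replace with your calibrated ones.
intrinsics = cameraIntrinsics([800 800], [640 360], [720 1280]);
baseline   = 0.12;  % stereo baseline in meters (assumed)

vslam = stereovslam(intrinsics, baseline, ...
    MaxNumPoints = 2000, ... % extract more ORB features per frame
    ScaleFactor  = 1.4, ...  % coarser pyramid spacing
    NumLevels    = 8);       % depth of the ORB scale pyramid
```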
  • Trajectory not completing (red line stops) - The graph stopping usually means that the SLAM system failed to track a new keyframe, not that the process itself stopped. You could try tuning:
"numSkipFrames" – controls how often key frames are created. Try lowering it from 5 to 3.
"numPointsKeyFrame" – determines how many points must be tracked to qualify as a keyframe. Increasing this to 100–120 might help, or reducing it if scenes are too sparse.
These values depend heavily on your scene complexity and the feature richness of your environment.
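In the example-script style of setup, that tuning is just a matter of changing the two variables before the main loop (the values here are one plausible starting point for a feature-rich road scene, not a recommendation):

```matlab
numSkipFrames     = 3;   % example default is 5: create keyframes more often
numPointsKeyFrame = 100; % example default is 80: require more tracked points
```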
  • Scale factor of 1.2 - This is not the map or pose scale but the ORB feature scale pyramid factor. It is not fixed across scenes and can be tuned based on scene variability. A larger value reduces the number of pyramid levels and could help in highly textured or large-scale scenes.
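To see what the pyramid factor controls in isolation, you can compare detections on a single frame (the file name is a placeholder):

```matlab
I = im2gray(imread("left_0001.png"));  % hypothetical left-camera frame
ptsFine   = detectORBFeatures(I, ScaleFactor=1.2, NumLevels=8);
ptsCoarse = detectORBFeatures(I, ScaleFactor=1.4, NumLevels=8);
% A larger ScaleFactor spaces the pyramid levels further apart, so the
% same NumLevels spans a wider range of scales with less overlap.
```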
  • Algorithm still running but graph not updating - This is expected behavior in some of the MATLAB vSLAM demos if new keyframes are not being generated. You can check the logic inside your loop and plot statements. Also, make sure loop closure settings like LoopClosureThreshold are appropriate for your scene scale.
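If your release exposes it, the loop-closure sensitivity can be set when the vSLAM object is constructed (the property name here is assumed to match the "monovslam" interface, and the calibration values are placeholders):

```matlab
intrinsics = cameraIntrinsics([800 800], [640 360], [720 1280]); % placeholder
vslam = stereovslam(intrinsics, 0.12, ...  % baseline in meters (assumed)
    LoopClosureThreshold = 60);  % min bag-of-features score for a loop closure
```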
  • GPS/IMU Integration - Since you are using GPS and IMU as well, you may want to consider incorporating them into your estimation, either to aid visual tracking (IMU integration during poor visibility) or for trajectory correction. Although this is more advanced, fusing IMU using “factorGraph” and “poseGraph” objects can help where visual SLAM fails.
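As a minimal illustration of the pose-graph side (Navigation Toolbox; the relative poses and information matrix below are toy values, not derived from your sensors):

```matlab
pg = poseGraph3D;
% Odometry edges from visual SLAM as [x y z qw qx qy qz] relative poses.
addRelativePose(pg, [1 0   0 1 0 0 0]);
addRelativePose(pg, [1 0.1 0 1 0 0 0]);
% A GPS- or loop-closure-derived constraint between node 1 and node 3,
% using the default compact (upper-triangular) information matrix.
infomat = [1 0 0 0 0 0 1 0 0 0 0 1 0 0 0 1 0 0 1 0 1];
addRelativePose(pg, [2 0 0 1 0 0 0], infomat, 1, 3);
pgOptimized = optimizePoseGraph(pg);  % corrected trajectory
```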
You may find it helpful to start with the MATLAB visual SLAM examples and the "stereovslam" documentation page.
Overall, try iterating with small changes in your parameters and visualize the trajectory at each step. Real-world datasets often need a lot of per-scene tuning, especially when dynamics and occlusions are involved.

Version

R2022b
