Residual-based visual SLAM algorithm for dynamic target tracking in scene flow
DOI:
CSTR:
Author:
Affiliation:

College of Intelligent Manufacturing Modern Industry (School of Mechanical Engineering), Xinjiang University, Urumqi 830017, China

Author biography:

Corresponding author:

CLC number:

TP391.4; TN.9

Fund project:

Supported by the Natural Science Foundation of Xinjiang Uygur Autonomous Region (2022D01C673)




    Abstract:

    Most existing dynamic simultaneous localization and mapping (SLAM) algorithms simply remove dynamic objects, discarding motion information that could aid the system's own localization and navigation; this limits them in complex, ever-changing industrial environments. This paper proposes an improved visual SLAM algorithm for target tracking that performs localization while obtaining a more accurate estimate of each object's pose. The algorithm uses background points for self-localization, refining the optical flow information to reduce the effect of noise, and then combines scene flow information with polynomial residuals to obtain accurate dynamic object perception and reduce the error in object pose estimation. Finally, the proposed algorithm is evaluated on the public KITTI Tracking dataset and in real scenes. Experimental results show that on the public dataset the proposed algorithm achieves an average relative rotation error (RPER) of 0.027° and an average relative translation error (RPET) of 0.069 m for self-localization; for object pose estimation, the average rotation error is 0.686 97° and the average translation error is 0.103 50 m, demonstrating better self-localization and dynamic object tracking performance. The proposed algorithm also shows good localization and tracking performance in real scenarios.
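    The abstract describes separating dynamic objects from the static background via scene flow residuals after ego-motion compensation. The paper's own implementation is not reproduced here; the following minimal NumPy sketch only illustrates the general idea, under assumed inputs: `P_t`/`P_t1` are matched 3D points (e.g. from depth and optical flow), `R`/`t` is the estimated camera motion, and the function name and 0.10 m threshold are hypothetical, not taken from the paper.

```python
import numpy as np

def classify_dynamic_points(P_t, P_t1, R, t, thresh=0.10):
    """Label points as dynamic when their ego-motion-compensated
    scene flow residual exceeds a distance threshold (in metres).

    P_t, P_t1 : (N, 3) matched 3D points in the camera frames at
                times t and t+1 (hypothetical inputs).
    R, t      : estimated camera rotation (3x3) and translation (3,)
                from frame t to frame t+1.
    """
    # Where each point would be at t+1 if it were static and only
    # the camera had moved:
    P_pred = (R @ P_t.T).T + t
    # Scene flow residual: observed motion minus ego-motion prediction.
    residual = np.linalg.norm(P_t1 - P_pred, axis=1)
    return residual > thresh  # True = dynamic point
```

    Points on the static background yield residuals near zero once the camera motion is compensated, so a simple magnitude threshold suffices in this sketch; the paper's polynomial-residual refinement is a more elaborate version of this test.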

Cite this article:

LIU Zefeng, RAN Teng, XIAO Wendong, YUAN Liang. Residual-based visual SLAM algorithm for dynamic target tracking in scene flow[J]. Electronic Measurement Technology, 2025, 48(6): 38-44.

History
  • Received:
  • Revised:
  • Accepted:
  • Online publication date: 2025-05-08
  • Publication date: