TY - GEN
T1 - A Depth-Added Visual-Inertial Odometry Based on MEMS IMU with Fast Initialization
AU - Zhang, Yuhan
AU - Zhao, Haocheng
AU - Du, Shuang
AU - Yu, Limin
AU - Wang, Xinheng
N1 - Funding Information:
This work was supported in part by the National Natural Science Foundation of China (NSFC) under Grant 52175030, in part by the Special Fund for Key Projects of Xi’an Jiaotong-Liverpool University under Grant KSF-E-64, and in part by the Research and Development Fund of Xi’an Jiaotong-Liverpool University under Grant RDF-19-01-14 and Grant RDF-20-01-15. We would also like to thank Suzhou Inteleizhen Intelligent Technology Co. Ltd. for its financial support.
Publisher Copyright:
© 2022 IEEE.
PY - 2022/12/16
Y1 - 2022/12/16
N2 - Recently, visual-inertial odometry (VIO) has been widely adopted for tracking the movement of robots in simultaneous localization and mapping (SLAM). In this paper, a low-cost and highly accurate depth-added VIO framework is proposed for robots in indoor environments, taking advantage of an RGB-D camera and a micro-electromechanical system (MEMS) inertial measurement unit (IMU). Movement estimation is achieved by IMU pre-integration and visual tracking in this tightly coupled framework. Meanwhile, an empirical IMU model is developed using Allan variance analysis to guarantee the accuracy of the estimated errors. Images with depth information are deployed during initialization to achieve a fast response. Extensive experiments are conducted to validate the effectiveness of the proposed framework by comparing its performance with other advanced schemes in indoor scenarios. The results show that the scale drift error is reduced to 2.6% and the response time of the initialization process is improved by about 124% compared to its counterpart.
AB - Recently, visual-inertial odometry (VIO) has been widely adopted for tracking the movement of robots in simultaneous localization and mapping (SLAM). In this paper, a low-cost and highly accurate depth-added VIO framework is proposed for robots in indoor environments, taking advantage of an RGB-D camera and a micro-electromechanical system (MEMS) inertial measurement unit (IMU). Movement estimation is achieved by IMU pre-integration and visual tracking in this tightly coupled framework. Meanwhile, an empirical IMU model is developed using Allan variance analysis to guarantee the accuracy of the estimated errors. Images with depth information are deployed during initialization to achieve a fast response. Extensive experiments are conducted to validate the effectiveness of the proposed framework by comparing its performance with other advanced schemes in indoor scenarios. The results show that the scale drift error is reduced to 2.6% and the response time of the initialization process is improved by about 124% compared to its counterpart.
KW - Micro-electromechanical systems
KW - RGB-D camera
KW - trajectory estimation
KW - visual-inertial odometry
UR - http://www.scopus.com/inward/record.url?scp=85153683345&partnerID=8YFLogxK
U2 - 10.1109/HCCS55241.2022.10090346
DO - 10.1109/HCCS55241.2022.10090346
M3 - Conference Proceeding
AN - SCOPUS:85153683345
T3 - Proceedings - 2022 International Conference on Human-Centered Cognitive Systems, HCCS 2022
BT - Proceedings - 2022 International Conference on Human-Centered Cognitive Systems, HCCS 2022
PB - Institute of Electrical and Electronics Engineers Inc.
T2 - 2022 International Conference on Human-Centered Cognitive Systems, HCCS 2022
Y2 - 17 December 2022 through 18 December 2022
ER -