Citation: Minghe Cao, Jianzhong Wang. Head motion detection in gaze based aiming[J]. Journal of Beijing Institute of Technology, 2020, 29(1): 9-15. doi: 10.15918/j.jbit1004-0579.19106
[1] Wang Jianzhong, Zhang Guangyue, Wang Hong. Mixed weighted feature method for human eye detection[J]. Transactions of Beijing Institute of Technology, 2019, 39(8): 819-824. (in Chinese)
[2] Nie Xiangrong. The gaze tracking system based on head rotation information and gaze information[D]. Qinhuangdao, Hebei: Yanshan University, 2017. (in Chinese)
[3] Jiang Guangyi. Research on data fusion of head movement and gaze tracking[D]. Xi'an: Xi'an Technological University, 2017. (in Chinese)
[4] Luo Bin, Wang Yongtian, Liu Yue. Multi-sensor data fusion for optical tracking of head pose[J]. Acta Automatica Sinica, 2010, 36(9): 1239-1249.
[5] Wang Shaomei. A method of measurement of human head motion attitude[J]. Journal of Xi'an Technological University, 2011, 31(5): 429-433. (in Chinese)
[6] Miezal M, Bleser G, Stricker D, et al. Towards practical inside-out head tracking for mobile seating bucks[C]//Proceedings of the International Symposium on Mixed and Augmented Reality (ISMAR), Atlanta, USA, 2012.
[7] Gui J, Gu D, Wang S, et al. A review of visual inertial odometry from filtering and optimisation perspectives[J]. Advanced Robotics, 2015, 29: 1289-1301.
[8] Mourikis A I, Roumeliotis S I. A multi-state constraint Kalman filter for vision-aided inertial navigation[C]//IEEE International Conference on Robotics and Automation, IEEE, 2007.
[9] Bloesch M, Omari S, Hutter M, et al. Robust visual inertial odometry using a direct EKF-based approach[C]//IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), IEEE, 2015: 298-304.
[10] Lynen S, Achtelik M W, Weiss S, et al. A robust and modular multi-sensor fusion approach applied to MAV navigation[C]//2013 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), IEEE, 2013.
[11] Falquez J M, Kasper M, Sibley G. Inertial aided dense and semi-dense methods for robust direct visual odometry[C]//2016 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), IEEE, 2016.
[12] Leutenegger S, Lynen S, Bosse M, et al. Keyframe-based visual-inertial odometry using nonlinear optimization[J]. The International Journal of Robotics Research, 2015, 34(3): 314-334.
[13] Mur-Artal R, Tardos J D. Visual-inertial monocular SLAM with map reuse[J]. IEEE Robotics and Automation Letters, 2017, 2(2): 796-803.
[14] Qin T, Li P, Shen S. VINS-Mono: a robust and versatile monocular visual-inertial state estimator[J]. IEEE Transactions on Robotics, 2018, 34(4): 1004-1020.
[15] Fang W, Zheng L, Deng H, et al. Real-time motion tracking for mobile augmented/virtual reality using adaptive visual-inertial fusion[J]. Sensors, 2017, 17(5): 1037.
[16] Zhou Pengbo, Zhao Fuqun, Wu Zhongke. Fracture surface matching method of terracotta based on geometric features[J]. Transactions of Beijing Institute of Technology, 2019, 39(5): 532-538. (in Chinese)