Minghe Cao, Jianzhong Wang. Head Motion Detection in Gaze Based Aiming[J]. Journal of Beijing Institute of Technology, 2020, 29(1): 9-15. doi: 10.15918/j.jbit1004-0579.19106

Head Motion Detection in Gaze Based Aiming

doi: 10.15918/j.jbit1004-0579.19106
  • Received Date: 2019-11-27
  • Abstract: Unmanned weapons have great potential for wide use in future wars. Gaze-based aiming technology can be applied to control pan-tilt weapon systems remotely with high precision and efficiency. Gaze direction is a combination of head and eye movements, so accurate head motion detection is essential for gaze-based aiming. In this paper, a head motion detection method based on the fusion of inertial and vision information is proposed. Inertial sensors measure rotation at high frequency with good short-term accuracy, while vision sensors eliminate drift. By combining the strengths of both sensors, the proposed approach achieves high-frequency, real-time, drift-free head motion detection. Experiments show that the method smooths the outputs, constrains the drift of the inertial measurements, and achieves high detection accuracy.
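The complementary character of the two sensors lends itself to a simple illustration: integrate the high-rate gyro for responsiveness, and let the slower but drift-free vision measurement pull the estimate back. The sketch below is a minimal single-axis complementary filter, not the estimator used in the paper; the function name, blend weight, and the 200 Hz / 5 Hz sample rates are illustrative assumptions.

```python
def complementary_fuse(prev_angle, gyro_rate, vision_angle, dt, alpha=0.98):
    """One single-axis fusion step (all names here are illustrative).

    prev_angle   -- previous fused head angle, rad
    gyro_rate    -- angular rate from the inertial sensor, rad/s (high-rate, drifts)
    vision_angle -- absolute angle from the vision tracker, rad (slow, drift-free)
    alpha        -- blend weight; near 1.0 trusts the gyro on short time scales
    """
    gyro_angle = prev_angle + gyro_rate * dt  # integrate gyro: fast but drifting
    return alpha * gyro_angle + (1.0 - alpha) * vision_angle


# Toy demo: stationary head, gyro with a constant 0.02 rad/s bias,
# vision measurement held (zero-order hold) between 5 Hz samples.
dt = 0.005                      # 200 Hz inertial rate (assumed)
true_angle = 0.30               # ground-truth head yaw, rad
fused, vision = 0.0, 0.0
for k in range(2000):           # 10 s of simulated data
    gyro_rate = 0.0 + 0.02      # true rate (zero) plus gyro bias
    if k % 40 == 0:             # new vision measurement every 40 IMU steps (5 Hz)
        vision = true_angle
    fused = complementary_fuse(fused, gyro_rate, vision, dt)
print(f"fused yaw after 10 s: {fused:.3f} rad (truth {true_angle} rad)")
# Gyro integration alone would have accumulated 0.2 rad of drift over 10 s;
# the vision term bounds the steady-state error to a few milliradians.
```

A full implementation would estimate 3-DoF rotation (e.g., as quaternions) and compensate for vision latency; EKF- or optimization-based visual-inertial estimators generalize the same high-pass/low-pass split shown here.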
