Journal of Beijing Institute of Technology
Volume 31, Issue 5
Oct. 2022
Citation: Hao Wang. Optimization and Design of Cloud-Edge-End Collaboration Computing for Autonomous Robot Control Using 5G and Beyond[J]. JOURNAL OF BEIJING INSTITUTE OF TECHNOLOGY, 2022, 31(5): 454-463. doi: 10.15918/j.jbit1004-0579.2022.023

Optimization and Design of Cloud-Edge-End Collaboration Computing for Autonomous Robot Control Using 5G and Beyond

doi: 10.15918/j.jbit1004-0579.2022.023
More Information
  • Author Bio:

    Hao Wang received his master's degree in traffic control and information engineering from Beijing Jiaotong University, Beijing, China, in 2009. He is a senior engineer at China Railway Siyuan Survey and Design Group Co., Ltd. He has experience in the commissioning of many urban rail transit projects, and his design and research work covers the main weak-current disciplines of rail transit. He is mainly engaged in rail transit data communication research.

  • Corresponding author: 004482@crfsdi.com
  • Received Date: 2022-02-28
  • Revised Date: 2022-03-19
  • Accepted Date: 2022-04-02
  • Publish Date: 2022-10-31
  • Abstract: Robots have important applications in industrial production, transportation, environmental monitoring, and other fields, and multi-robot collaboration has become a research hotspot in recent years. Autonomous multi-robot collaborative tasks are constrained by communication, leading to problems such as unbalanced resource allocation, slow system response to dynamic changes in the environment, and limited collaborative operation capability. Combining 5G-and-beyond communication with edge computing can effectively reduce the transmission delay of task offloading and improve task processing efficiency. This paper first designs an autonomous robot collaborative computing architecture based on 5G and beyond and mobile edge computing (MEC). It then studies the robot cooperative computing optimization problem according to the task characteristics of the robot swarm, and proposes a Q-learning-based reinforcement learning task offloading scheme that minimizes the overall energy consumption and delay of the robot cluster. Finally, simulation experiments demonstrate that the method offers significant performance advantages.
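
The paper's scheme itself is not reproduced on this page, but the core idea the abstract describes, a Q-learning agent that decides per task whether to execute locally, on an MEC server, or in the cloud so as to minimize a weighted energy-plus-delay cost, can be illustrated with a minimal sketch. Everything here is an assumption for illustration: the three-action set, the discretized task sizes, the cost profiles, and the 0.5/0.5 energy/delay weights are not the paper's actual model.

```python
# Minimal, hypothetical sketch of a Q-learning task-offloading decision.
# Action set, task sizes, cost profiles, and weights are illustrative
# assumptions, not the formulation used in the paper.
import random
from collections import defaultdict

ACTIONS = ("local", "edge", "cloud")  # where a robot may execute a task


def cost(task_size, action):
    """Weighted energy + delay cost of running a task (toy linear model)."""
    # (energy per unit of work, delay per unit of work) -- assumed numbers:
    # local execution burns energy but is fast; cloud is efficient but slow.
    profiles = {"local": (5.0, 1.0), "edge": (1.0, 2.0), "cloud": (0.5, 6.0)}
    energy_per_unit, delay_per_unit = profiles[action]
    return 0.5 * energy_per_unit * task_size + 0.5 * delay_per_unit * task_size


def train(episodes=5000, alpha=0.1, epsilon=0.1):
    """Learn Q-values for (task_size, action) with epsilon-greedy exploration."""
    q = defaultdict(float)  # Q[(state, action)], zero-initialized
    for _ in range(episodes):
        state = random.choice((1, 5, 10))  # discretized task size
        if random.random() < epsilon:
            action = random.choice(ACTIONS)  # explore
        else:
            action = max(ACTIONS, key=lambda a: q[(state, a)])  # exploit
        reward = -cost(state, action)  # minimizing cost == maximizing -cost
        # One-step update: each offloading decision is treated as a terminal
        # episode, so there is no discounted successor term.
        q[(state, action)] += alpha * (reward - q[(state, action)])
    return q


if __name__ == "__main__":
    q = train()
    for size in (1, 5, 10):
        best = max(ACTIONS, key=lambda a: q[(size, a)])
        print(f"task size {size}: offload to {best}")
```

Under these assumed cost profiles the agent converges to the edge server for every task size, which is the behavior one would expect from the architecture the abstract describes; a full multi-robot formulation would extend the state with channel quality and server load.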
  • [1]
    I. Malỳ, D. Sedláček, and P. Leitao, “Augmented reality experiments with industrial robot in industry 4.0 environment,” in 2016 IEEE 14th International Conference on Industrial Informatics (INDIN), pp. 176-181, 2016.
    [2]
    T. Ribeiro, I. Garcia, D. Pereira, J. Ribeiro, G. Lopes, and A. F. Ribeiro, “Development of a prototype robot for transportation within industrial environments,” in 2017 IEEE International Conference on Autonomous Robot Systems and Competitions (ICARSC), pp. 192-197, 2017.
    [3]
    M. Dunbabin and L. Marques,“Robots for environmental monitoring: Significant advancements and applications,” IEEE Robotics & Automation Magazine, vol. 19, no. 1, pp. 24-39, 2012.
    [4]
    Y. Rizk, M. Awad, and E. W. Tunstel,“Cooperative heterogeneous multi-robot systems: A survey,” ACM Computing Surveys (CSUR), vol. 52, no. 2, pp. 1-31, 2019.
    [5]
    J. Dumora, F. Geffard, C. Bidard, T. Brouillet, and P. Fraisse, “Experimental study on haptic communication of a human in a shared human-robot collaborative task,” in 2012 IEEE/RSJ International Conference on Intelligent Robots and Systems, pp. 5137-5144, 2012.
    [6]
    M. Shafi, A. F. Molisch, J. Smith, T. Haustein, Z. Hu, De Silva, F. Tufvesson, A. Benjebbour, and G. Wunder,“5G: A tutorial overview of standards, trials, challenges, deployment, and practice,” IEEE Journal on Selected Areas in Communications, vol. 35, no. 6, pp. 1201-1221, 2017. doi:10.1109/JSAC.2017.2692307
    [7]
    L. Chettri and R. Bera,“A comprehensive survey on internet of things (IoT) toward 5G wireless systems,” IEEE Internet of Things Journal, vol. 7, no. 1, pp. 16-32, 2019.
    [8]
    F. Voigtländer, A. Ramadan, J. Eichinger, C. Lenz, D. Pensky, and A. Knoll, “5G for robotics: Ultra-low latency control of distributed robotic systems,” in 2017 International Symposium on Computer Science and Intelligent Controls (ISCSIC), pp. 69-72, 2017.
    [9]
    I. Elhajj, C. M. Kit, Y. H. Liu, N. Xi, A. Goradia, and T. Fukuda, “Tele-coordinated control of multi-robot systems via the internet,” in 2003 IEEE International Conference on Robotics and Automation (Cat. No. 03CH37422), vol. 2, pp. 1646-1652, 2003.
    [10]
    N. Abbas, Y. Zhang, A. Taherkordi, and T. Skeie,“Mobile edge computing: A survey,” IEEE Internet of Things Journal, vol. 5, no. 1, pp. 450-465, 2017.
    [11]
    S. Kekki, W. Featherstone, Y. Fang, P. Kuure, A. Li, A. Ranjan, D. Purkayastha, J. P. Feng, D. Frydman, and G. Verin, “MEC in 5G networks,” ETSI White Paper, vol. 28, pp. 1-28, 2018.
    [12]
    K. Kumar, J. Liu, Y. H. Lu, and B. Bhargava,“A survey of computation offloading for mobile systems,” Mobile Networks and Applications, vol. 18, no. 1, pp. 129-140, 2013. doi:10.1007/s11036-012-0368-0
    [13]
    A. Rahman, J. Jin, A. Rahman, A. Cricenti, M. Afrin, and Y. N. Dong, “Energy-efficient optimal task offloading in cloud networked multi-robot systems,” Computer Networks, vol. 160, pp. 11-32, 2019.
    [14]
    L. P. Kaelbling, M. L. Littman, and A. W. Moore, “Reinforcement learning: A survey,” Journal of Artificial Intelligence Research, vol. 4, pp. 237-285, 1996.
    [15]
    R. S. Sutton and A. G. Barto, Reinforcement learning: An introduction. Boston, USA: MIT Press, 2018.
    [16]
    R. Nian, J. Liu, and B. Huang,“A review on reinforcement learning: Introduction and applications in industrial process control,” Computers & Chemical Engineering, vol. 139, pp. 106886, 2020.
    [17]
    H. Lu, Y. Li, S. Mu, D. Wang, H. Kim, and S. Serikawa,“Motor anomaly detection for unmanned aerial vehicles using reinforcement learning,” IEEE Internet of Things Journal, vol. 5, no. 4, pp. 2315-2322, 2017.
    [18]
    J. He, J. Chen, X. He, J. Gao, L. Li, L. Deng, and M. Ostendorf, “Deep reinforcement learning with a natural language action space,” arXiv preprint arXiv: 1511.04636, 2015.
    [19]
    J. P. Queralta, L. Qingqing, Z. Zou, and T. Westerlund, “Enhancing autonomy with blockchain and multi-access edge computing in distributed robotic systems,” in 2020 Fifth International Conference on Fog and Mobile Edge Computing (FMEC), pp. 180-187, 2020.
    [20]
    T. Fong, C. Thorpe, and C. Baur,“Multi-robot remote driving with collaborative control,” IEEE Transactions on Industrial Electronics, vol. 50, no. 4, pp. 699-704, 2003. doi:10.1109/TIE.2003.814768
    [21]
    Y. Wen, W. Zhang, and H. Luo, “Energy-optimal mobile application execution: Taming resource-poor mobile devices with cloud clones,” in 2012 Proceedings IEEE Infocom, pp. 2716–2720, 2012.
    [22]
    X. Chen, L. Jiao, W. Li, and X. Fu,“Efficient multi-user computation offloading for mobile-edge cloud computing,” IEEE/ACM Transactions on Networking, vol. 24, no. 5, pp. 2795-2808, 2015.
    [23]
    K. Zhang, Y. Mao, S. Leng, Q. Zhao, L. Li, X. Peng, L. Pan, S. Maharjan, and Y. Zhang, “Energy-efficient offloading for mobile edge computing in 5G heterogeneous networks,” IEEE Access, vol. 4, pp. 5896-5907, 2016.
    [24]
    M. Van Otterlo and M. Wiering, “Reinforcement learning and markov decision processes,” in Reinforcement learning. Berlin: Springer, 2012, pp. 3-42.
    [25]
    C. J. Watkins and P. Dayan, “Q-learning,” Machine Learning, vol. 8, no. 3-4, pp. 279-292, 1992.
  • 加载中
