Welcome to Journal of Beijing Institute of Technology
Volume 32, Issue 1
Feb. 2023
Jian Han, Jialu Li, Meng Liu, Zhe Ren, Zhimin Cao, Xingbin Liu. Salient Object Detection Based on a Novel Combination Framework Using the Perceptual Matching and Subjective-Objective Mapping Technologies[J]. JOURNAL OF BEIJING INSTITUTE OF TECHNOLOGY, 2023, 32(1): 95-106. doi: 10.15918/j.jbit1004-0579.2022.078

Salient Object Detection Based on a Novel Combination Framework Using the Perceptual Matching and Subjective-Objective Mapping Technologies

doi: 10.15918/j.jbit1004-0579.2022.078
Funds: The associate editor coordinating the review of this manuscript was Dr. Na Liu. This work was supported by the National Natural Science Foundation of China (No. 52174021) and the Key Research and Development Project of Hainan Province (No. ZDYF2022GXJS003).
More Information
  • Author Bios:

    Jian Han is currently a professor and doctoral supervisor at the School of Physics and Electronic Engineering, Northeast Petroleum University. His current research interests include oilfield intelligent detection and big data analysis.

    Jialu Li received the B.E. degree from Xi’an University of Technology, China, in 2020. She is currently pursuing a master’s degree at the School of Physics and Electronic Engineering, Northeast Petroleum University, China. Her research interests include salient object detection, machine learning, and computer vision.

    Meng Liu received the B.E. degree from Nanjing University of Information Engineering in 2020. Her research interests include machine learning, modern signal processing, and spatiotemporal big data analysis.

    Zhe Ren received the B.E. degree from Yantai University in 2021. Her research interests include salient object detection, machine learning, and computer vision.

    Zhimin Cao received the Ph.D. degree from Harbin University of Technology in 2016 and completed postdoctoral work in geological resources and geological engineering at the Daqing Oilfield postdoctoral research workstation/Northeast Petroleum University postdoctoral research station in June 2020. He is currently an associate professor and master’s supervisor at the School of Physics and Electronic Engineering, Northeast Petroleum University. His main research interests include industrial big data analysis and artificial intelligence knowledge mining, fine description and recognition of geological resources, image processing, and pattern recognition.

    Xingbin Liu received the Ph.D. degree in precision instruments and machinery from Harbin Institute of Technology in 1996 and completed postdoctoral work in the earth exploration and information discipline at Daqing Oilfield Petroleum University in 2001. He is currently a professor-level senior engineer at Northeast Petroleum University and the chief engineer of the Daqing testing technology service branch. His main research interests include intelligent water injection and logging tools.

  • Corresponding author: caozhimin@nepu.edu.cn
  • Received Date: 2022-07-14
  • Revised Date: 2022-08-26
  • Accepted Date: 2022-10-28
  • Publish Date: 2023-02-28
  • Abstract: Characterizing the integrity and fineness of non-connected regions and contours is a major challenge for existing salient object detection. The key is to make full use of the subjective and objective structural information obtained at different steps. Therefore, by simulating the human visual mechanism, this paper proposes a novel multi-decoder matching correction network and a subjective structural loss. Specifically, the loss pays different levels of attention to the foreground, boundary, and background of the ground truth map in a top-down structure, and the perceived saliency is mapped to the corresponding objective structure of the prediction map, which is extracted in a bottom-up manner. Thus, multi-level salient features can be effectively detected with the loss as a constraint. Then, through the mapping of an improved binary cross entropy loss, the differences between salient regions and objects are checked so that attention is paid to error-prone regions, achieving excellent error sensitivity. Finally, by tracking the identifying features horizontally and vertically, the interaction between subjective and objective information is maximized. Extensive experiments on five benchmark datasets demonstrate that, compared with 12 state-of-the-art methods, the algorithm achieves higher recall and precision, lower error, strong robustness and generalization ability, and can predict complete and refined saliency maps.
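
A note on the subjective structural loss: the abstract describes a loss that pays different levels of attention to the foreground, boundary, and background of the ground truth map. As a minimal sketch only, and not the exact loss proposed in the paper, the following PyTorch code shows one common way to realize such region-dependent weighting inside a binary cross entropy loss; the morphological boundary extraction and the weight values w_fg, w_bd, and w_bg are illustrative assumptions.

    import torch
    import torch.nn.functional as F

    def region_weighted_bce(pred_logits, gt, w_fg=2.0, w_bd=4.0, w_bg=1.0, kernel=15):
        # pred_logits: predicted saliency logits, shape (B, 1, H, W)
        # gt: binary ground-truth saliency map, shape (B, 1, H, W), values in {0, 1}
        # Approximate the boundary band as the difference between a dilated and an
        # eroded ground truth (max-pooling acts as morphological dilation).
        pad = kernel // 2
        dilated = F.max_pool2d(gt, kernel, stride=1, padding=pad)
        eroded = -F.max_pool2d(-gt, kernel, stride=1, padding=pad)
        boundary = (dilated - eroded).clamp(0, 1)
        # Per-pixel weights: w_bg on background, w_fg on foreground,
        # plus an extra w_bd inside the boundary band.
        weight = w_bg * torch.ones_like(gt) + (w_fg - w_bg) * gt + w_bd * boundary
        bce = F.binary_cross_entropy_with_logits(pred_logits, gt, reduction='none')
        return (weight * bce).sum() / weight.sum()

Weighting the boundary band more heavily than the object interior or the background is one way to make a loss sensitive to the error-prone contour regions mentioned in the abstract.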