
Camouflaged object detection network based on human visual mechanisms

张冬冬 王春平 付强

引用本文: 张冬冬,王春平,付强. 基于人眼视觉机制的伪装目标检测网络[J]. 北京航空航天大学学报, 2025, 51(7): 2553-2561. doi: 10.13700/j.bh.1001-5965.2023.0511
Citation: ZHANG D D, WANG C P, FU Q. Camouflaged object detection network based on human visual mechanisms[J]. Journal of Beijing University of Aeronautics and Astronautics, 2025, 51(7): 2553-2561 (in Chinese). doi: 10.13700/j.bh.1001-5965.2023.0511


doi: 10.13700/j.bh.1001-5965.2023.0511
Article information

    Corresponding author: E-mail: 1418748495@qq.com

  • CLC number: TP753

Camouflaged object detection network based on human visual mechanisms

  • Abstract:

    Camouflaged object detection is an emerging visual detection task that aims to identify camouflaged objects seamlessly hidden in their surroundings and has wide applications in many fields. To address the inability of current camouflaged object detection algorithms to identify object structures and boundaries accurately and completely, a biologically inspired framework, named the positioning and refinement network (PRNet), is designed based on the human visual perception process when observing camouflaged images. Res2Net is used to extract the raw image features, and edge cues of the object are mined from multi-level information. A dedicated feature enhancement module enriches global contextual information while enlarging the receptive field. The positioning module uses a dual-attention mechanism to locate the approximate position of the object along both the channel and spatial dimensions. The refinement module attends to object cues in both the foreground and the background, and exploits multiple types of information to further refine the object's structure and edges. Extensive experiments on three widely used camouflaged object detection benchmark datasets show that the overall performance of the proposed network is clearly superior to that of 14 compared algorithms, and that it performs well in a variety of complex scenes.
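    The positioning module described in the abstract combines channel and spatial attention to produce a coarse localization of the camouflaged object. The paper's exact layer configuration is not reproduced on this page, so the PyTorch sketch below is only an illustration of that dual-attention idea under assumed settings: the class names (`ChannelAttention`, `SpatialAttention`, `DualAttentionPositioning`), the reduction ratio, and the kernel size are hypothetical rather than taken from PRNet.

```python
import torch
import torch.nn as nn


class ChannelAttention(nn.Module):
    """Squeeze global context into per-channel weights (channel dimension)."""
    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)
        self.fc = nn.Sequential(
            nn.Conv2d(channels, channels // reduction, 1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, 1),
            nn.Sigmoid(),
        )

    def forward(self, x):
        return x * self.fc(self.pool(x))


class SpatialAttention(nn.Module):
    """Highlight informative locations (spatial dimension)."""
    def __init__(self, kernel_size: int = 7):
        super().__init__()
        self.conv = nn.Conv2d(2, 1, kernel_size, padding=kernel_size // 2)

    def forward(self, x):
        avg_map = torch.mean(x, dim=1, keepdim=True)
        max_map, _ = torch.max(x, dim=1, keepdim=True)
        attn = torch.sigmoid(self.conv(torch.cat([avg_map, max_map], dim=1)))
        return x * attn


class DualAttentionPositioning(nn.Module):
    """Hypothetical positioning head: channel attention -> spatial attention -> coarse map."""
    def __init__(self, channels: int):
        super().__init__()
        self.ca = ChannelAttention(channels)
        self.sa = SpatialAttention()
        self.predict = nn.Conv2d(channels, 1, 1)  # single-channel coarse localization map

    def forward(self, feat):
        feat = self.sa(self.ca(feat))
        return self.predict(feat)


if __name__ == "__main__":
    # Toy check with a high-level backbone feature map (e.g., a late Res2Net stage).
    x = torch.randn(2, 256, 12, 12)
    coarse = DualAttentionPositioning(256)(x)
    print(coarse.shape)  # torch.Size([2, 1, 12, 12])
```

    In a PRNet-style pipeline, a head of this kind would sit on top of high-level backbone features, with the refinement module then sharpening the structure and edges of the coarse map using foreground and background cues.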

     

  • Figure 1.  Overall framework of PRNet

    Figure 2.  Feature enhancement module

    Figure 3.  Positioning module

    Figure 4.  Refinement module

    Figure 5.  Example of feature map visualization

    Figure 6.  PR curves and F-measure curves for different models

    Figure 7.  Detection results of different models

    Table 1.  Quantitative results of ablation experiments (“↑”/“↓” indicates that a larger/smaller value is better)

    Model          CAMO                          COD10K                        NC4K
                   S↑     E↑     F↑     M↓       S↑     E↑     F↑     M↓       S↑     E↑     F↑     M↓
    Baseline       0.533  0.523  0.304  0.197    0.584  0.583  0.307  0.105    0.585  0.572  0.367  0.153
    Baseline+FEM   0.758  0.810  0.704  0.087    0.781  0.849  0.690  0.041    0.812  0.868  0.767  0.058
    Baseline+PM    0.639  0.664  0.507  0.145    0.666  0.709  0.481  0.078    0.690  0.719  0.575  0.109
    Baseline+RM    0.778  0.834  0.741  0.085    0.802  0.871  0.709  0.037    0.830  0.885  0.794  0.052
    PRNet          0.837  0.893  0.799  0.051    0.834  0.904  0.756  0.032    0.875  0.897  0.820  0.041

    Table 2.  Quantitative detection results of different models

    Method         CAMO                          COD10K                        NC4K
                   S↑     E↑     F↑     M↓       S↑     E↑     F↑     M↓       S↑     E↑     F↑     M↓
    SINetV2[13]    0.815  0.870  0.783  0.074    0.813  0.886  0.713  0.037    0.845  0.901  0.802  0.048
    JCSOD[32]      0.767  0.810  0.729  0.086    0.800  0.872  0.718  0.036    0.835  0.886  0.805  0.049
    Rank-Net[27]   0.785  0.842  0.736  0.081    0.786  0.863  0.672  0.043    0.823  0.883  0.771  0.054
    MGL[33]        0.570  0.499  0.302  0.182    0.635  0.584  0.341  0.111    0.662  0.596  0.428  0.136
    PFNet[12]      0.776  0.832  0.738  0.087    0.799  0.875  0.700  0.039    0.828  0.886  0.786  0.053
    OCENet[34]     0.775  0.818  0.709  0.092    0.789  0.854  0.680  0.044    0.822  0.871  0.766  0.057
    ERRNet[22]     0.767  0.801  0.671  0.104    0.744  0.801  0.578  0.063    0.792  0.833  0.692  0.077
    BgNet[23]      0.652  0.684  0.534  0.137    0.657  0.718  0.475  0.082    0.699  0.749  0.594  0.101
    ZoomNet[14]    0.797  0.842  0.765  0.076    0.819  0.864  0.742  0.032    0.838  0.878  0.803  0.047
    BSANet[20]     0.800  0.852  0.770  0.076    0.812  0.876  0.733  0.035    0.836  0.887  0.803  0.050
    C2FNet[35]     0.768  0.823  0.726  0.089    0.808  0.883  0.724  0.036    0.834  0.890  0.798  0.049
    FAPNet[16]     0.798  0.849  0.758  0.082    0.820  0.885  0.727  0.036    0.849  0.898  0.805  0.048
    MFFN[36]       0.790  0.837  0.752  0.081    0.826  0.872  0.753  0.037    0.837  0.877  0.800  0.052
    ASBI[37]       0.821  0.874  0.782  0.073    0.829  0.896  0.742  0.033    0.857  0.907  0.818  0.044
    Ours (PRNet)   0.837  0.893  0.799  0.051    0.834  0.904  0.756  0.032    0.875  0.897  0.820  0.041
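    In Tables 1 and 2, S, E, F, and M follow the usual camouflaged object detection evaluation protocol: the structure measure [29], the E-measure [30], the F-measure [28], and the mean absolute error [31]. The structure and E-measures require their full definitions, but the two simpler metrics can be sketched in a few lines. The NumPy snippet below is only an illustrative computation under common-practice assumptions (an adaptive threshold of twice the prediction mean and β² = 0.3); it is not the authors' evaluation code.

```python
import numpy as np


def mae(pred, gt):
    """M: mean absolute error between a [0, 1] prediction map and a binary ground truth."""
    return np.mean(np.abs(pred.astype(np.float64) - gt.astype(np.float64)))


def adaptive_f_measure(pred, gt, beta2=0.3):
    """F: F-measure with the commonly used adaptive threshold (2 x mean of the prediction)."""
    pred = pred.astype(np.float64)
    gt = gt.astype(bool)
    thr = min(2.0 * pred.mean(), 1.0)
    binary = pred >= thr
    tp = np.logical_and(binary, gt).sum()
    precision = tp / (binary.sum() + 1e-8)
    recall = tp / (gt.sum() + 1e-8)
    return (1 + beta2) * precision * recall / (beta2 * precision + recall + 1e-8)


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    gt = (rng.random((64, 64)) > 0.7).astype(np.uint8)            # toy binary mask
    pred = np.clip(0.7 * gt + 0.3 * rng.random((64, 64)), 0, 1)   # noisy prediction map
    print(f"M = {mae(pred, gt):.3f}")
    print(f"F = {adaptive_f_measure(pred, gt):.3f}")
```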
  • [1] CHUDZIK P, MITCHELL A, ALKASEEM M, et al. Mobile real-time grasshopper detection and data aggregation framework[J]. Scientific Reports, 2020, 10(1): 1150. doi: 10.1038/s41598-020-57674-8
    [2] CHEN S M, XIONG J T, JIAO J M, et al. Citrus fruits maturity detection in natural environments based on convolutional neural networks and visual saliency map[J]. Precision Agriculture, 2022, 23(5): 1515-1531. doi: 10.1007/s11119-022-09895-2
    [3] LIN Z S, YE H X, ZHAN B, et al. An efficient network for surface defect detection[J]. Applied Sciences, 2020, 10(17): 6085. doi: 10.3390/app10176085
    [4] JI G P, CHOU Y C, FAN D P, et al. Progressively normalized self-attention network for video polyp segmentation[C]//Proceedings of the Medical Image Computing and Computer Assisted Intervention – MICCAI 2021. Cham: Springer, 2021: 142-152.
    [5] WU Y H, GAO S H, MEI J, et al. JCS: an explainable COVID-19 diagnosis system by joint classification and segmentation[J]. IEEE Transactions on Image Processing, 2021, 30: 3113-3126. doi: 10.1109/TIP.2021.3058783
    [6] YANG Z Y, DAI Q H, ZHANG J S. Visual perception driven collage synthesis[J]. Computational Visual Media, 2022, 8(1): 79-91. doi: 10.1007/s41095-021-0226-8
    [7] HORVÁTH G, PERESZLÉNYI Á, ÅKESSON S, et al. Striped bodypainting protects against horseflies[J]. Royal Society Open Science, 2019, 6(1): 181325. doi: 10.1098/rsos.181325
    [8] MERILAITA S, SCOTT-SAMUEL N E, CUTHILL I C. How camouflage works[J]. Philosophical Transactions of the Royal Society B: Biological Sciences, 2017, 372(1724): 20160341. doi: 10.1098/rstb.2016.0341
    [9] SENGOTTUVELAN P, WAHI A, SHANMUGAM A. Performance of decamouflaging through exploratory image analysis[C]// 2008 First International Conference on Emerging Trends in Engineering and Technology. Piscataway: IEEE Press, 2008: 6-10.
    [10] YIN J Q, HAN Y B, HOU W D, et al. Detection of the mobile object with camouflage color under dynamic background based on optical flow[J]. Procedia Engineering, 2011, 15: 2201-2205. doi: 10.1016/j.proeng.2011.08.412
    [11] FAN D P, JI G P, SUN G L, et al. Camouflaged object detection[C]//Proceedings of the 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition. Piscataway: IEEE Press, 2020: 2774-2784.
    [12] MEI H Y, JI G P, WEI Z Q, et al. Camouflaged object segmentation with distraction mining[C]//Proceedings of the 2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition. Piscataway: IEEE Press, 2021: 8768-8777.
    [13] FAN D P, JI G P, CHENG M M, et al. Concealed object detection[J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2022, 44(10): 6024-6042. doi: 10.1109/TPAMI.2021.3085766
    [14] PANG Y W, ZHAO X Q, XIANG T Z, et al. Zoom in and out: a mixed-scale triplet network for camouflaged object detection[C]//Proceedings of the 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition. Piscataway: IEEE Press, 2022: 2150-2160.
    [15] REN J J, HU X W, ZHU L, et al. Deep texture-aware features for camouflaged object detection[J]. IEEE Transactions on Circuits and Systems for Video Technology, 2023, 33(3): 1157-1167. doi: 10.1109/TCSVT.2021.3126591
    [16] ZHOU T, ZHOU Y, GONG C, et al. Feature aggregation and propagation network for camouflaged object detection[J]. IEEE Transactions on Image Processing, 2022, 31: 7036-7047. doi: 10.1109/TIP.2022.3217695
    [17] GAO S H, CHENG M M, ZHAO K, et al. Res2Net: a new multi-scale backbone architecture[J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2021, 43(2): 652-662. doi: 10.1109/TPAMI.2019.2938758
    [18] SUN Y J, WANG S, CHEN C, et al. Boundary-guided camouflaged object detection[EB/OL]. (2022-07-02)[2023-07-25]. http://doi.org/10.48550/arXiv.2207.00794.
    [19] WANG K, BI H B, ZHANG Y, et al. D2C-Net: a dual-branch, dual-guidance and cross-refine network for camouflaged object detection[J]. IEEE Transactions on Industrial Electronics, 2022, 69(5): 5364-5374. doi: 10.1109/TIE.2021.3078379
    [20] ZHU H W, LI P, XIE H R, et al. I can find you! boundary-guided separated attention network for camouflaged object detection[J]. Proceedings of the AAAI Conference on Artificial Intelligence, 2022, 36(3): 3608-3616. doi: 10.1609/aaai.v36i3.20273
    [21] QIN X B, ZHANG Z C, HUANG C Y, et al. BASNet: boundary-aware salient object detection[C]//Proceedings of the 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition. Piscataway: IEEE Press, 2019: 7471-7481.
    [22] JI G P, ZHU L, ZHUGE M C, et al. Fast camouflaged object detection via edge-based reversible re-calibration network[J]. Pattern Recognition, 2022, 123: 108414. doi: 10.1016/j.patcog.2021.108414
    [23] CHEN T Y, XIAO J, HU X G, et al. Boundary-guided network for camouflaged object detection[J]. Knowledge-Based Systems, 2022, 248: 108901. doi: 10.1016/j.knosys.2022.108901
    [24] WEI J, WANG S H, HUANG Q M. F³Net: fusion, feedback and focus for salient object detection[J]. Proceedings of the AAAI Conference on Artificial Intelligence, 2020, 34(7): 12321-12328. doi: 10.1609/aaai.v34i07.6916
    [25] XIE E Z, WANG W J, WANG W H, et al. Segmenting transparent objects in the wild[C]//Proceedings of the Computer Vision – ECCV 2020. Berlin: Springer, 2020: 696-711.
    [26] LE T N, NGUYEN T V, NIE Z L, et al. Anabranch network for camouflaged object segmentation[J]. Computer Vision and Image Understanding, 2019, 184: 45-56. doi: 10.1016/j.cviu.2019.04.006
    [27] LV Y Q, ZHANG J, DAI Y C, et al. Simultaneously localize, segment and rank the camouflaged objects[C]//Proceedings of the 2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition. Piscataway: IEEE Press, 2021: 11586-11596.
    [28] ACHANTA R, HEMAMI S, ESTRADA F, et al. Frequency-tuned salient region detection[C]//Proceedings of the 2009 IEEE Conference on Computer Vision and Pattern Recognition. Piscataway: IEEE Press, 2009: 1597-1604.
    [29] CHENG M M, FAN D P. Structure-measure: a new way to evaluate foreground maps[J]. International Journal of Computer Vision, 2021, 129(9): 2622-2638. doi: 10.1007/s11263-021-01490-8
    [30] FAN D P, JI G P, QIN X B, et al. Cognitive vision inspired object segmentation metric and loss function[J]. Scientia Sinica Informationis, 2021, 51(9): 1475. doi: 10.1360/SSI-2020-0370
    [31] PERAZZI F, KRÄHENBÜHL P, PRITCH Y, et al. Saliency filters: Contrast based filtering for salient region detection[C]//Proceedings of the 2012 IEEE Conference on Computer Vision and Pattern Recognition. Piscataway: IEEE Press, 2012: 733-740.
    [32] LI A X, ZHANG J, LV Y Q, et al. Uncertainty-aware joint salient object and camouflaged object detection[C]//Proceedings of the 2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition. Piscataway: IEEE Press, 2021: 10066-10076.
    [33] ZHAI Q, LI X, YANG F, et al. Mutual graph learning for camouflaged object detection[C]//Proceedings of the 2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition. Piscataway: IEEE Press, 2021: 12992-13002.
    [34] LIU J W, ZHANG J, BARNES N. Confidence-aware learning for camouflaged object detection[EB/OL]. (2021-06-22)[2023-07-13]. http://doi.org/10.48550/arXiv.2106.11641.
    [35] CHEN G, LIU S J, SUN Y J, et al. Camouflaged object detection via context-aware cross-level fusion[J]. IEEE Transactions on Circuits and Systems for Video Technology, 2022, 32(10): 6981-6993. doi: 10.1109/TCSVT.2022.3178173
    [36] ZHENG D H, ZHENG X C, YANG L T, et al. MFFN: multi-view feature fusion network for camouflaged object detection[EB/OL]. (2021-06-22)[2023-07-15]. http://doi.org/10.48550/arXiv.2210.06361.
    [37] ZHANG Q, SUN X X, CHEN Y R, et al. Attention-induced semantic and boundary interaction network for camouflaged object detection[J]. Computer Vision and Image Understanding, 2023, 233(8): 103719. doi: 10.1016/j.cviu.2023.103719
Publication history
  • Received: 2023-08-04
  • Accepted: 2023-10-05
  • Published online: 2023-11-01
  • Issue published: 2025-07-31
