  • Engineering Index (EI) source journal
  • Chinese Core Journal
  • Chinese Science and Technology Paper Statistics source journal
  • Chinese Science Citation Database (CSCD) source journal


Anti-occlusion target detection algorithm for an anti-UAV system based on YOLOX-drone

XUE Shan, WANG Yabo, LÜ Qiongying, CAO Guohua

Citation: XUE Shan, WANG Yabo, LÜ Qiongying, CAO Guohua. Anti-occlusion target detection algorithm for anti-UAV system based on YOLOX-drone[J]. Chinese Journal of Engineering, 2023, 45(9): 1539-1549. doi: 10.13374/j.issn2095-9389.2022.10.24.004


doi: 10.13374/j.issn2095-9389.2022.10.24.004
Funding: Key Science and Technology R&D Project of the Jilin Provincial Department of Science and Technology (20180201058SF); Science and Technology Research Project of the Jilin Provincial Department of Education (JJKH20210812KJ)
Details
    Corresponding author:

    E-mail: 1660348815@qq.com

  • CLC number: TP391.4

Anti-occlusion target detection algorithm for anti-UAV system based on YOLOX-drone

  • Abstract: To address the difficulty of detecting UAV targets that are partially occluded in real-world scenes, this paper proposes YOLOX-drone, an anti-UAV target detection algorithm improved from YOLOX-S. First, a UAV image dataset is built. Second, the YOLOX-S detection network is constructed, and a coordinate attention mechanism is introduced on top of it to enhance the saliency of the UAV target in the image, highlighting useful features while suppressing useless ones. Then, the bottom-up path-augmentation structure in the feature fusion layer is removed to reduce network complexity, and an adaptive feature fusion network structure is designed to strengthen the expression of useful features, suppress interference, and improve detection accuracy. Tests on the DUT-Anti-UAV dataset show that, compared with YOLOX-S, YOLOv5-S, and YOLOX-tiny, YOLOX-drone improves average precision (IoU = 0.5) by 3.2%, 4.7%, and 10.1%, respectively. Tests on the self-built UAV image dataset show that, compared with the original YOLOX-S model, YOLOX-drone improves average precision (IoU = 0.5) by 2.4%, 2.1%, and 6.4% under no occlusion, partial occlusion, and heavy occlusion, respectively, verifying that the improved algorithm has good anti-occlusion detection capability.
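The average precision figures in the abstract are all reported at an IoU threshold of 0.5, i.e. a detection counts as correct only if its box overlaps the ground truth with intersection-over-union of at least 0.5. A minimal pure-Python sketch of that criterion follows; the function name and the (x1, y1, x2, y2) box convention are illustrative assumptions, not the paper's code.

```python
def iou(box_a, box_b):
    """Intersection over union of two (x1, y1, x2, y2) boxes."""
    ax1, ay1, ax2, ay2 = box_a
    bx1, by1, bx2, by2 = box_b
    # Intersection rectangle (empty if the boxes do not overlap).
    ix1, iy1 = max(ax1, bx1), max(ay1, by1)
    ix2, iy2 = min(ax2, bx2), min(ay2, by2)
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (ax2 - ax1) * (ay2 - ay1)
    area_b = (bx2 - bx1) * (by2 - by1)
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0

# At mAP@0.5, a detection is a true positive when IoU >= 0.5.
print(iou((0, 0, 10, 10), (0, 0, 10, 5)))  # 0.5 -> just passes the threshold
```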

     

  • Figure 1. YOLOX-S network structure diagram
  • Figure 2. YOLOX-drone network structure diagram
  • Figure 3. Coordinate attention mechanism structure diagram
  • Figure 4. ASCFM network structure diagram
  • Figure 5. ASFM network structure diagram
  • Figure 6. ACFM network structure diagram
  • Figure 7. Images from the drone dataset
  • Figure 8. Composite drone occlusion images
  • Figure 9. CAM visualization before and after improvement: (a) original image; (b) without the coordinate attention (CA) mechanism; (c) with the CA mechanism
  • Figure 10. Detection results before and after improving the feature fusion layer: (a) YOLOX-S results; (b) YOLOX (FPN + ASCFM) results
  • Figure 11. Comparison of YOLOX-S and YOLOX-drone detection results: (a) YOLOX-S results; (b) YOLOX-drone results

    Table 1. Feature map scaling

    | Feature map | Scale_0 | Scale_1 | Scale_2 |
    | X0 | — | Up sample (s = 2) | Up sample (s = 4) |
    | X1 | Down sample (s = 2) | — | Up sample (s = 2) |
    | X2 | Down sample (s = 4) | Down sample (s = 2) | — |
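The resampling in Table 1 brings the three feature maps X0, X1, and X2 to a common scale before fusion by upsampling or strided downsampling with the listed factors. The toy sketch below uses nearest-neighbour upsampling on nested lists; the representation and function names are illustrative assumptions, not the paper's implementation.

```python
def upsample(fmap, s):
    """Nearest-neighbour upsample a 2D map by integer factor s."""
    return [[v for v in row for _ in range(s)] for row in fmap for _ in range(s)]

def downsample(fmap, s):
    """Downsample a 2D map by keeping every s-th element (stride s)."""
    return [row[::s] for row in fmap[::s]]

x1 = [[1, 2], [3, 4]]                         # stand-in for feature map X1
print(upsample(x1, 2))                        # e.g. X1 -> Scale_2: up sample, s = 2
print(downsample(upsample(x1, 2), 2) == x1)   # stride-2 downsample undoes it: True
```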

    Table 2. Experimental condition settings

    | N (No occlusion) | R (Reasonable) | HO (Heavy occlusion) |
    | v = 0 | v ∈ (0, 0.35] | v ∈ (0.35, 0.6] |
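The grouping in Table 2 buckets each sample by its occluded fraction v. A trivial sketch of that rule follows; the function name is hypothetical.

```python
def occlusion_class(v):
    """Bucket the occluded fraction v per Table 2's experimental conditions."""
    if v == 0:
        return "N"    # no occlusion
    if v <= 0.35:
        return "R"    # reasonable (partial) occlusion
    if v <= 0.6:
        return "HO"   # heavy occlusion
    raise ValueError("v outside the evaluated range (0, 0.6]")

print([occlusion_class(v) for v in (0, 0.2, 0.5)])  # ['N', 'R', 'HO']
```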

    Table 3. Detection performance with different attention mechanisms (mAP@0.5, %)

    | Model | N | R | HO | N+R+HO |
    | YOLOX-S | 89.3 | 82.8 | 58.1 | 77.5 |
    | +SE | 89.6 | 83.5 | 61.2 | 79.1 |
    | +ECA | 89.9 | 83.2 | 61.0 | 79.1 |
    | +CBAM | 90.0 | 83.3 | 61.6 | 79.8 |
    | +CA | 91.4 | 83.8 | 61.7 | 80.1 |

    Table 4. Detection performance of YOLOX-S with different feature fusion layers (mAP@0.5, %)

    | Feature fusion method | N | R | HO | N+R+HO |
    | PANet | 89.3 | 82.8 | 58.1 | 77.5 |
    | FPN | 89.6 | 83.2 | 60.4 | 78.5 |
    | ASCFM | 90.4 | 84.3 | 64.3 | 80.5 |
    | PANet+ASCFM | 89.9 | 84.9 | 62.6 | 80.3 |
    | FPN+ASCFM | 90.8 | 84.9 | 64.4 | 80.8 |

    Table 5. ASCFM module ablation experiment (mAP@0.5, %)

    | Feature fusion method | N | R | HO | N+R+HO |
    | FPN | 89.6 | 83.2 | 60.4 | 78.5 |
    | FPN+ASFM | 89.6 | 83.4 | 61.9 | 79.5 |
    | FPN+ACFM | 89.9 | 84.8 | 62.6 | 80.0 |
    | FPN+ASCFM | 90.8 | 84.9 | 64.4 | 80.8 |
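Adaptive fusion modules of the ASFM/ACFM kind blend feature maps using learned weights that are softmax-normalized so they sum to one. The pure-Python sketch below illustrates only that weighted-blend step on equally sized 2D maps; in the real module the weights are learned and the inputs are tensors, so every name and value here is an assumption for illustration.

```python
import math

def softmax(ws):
    """Normalize raw weights so they are positive and sum to 1."""
    exps = [math.exp(w) for w in ws]
    total = sum(exps)
    return [e / total for e in exps]

def fuse(maps, weights):
    """Softmax-weighted sum of equally sized 2D feature maps."""
    a = softmax(weights)
    rows, cols = len(maps[0]), len(maps[0][0])
    return [[sum(a[k] * maps[k][i][j] for k in range(len(maps)))
             for j in range(cols)] for i in range(rows)]

m0 = [[1.0, 1.0], [1.0, 1.0]]  # stand-ins for two resampled feature maps
m1 = [[3.0, 3.0], [3.0, 3.0]]
print(fuse([m0, m1], [0.0, 0.0]))  # equal raw weights -> elementwise average
```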

    Table 6. Detection performance as each module is added step by step (mAP@0.5, %)

    | FPN | ASCFM | CA | N | R | HO | N+R+HO |
    |   |   |   | 89.3 | 82.8 | 58.1 | 77.5 |
    | ✓ |   |   | 89.6 | 83.2 | 60.4 | 78.5 |
    | ✓ | ✓ |   | 90.8 | 84.9 | 64.4 | 80.8 |
    | ✓ | ✓ | ✓ | 91.7 | 84.9 | 64.5 | 81.3 |

    Table 7. Network complexity before and after improvement

    | Model | Params (M) | GFLOPs | Time/ms |
    | YOLOX-S | 8.94 | 26.64 | 16.67 |
    | YOLOX-drone | 10.82 | 29.91 | 20.41 |

    Table 8. Detection performance of classical detection algorithms

    | Model | Params (M) | GFLOPs | mAP@0.5/% | Time/ms |
    | YOLOX-tiny | 5.03 | 6.40 | 83.3 | 3.94 |
    | YOLOv5-S | 7.01 | 15.80 | 88.7 | 13.00 |
    | YOLOX-S | 8.94 | 26.64 | 90.2 | 16.67 |
    | YOLOX-drone | 10.82 | 29.91 | 93.4 | 20.41 |
Figures (11) / Tables (8)
Metrics
  • Article views: 538
  • HTML full-text views: 256
  • PDF downloads: 143
  • Citations: 0
Publication history
  • Received: 2022-10-24
  • Published online: 2022-12-08
  • Issue published: 2023-09-25
