Abstract: Cervical cancer is a malignant tumor that seriously endangers women's health and threatens their lives, and cytological screening based on image processing is the most widely used method of precancerous screening. In recent years, with the development of machine learning theory represented by deep learning, convolutional neural networks have achieved revolutionary breakthroughs in image recognition owing to their strong feature extraction ability, and have been widely applied to medical image analysis tasks such as abnormal cervical cell detection. However, pathological cell images have high resolution and large size, and most of their local regions contain no cell clusters. When a deep learning model locates and identifies abnormal cells by exhaustively enumerating candidate boxes, most of the resulting sub-images therefore contain no cell clusters. As the number of sub-images grows, feeding a large number of cluster-free images into the object detection network introduces substantial redundant computation and severely slows down the analysis of very large pathological images. This paper proposes a new detection strategy for abnormal cervical cells. For pathological cell images prepared with the membrane-based method, an image classification network based on deep learning first determines whether abnormal cells appear in a local region; if they do, a single-stage object detection method is then applied, so that abnormal cells can be quickly and accurately located and identified. Experimental results show that the proposed method doubles the speed of abnormal cervical cell detection.
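To make the classify-then-detect strategy concrete, the sketch below illustrates the pipeline described in the abstract. It is a minimal illustration rather than the authors' implementation: the 1024-pixel tile size, the 0.5 probability threshold, the ResNet-50 backbone for the gating classifier, and the generic `detector` callable (e.g. a YOLOv5 model returning boxes in tile coordinates) are all assumptions made here for demonstration.

```python
# Minimal sketch of the classify-then-detect pipeline (not the authors' code).
# Assumptions: 1024-pixel square tiles, a ResNet-50 binary classifier
# (cell cluster present / absent), a 0.5 decision threshold, and a generic
# single-stage `detector` callable that returns boxes in tile coordinates.
import torch
import torchvision.transforms as T
from torchvision.models import resnet50
from PIL import Image

TILE = 1024                                   # sub-image side length (assumed)
DEVICE = "cuda" if torch.cuda.is_available() else "cpu"

classifier = resnet50(num_classes=2).to(DEVICE).eval()    # gating classifier
to_tensor = T.Compose([T.Resize((224, 224)), T.ToTensor()])

def tiles(image: Image.Image, tile: int = TILE):
    """Exhaustively slide a tile-sized window over the whole pathological image."""
    w, h = image.size
    for y in range(0, h - tile + 1, tile):
        for x in range(0, w - tile + 1, tile):
            yield x, y, image.crop((x, y, x + tile, y + tile))

@torch.no_grad()
def screen(image: Image.Image, detector):
    """Run the expensive detector only on tiles the classifier flags as suspicious."""
    results = []
    for x, y, patch in tiles(image):
        logits = classifier(to_tensor(patch).unsqueeze(0).to(DEVICE))
        p_abnormal = torch.softmax(logits, dim=1)[0, 1].item()
        if p_abnormal < 0.5:                  # most cluster-free tiles exit here
            continue
        for box in detector(patch):           # e.g. a YOLOv5 model (assumed API)
            results.append((x, y, box))       # keep tile offset for slide coordinates
    return results
```

Because the lightweight classifier rejects the many sub-images that contain no cell clusters, the comparatively expensive detector runs only on the minority of suspicious tiles, which is the source of the speedup reported in Table 3.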
Table 1. Image annotation

Category  Number
ASC-US      2032
ASC-H       1156
LSIL        4387
HSIL        1389
Total       8964
Table 2. Cell cluster image classification experiment

Model             Accuracy/%  True negative rate/%  True positive rate/%  Average time consumption/s  Params/MB  Memory cost/GB
Resnet50          89.01       96.09                 86.93                 0.017                       22.51      4.12
Resnet101         89.62       89.39                 91.46                 0.027                       42.50      7.85
SE-Resnext50      84.59       96.09                 79.90                 0.016                       27.56      4.28
SE-Resnext101     82.50       91.62                 79.90                 0.033                       48.96      8.05
Efficientnet-b4   75.71       98.88                 57.29                 0.027                       19.43      5.12
Efficientnet-b7   83.41       98.88                 68.84                 0.043                       66.52      25.32
Resnext50_32X4d   88.25       94.41                 88.44                 0.012                       25.03      4.29
Resnext101_32X4d  87.20       89.94                 91.46                 0.025                       44.18      8.03
SE-Resnet101      82.50       92.18                 79.40                 0.023                       49.33      7.63
SE-Resnet50       85.11       88.83                 86.43                 0.011                       28.09      3.9
Nasnet            85.37       99.44                 72.36                 0.038                       88.75      24.04
Shufflenetv2      81.46       0                     99.50                 0.010                       7.39       0.60
Inceptionv4       81.72       99.44                 0                     0.024                       42.68      12.31
Xception          78.85       99.44                 99.50                 0.015                       22.86      8.42
Densenet121       80.58       94.41                 56.28                 0.021                       7.98       2.88
Table 3. Comparison of model inference time

Single-stage model  Time consumption/s  Params/MB  Two-stage model        Time consumption/s  Params/MB
Faster RCNN         2775                40.1       Resnet50+Faster RCNN   1089                62.61
Cascade RCNN        2877                65.9       Resnet50+Cascade RCNN  1178                88.41
Libra RCNN          3118                41.6       Resnet50+Libra RCNN    1496                64.11
Tridentnet          4469                33.1       Resnet50+Tridentnet    2106                55.61
Foveabox            2437                36.0       Resnet50+Foveabox      1189                58.51
ATSS                3014                31.2       Resnet50+ATSS          1450                53.71
YoloV5              1386                45.7       Resnet50+YoloV5        695                 68.21
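As a quick check on the speedup claim, the YoloV5 row in Table 3 gives 1386 s for detection alone versus 695 s when a Resnet50 classifier first filters the sub-images, i.e. 1386 / 695 ≈ 2.0, consistent with the roughly doubled detection speed stated in the abstract; the other detectors show speedups of about 2.0× to 2.5×.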
Table 4. Comparison of model recognition accuracy

Single-stage model  AP50/%  Two-stage model        AP50/%
Faster RCNN         70.1    Resnet50+Faster RCNN   66.8
Cascade RCNN        69.2    Resnet50+Cascade RCNN  65.7
Libra RCNN          68.3    Resnet50+Libra RCNN    67.0
Tridentnet          65.7    Resnet50+Tridentnet    59.7
Foveabox            67.3    Resnet50+Foveabox      61.9
ATSS                63.8    Resnet50+ATSS          63.5
YoloV5              75.3    Resnet50+YoloV5        70.1