  • Indexed in Ei Compendex (Engineering Index)
  • Chinese Core Journal
  • Source journal for the Chinese Science and Technology Paper Statistics
  • Source journal for the Chinese Science Citation Database (CSCD)

Self-attention guided multi-sequence fusion model for differentiation of hepatocellular carcinoma

JIA Xi-bin, SUN Zheng, YANG Da-wei, YANG Zheng-han

JIA Xi-bin, SUN Zheng, YANG Da-wei, YANG Zheng-han. Self-attention guided multi-sequence fusion model for differentiation of hepatocellular carcinoma[J]. Chinese Journal of Engineering, 2021, 43(9): 1149-1156. doi: 10.13374/j.issn2095-9389.2021.01.13.003


doi: 10.13374/j.issn2095-9389.2021.01.13.003
Fund: Supported by the National Natural Science Foundation of China (61871276, U19B2039)
Corresponding author: E-mail: yangzhenghan@vip.163.com

  • CLC number: TP183

Self-attention guided multi-sequence fusion model for differentiation of hepatocellular carcinoma

  • Abstract: Noninvasive quantitative analysis of lesions, combining radiological imaging with artificial intelligence, is an important research direction in smart healthcare. Targeting noninvasive quantitative estimation of the differentiation grade of hepatocellular carcinoma (HCC), and drawing on radiologists' clinical image-reading experience, this paper proposes a self-attention guided, multi-sequence fusion model for noninvasive discrimination of the histological differentiation grade of HCC. Taking multiple sequences of dynamic contrast-enhanced magnetic resonance imaging (DCE-MRI) as input, the model learns the weight of each temporal sequence, and of the multiple scan slices within each sequence, for the grading task, and emphasizes the temporal and spatial features with strong discriminative power to improve grading performance. The model was trained and tested on a clinical dataset from a Grade-A tertiary hospital. Experimental results show that the proposed model achieves the highest classification performance among several baseline and mainstream models; on the WHO histological grading task, its accuracy, sensitivity, and precision reach 80%, 82%, and 82%, respectively.
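The sequence-weighting idea described in the abstract, i.e. scoring the features of each DCE-MRI sequence and normalizing the scores into fusion weights via a softmax, can be sketched as follows. This is a minimal illustration, not the paper's actual architecture: the feature dimension, the random features, and the learned scoring vector are all stand-in assumptions.

```python
import numpy as np

def softmax(x, axis=-1):
    """Numerically stable softmax."""
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attention_fuse(seq_feats, score_vec):
    """Fuse per-sequence features with attention weights.

    seq_feats: (num_sequences, feat_dim) feature vector per MRI sequence
    score_vec: (feat_dim,) learned scoring vector (random here, for illustration)
    """
    scores = seq_feats @ score_vec      # one scalar score per sequence
    weights = softmax(scores)           # attention weights, sum to 1
    fused = weights @ seq_feats         # weighted sum over sequences
    return fused, weights

rng = np.random.default_rng(0)
feats = rng.normal(size=(5, 8))         # 5 enhancement sequences, 8-dim features
fused, weights = attention_fuse(feats, rng.normal(size=8))
```

In the paper's model the same mechanism is applied at two levels: across the temporal sequences and across the scan slices within each sequence.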

     

  • Figure 1.  Structure of the "self-attention" model

    Figure 2.  2D images and 3D reconstruction of the five enhanced sequences, with the 2D raw data and the corresponding data augmentation results

    Figure 3.  Feature distributions in the embedding space before and after training, and the corresponding confusion matrices for the WHO (three-class) and Edmondson (four-class) tasks: (a) feature space before training, three-class task; (b) feature space after training, three-class task; (c) confusion matrix, three-class task; (d) feature space before training, four-class task; (e) feature space after training, four-class task; and (f) confusion matrix, four-class task

    Table 1.  Augmentation results for a dataset with HCC grading under the WHO grading system

    Dataset        Well   Moderately   Poorly
    Training set     56          208       54
    Test set         32          104       26
    Total            88          312       80

    Table 2.  Augmentation results for a dataset with HCC grading under the Edmondson grading system

    Dataset         I     II    III    IV
    Training set   56     88    120    54
    Test set       32     40     64    26
    Total          88    128    184    80

    Table 3.  Detailed comparison of experimental results on the test set under the WHO grading standard

    Model                 Accuracy        Recall          Precision       F1-score
    Our method            0.8021±0.0478   0.8231±0.0404   0.8215±0.0537   0.8221±0.0477
    MCF-3DCNN [15]        0.7188±0.0405   0.6667±0.0195   0.7874±0.0416   0.7014±0.0337
    3D ResNet [22]        0.7312±0.0627   0.7353±0.0557   0.7762±0.0443   0.7613±0.0537
    3D SE-ResNet [23]     0.7453±0.0675   0.7342±0.0755   0.7627±0.0732   0.7665±0.0675
    3D SE-DenseNet [24]   0.7854±0.0445   0.7923±0.0638   0.8117±0.0417   0.7913±0.0576

    Table 4.  Detailed comparison of experimental results on the test set under the Edmondson grading standard

    Model                 Accuracy        Recall          Precision       F1-score
    Our method            0.7734±0.0318   0.7889±0.0412   0.8089±0.0416   0.7896±0.0225
    MCF-3DCNN [15]        0.6322±0.0522   0.5482±0.1338   0.6424±0.0657   0.6431±0.0824
    3D ResNet [22]        0.7037±0.0731   0.7229±0.0442   0.7404±0.0421   0.7203±0.0336
    3D SE-ResNet [23]     0.7108±0.0644   0.7492±0.0531   0.7637±0.0788   0.7566±0.0631
    3D SE-DenseNet [24]   0.7227±0.0341   0.7762±0.0426   0.7738±0.0446   0.7876±0.0512
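The accuracy, recall, precision, and F1 values reported in Tables 3 and 4 are averaged per class over the confusion matrix. A minimal sketch of how such macro-averaged metrics are computed, using a toy three-class example rather than the paper's data:

```python
import numpy as np

def macro_metrics(y_true, y_pred, n_classes):
    """Accuracy plus macro-averaged recall, precision, and F1."""
    cm = np.zeros((n_classes, n_classes), dtype=float)
    for t, p in zip(y_true, y_pred):
        cm[t, p] += 1                     # rows: true class, cols: predicted class
    tp = np.diag(cm)
    col = cm.sum(axis=0)                  # predicted count per class
    row = cm.sum(axis=1)                  # true count per class
    precision = np.divide(tp, col, out=np.zeros_like(tp), where=col > 0)
    recall = np.divide(tp, row, out=np.zeros_like(tp), where=row > 0)
    denom = precision + recall
    f1 = np.divide(2 * precision * recall, denom,
                   out=np.zeros_like(tp), where=denom > 0)
    return tp.sum() / cm.sum(), recall.mean(), precision.mean(), f1.mean()

# Toy three-class example (well / moderately / poorly differentiated)
acc, rec, prec, f1 = macro_metrics([0, 0, 1, 1, 2, 2], [0, 0, 1, 0, 2, 2], 3)
```

The ± values in the tables would then come from repeating this computation over multiple runs or cross-validation folds and reporting mean and standard deviation.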
  • [1] Yang J D, Hainaut P, Gores G J, et al. A global view of hepatocellular carcinoma: trends, risk, prevention and management. Nat Rev Gastroenterol Hepatol, 2019, 16(10): 589 doi: 10.1038/s41575-019-0186-y
    [2] Jiang Y, Sun A H, Zhao Y, et al. Proteomics identifies new therapeutic targets of early-stage hepatocellular carcinoma. Nature, 2019, 567(7747): 257 doi: 10.1038/s41586-019-0987-8
    [3] Lin H X, Wei C, Wang G X, et al. Automated classification of hepatocellular carcinoma differentiation using multiphoton microscopy and deep learning. J Biophotonics, 2019, 12(7): e201800435
    [4] Jimenez H, Wang M H, Zimmerman J W, et al. Tumour-specific amplitude-modulated radiofrequency electromagnetic fields induce differentiation of hepatocellular carcinoma via targeting Cav3.2 T-type voltage-gated calcium channels and Ca2+ influx. EBioMedicine, 2019, 44: 209 doi: 10.1016/j.ebiom.2019.05.034
    [5] Shioga T, Kondo R, Ogasawara S, et al. Usefulness of tumor tissue biopsy for predicting the biological behavior of hepatocellular carcinoma. Anticancer Res, 2020, 40(7): 4105 doi: 10.21873/anticanres.14409
    [6] Parr R L, Mills J, Harbottle A, et al. Mitochondria, prostate cancer, and biopsy sampling error. Discov Med, 2013, 15(83): 213
    [7] Henken K, Van Gerwen D, Dankelman J, et al. Accuracy of needle position measurements using fiber Bragg gratings. Minim Invasive Ther Allied Technol, 2012, 21(6): 408 doi: 10.3109/13645706.2012.666251
    [8] Li J Z, Xue F, Xu X H, et al. Dynamic contrast enhanced MRI differentiates hepatocellular carcinoma from hepatic metastasis of rectal cancer by extracting pharmacokinetic parameters and radiomic features. Exp Ther Med, 2020, 20(4): 3643
    [9] Kaissis G A, Lohöfer F K, Hörl M, et al. Combined DCE-MRI- and FDG-PET enable histopathological grading prediction in a rat model of hepatocellular carcinoma. Eur J Radiol, 2020, 124: 108848 doi: 10.1016/j.ejrad.2020.108848
    [10] Khalifa F, Soliman A, El-Baz A, et al. Models and methods for analyzing DCE-MRI: A review. Med Phys, 2014, 41(12): 124301 doi: 10.1118/1.4898202
    [11] Yang D W, Jia X B, Xiao Y J, et al. Noninvasive evaluation of the pathologic grade of hepatocellular carcinoma using MCF-3DCNN: A pilot study. Biomed Res Int, 2019: 9783106
    [12] Chernyak V, Fowler K J, Kamaya A, et al. Liver imaging reporting and data system (LI-RADS) version 2018: Imaging of hepatocellular carcinoma in at-risk patients. Radiology, 2018, 289(3): 816 doi: 10.1148/radiol.2018181494
    [13] Suk H I, Lee S W, Shen D G. Hierarchical feature representation and multimodal fusion with deep learning for AD/MCI diagnosis. NeuroImage, 2014, 101: 569 doi: 10.1016/j.neuroimage.2014.06.077
    [14] Wang Q Y, Que D S. Staging of hepatocellular carcinoma using deep feature in contrast-enhanced MR images//2nd International Conference on Computer Engineering, Information Science & Application Technology (ICCIA 2017). Wuhan, 2016: 186
    [15] Jia X B, Xiao Y J, Yang D W, et al. Temporal-spatial feature learning of dynamic contrast enhanced-MR images via 3D convolutional neural networks//Chinese Conference on Image and Graphics Technologies. Singapore, 2018: 380
    [16] Jia X B, Xiao Y J, Yang D W, et al. Multi-parametric MRIs based assessment of Hepatocellular Carcinoma Differentiation with Multi-scale ResNet. TIIS, 2019, 13(10): 5179
    [17] Antropova N, Huynh B Q, Giger M L. A deep feature fusion methodology for breast cancer diagnosis demonstrated on three imaging modality datasets. Med Phys, 2017, 44(10): 5162 doi: 10.1002/mp.12453
    [18] Hu Q Y, Whitney H M, Giger M L. A deep learning methodology for improved breast cancer diagnosis using multiparametric MRI. Sci Rep, 2020, 10: 10536 doi: 10.1038/s41598-020-67441-4
    [19] Zhang T H, Fan S L, Guo X X, et al. Intelligent medical assistant diagnosis method based on data fusion. Chin J Eng, doi: 10.13374/j.issn2095-9389.2021.01.12.003

    [20] Ye H, Chen Q J, Wu H M, et al. Classification of liver cancer images based on deep learning//International conference on Data Science, Medicine and Bioinformatics. Singapore, 2020: 184
    [21] Zhou L, Rui J G, Zhou W X, et al. Edmondson-Steiner grade: A crucial predictor of recurrence and survival in hepatocellular carcinoma without microvascular invasion. Pathol Res Pract, 2017, 213(7): 824 doi: 10.1016/j.prp.2017.03.002
    [22] He K M, Zhang X Y, Ren S Q, et al. Deep residual learning for image recognition//2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR). Las Vegas, 2016: 770
    [23] Hu J, Shen L, Sun G. Squeeze-and-excitation networks//2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition. Salt Lake City, 2018: 7132
    [24] Zhou Q, Zhou Z Y, Chen C M, et al. Grading of hepatocellular carcinoma using 3D SE-DenseNet in dynamic enhanced MR images. Comput Biol Med, 2019, 107: 47 doi: 10.1016/j.compbiomed.2019.01.026
    [25] Yoshinobu Y, Iwamoto Y, Han X H, et al. Deep learning method for content-based retrieval of focal liver lesions using multiphase contrast-enhanced computer tomography images//2020 IEEE International Conference on Consumer Electronics (ICCE). Las Vegas, 2020: 1
Figures (3) / Tables (4)
Metrics
  • Article views: 780
  • Full-text HTML views: 460
  • PDF downloads: 60
  • Cited by: 0
Publication history
  • Received: 2021-01-13
  • Published online: 2021-03-20
  • Issue published: 2021-09-18
