  • Indexed in the Engineering Index (EI)
  • Chinese Core Journal
  • Source journal for Chinese Science and Technology Paper Statistics
  • Source journal of the Chinese Science Citation Database (CSCD)


Intelligent medical assistant diagnosis method based on data fusion

ZHANG Tao-hong, FAN Su-li, GUO Xu-xu, LI Qian-qian

Citation: ZHANG Tao-hong, FAN Su-li, GUO Xu-xu, LI Qian-qian. Intelligent medical assistant diagnosis method based on data fusion[J]. Chinese Journal of Engineering, 2021, 43(9): 1197-1205. doi: 10.13374/j.issn2095-9389.2021.01.12.003


doi: 10.13374/j.issn2095-9389.2021.01.12.003
Funds: supported by the Fundamental Research Funds for the Central Universities, China (FRF-GF-20-16B)
Details
    Corresponding author:

    E-mail: zth_ustb@163.com

  • CLC number: TG142.71

Intelligent medical assistant diagnosis method based on data fusion

More Information
  • Abstract: A physician's diagnosis must combine many kinds of data, such as clinical symptoms and imaging examinations. Based on this observation, a medical assistant diagnosis method capable of data fusion is proposed. The method combines a patient's image information (e.g., CT images) with numerical data (e.g., clinical diagnostic information) and uses the combined information to automatically predict the patient's condition; on this basis, a deep-learning-based medical assistant diagnosis model is presented. The model is built on a convolutional neural network; it takes both images and numerical data as input and outputs the patient's disease status. By exploiting more comprehensive information, the method helps improve the accuracy of automatic diagnosis and reduce diagnostic error. Moreover, the proposed model alone can process multiple types of data in a single pass, which saves diagnosis time to some extent. The effectiveness of the proposed method is verified on two datasets; the experimental results show that the method is effective and improves the accuracy of assistant diagnosis.
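The abstract describes a model that fuses CT images with structured clinical values. As a rough illustration only, not the paper's implementation (which builds on a convolutional backbone), the late-fusion idea can be sketched in plain Python: an image branch reduces the image to a feature vector, the clinical values are concatenated to it, and a linear layer with softmax yields class probabilities. All names and toy values below are hypothetical.

```python
import math
import random

random.seed(0)

def image_features(img):
    # Stand-in for the CNN image branch: mean intensity per channel.
    h, w, c = len(img), len(img[0]), len(img[0][0])
    return [sum(img[i][j][k] for i in range(h) for j in range(w)) / (h * w)
            for k in range(c)]

def fuse_and_classify(img, clinical, weights, bias):
    # Late fusion: concatenate image features with structured clinical
    # features, then apply a linear layer followed by softmax.
    feats = image_features(img) + clinical
    logits = [sum(f * w for f, w in zip(feats, row)) + b
              for row, b in zip(weights, bias)]
    m = max(logits)
    exps = [math.exp(z - m) for z in logits]
    s = sum(exps)
    return [e / s for e in exps]

# Toy inputs: a 4x4 "CT slice" with 3 channels plus 3 clinical values.
img = [[[random.random() for _ in range(3)] for _ in range(4)]
       for _ in range(4)]
clinical = [0.66, 0.0, 1.0]
n_feats, n_classes = 6, 4  # 3 image + 3 clinical features -> 4 PHD classes
weights = [[random.uniform(-0.1, 0.1) for _ in range(n_feats)]
           for _ in range(n_classes)]
bias = [0.0] * n_classes
probs = fuse_and_classify(img, clinical, weights, bias)
```

The sketch only conveys the data-flow: both modalities meet in one feature vector before classification, which is why a single model can process both data types at once.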

     

  • Figure 1.  Diagram of the model structure based on the proposed method

    Figure 2.  Structure of the basic unit and the down-sampling unit: (a) structure of the basic unit; (b) structure of the down-sampling unit

    Figure 3.  Changes in prediction accuracy and loss during training: (a) changes in accuracy; (b) changes in loss

    Figure 4.  Two types of samples in the COVID dataset: (a) samples without COVID-19; (b) samples with COVID-19

    Figure 5.  Changes in prediction accuracy and loss during training: (a) changes in accuracy; (b) changes in loss

    Table 1.  Four types of samples in PHD

    Class | Image | Age | Sex | CP | TRBP/kPa | SC/(mg·dL⁻¹) | FBS/(mg·dL⁻¹) | RER | MHR/(times·min⁻¹) | EIA | ST/mV | SP | NV | Thal
    PH    | —     | 66  | 0   | 3  | 20       | 226          | 0             | 1   | 114               | 0   | 2.6   | 0  | 0  | 2
    PNH   | —     | 54  | 1   | 0  | 14.7     | 239          | 0             | 1   | 126               | 1   | 2.8   | 1  | 1  | 3
    NPH   | —     | 65  | 0   | 2  | 20.7     | 269          | 0             | 1   | 148               | 0   | 0.8   | 2  | 0  | 2
    NPNH  | —     | 70  | 1   | 0  | 17.3     | 322          | 0             | 0   | 109               | 0   | 2.4   | 1  | 3  | 2

    Table 2.  Prediction results learned only from image data on the PHD dataset

    Prediction \ Label | No pneumonia | Pneumonia | All
    No pneumonia       | 79           | 11        | 90
    Pneumonia          | 12           | 80        | 92
    All                | 91           | 91        | 182

    Table 3.  Prediction results learned only from structured numerical data on the PHD dataset

    Prediction \ Label | No pneumonia | Pneumonia | All
    No pneumonia       | 72           | 16        | 88
    Pneumonia          | 11           | 83        | 94
    All                | 83           | 99        | 182

    Table 4.  Prediction results of the proposed method on the PHD dataset

    Prediction \ Label | NPNH | NPH | PNH | PH | All
    NPNH               | 33   | 12  | 5   | 2  | 52
    NPH                | 8    | 29  | 0   | 2  | 39
    PNH                | 2    | 2   | 24  | 10 | 38
    PH                 | 2    | 3   | 9   | 39 | 53
    All                | 45   | 46  | 38  | 53 | 182

    Table 5.  Accuracy and other evaluation indicators of the three groups of experiments on the PHD dataset

    Model                          | Class            | TP | FP | FN | Precision | Recall | F1-score | Accuracy
    Fusion method                  | NPNH             | 33 | 19 | 12 | 0.635     | 0.733  | 0.680    | 0.687
    Fusion method                  | NPH              | 29 | 10 | 17 | 0.744     | 0.630  | 0.682    |
    Fusion method                  | PNH              | 24 | 14 | 14 | 0.632     | 0.632  | 0.632    |
    Fusion method                  | PH               | 39 | 14 | 14 | 0.736     | 0.736  | 0.736    |
    ShuffleNetv2 (only image data) | No pneumonia     | 79 | 11 | 12 | 0.878     | 0.868  | 0.873    | 0.874
    ShuffleNetv2 (only image data) | Pneumonia        | 80 | 12 | 11 | 0.870     | 0.879  | 0.874    |
    DNN (only structured data)     | No heart disease | 72 | 16 | 11 | 0.818     | 0.867  | 0.842    | 0.852
    DNN (only structured data)     | Heart disease    | 83 | 11 | 16 | 0.883     | 0.838  | 0.860    |
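The per-class indicators in Table 5 follow the standard definitions computed from TP/FP/FN counts. A minimal check of those definitions, using the PH row of Table 5 as the example:

```python
def prf(tp, fp, fn):
    # Standard per-class metrics from true-positive, false-positive,
    # and false-negative counts, as reported in the tables above.
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)
    return precision, recall, f1

# PH row of Table 5: TP = 39, FP = 14, FN = 14
p, r, f1 = prf(39, 14, 14)
print(round(p, 3), round(r, 3), round(f1, 3))  # 0.736 0.736 0.736
```

When FP equals FN, precision equals recall, so F1 collapses to the same value, as the PH and PNH rows show.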

    Table 6.  Prediction results learned only from image data on the COVID dataset

    Prediction \ Label | NonCOVID | COVID | All
    NonCOVID           | 55       | 20    | 75
    COVID              | 14       | 49    | 63
    All                | 69       | 69    | 138

    Table 7.  Prediction results learned only from structured numerical data on the COVID dataset

    Prediction \ Label | NonCOVID | COVID | All
    NonCOVID           | 53       | 0     | 53
    COVID              | 16       | 69    | 85
    All                | 69       | 69    | 138

    Table 8.  Prediction results of the proposed method on the COVID dataset

    Prediction \ Label | NonCOVID | COVID | All
    NonCOVID           | 65       | 4     | 69
    COVID              | 4        | 65    | 69
    All                | 69       | 69    | 138
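The overall accuracy reported alongside each confusion matrix is the sum of the diagonal (correct predictions) divided by the total sample count. For the matrix in Table 8:

```python
# Rows are predictions, columns are labels, as in Table 8.
confusion = [
    [65, 4],   # predicted NonCOVID
    [4, 65],   # predicted COVID
]
correct = sum(confusion[i][i] for i in range(len(confusion)))
total = sum(sum(row) for row in confusion)
accuracy = correct / total
print(round(accuracy, 3))  # 0.942
```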

    Table 9.  Accuracy and other evaluation indicators of the three groups of experiments on the COVID dataset

    Model                          | Class    | TP | FP | FN | Precision | Recall | F1-score | Accuracy
    Fusion method                  | NonCOVID | 65 | 4  | 4  | 0.942     | 0.942  | 0.942    | 0.942
    Fusion method                  | COVID    | 65 | 4  | 4  | 0.942     | 0.942  | 0.942    |
    ShuffleNetv2 (only image data) | NonCOVID | 55 | 20 | 14 | 0.733     | 0.797  | 0.764    | 0.754
    ShuffleNetv2 (only image data) | COVID    | 49 | 14 | 20 | 0.778     | 0.710  | 0.742    |
    DNN (only structured data)     | NonCOVID | 53 | 0  | 16 | 1.000     | 0.768  | 0.869    | 0.884
    DNN (only structured data)     | COVID    | 69 | 16 | 0  | 0.812     | 1.000  | 0.896    |

    Table 10.  Time required to classify 138 samples using the proposed method and using only image data

    Model | Proposed method | Image only
    Time  | 3.58            | 3.56

    Table 11.  Accuracy and other evaluation indicators of the fusion method, ResNet50, VGG16, ShuffleNetv2, and AlexNet

    Model         | Class    | TP | FP | FN | Precision | Recall | F1-score | Accuracy
    Fusion method | NonCOVID | 65 | 4  | 4  | 0.942     | 0.942  | 0.942    | 0.942
    Fusion method | COVID    | 65 | 4  | 4  | 0.942     | 0.942  | 0.942    |
    ResNet50      | NonCOVID | 56 | 15 | 13 | 0.789     | 0.812  | 0.800    | 0.797
    ResNet50      | COVID    | 54 | 13 | 15 | 0.806     | 0.783  | 0.794    |
    VGG16         | NonCOVID | 54 | 16 | 15 | 0.771     | 0.783  | 0.777    | 0.775
    VGG16         | COVID    | 53 | 15 | 16 | 0.779     | 0.768  | 0.774    |
    ShuffleNetv2  | NonCOVID | 55 | 20 | 14 | 0.733     | 0.797  | 0.764    | 0.754
    ShuffleNetv2  | COVID    | 49 | 14 | 20 | 0.778     | 0.710  | 0.742    |
    AlexNet       | NonCOVID | 50 | 18 | 19 | 0.735     | 0.725  | 0.730    | 0.732
    AlexNet       | COVID    | 51 | 19 | 18 | 0.728     | 0.739  | 0.734    |
Figures (5) / Tables (11)
Publication history
  • Received:  2021-01-12
  • Available online:  2021-03-01
  • Published:  2021-09-18
