Abstract: Sleep staging is an essential basis for evaluating sleep quality. Most existing work relies on fully supervised learning and information from a single view, which not only requires technicians to label large volumes of sleep data but can also limit staging accuracy because features are extracted inadequately. A semi-supervised learning strategy is therefore adopted to learn from unlabeled EEG data. A multi-view hybrid neural network is proposed: a multichannel-view time-frequency mechanism first extracts time-domain and frequency-domain signal features to achieve multi-view feature extraction; an attention mechanism then strengthens the extraction of salient features; finally, the resulting hybrid features are fused and classified. Compared with fully supervised learning on three public datasets and one private dataset, the semi-supervised approach achieves an average accuracy of 81.0% and a kappa of 73.2%. The results show that the proposed model is comparable to fully supervised sleep staging models while substantially reducing the technicians' data-labeling workload.

Abstract: Sleep takes approximately one third of a person's lifetime; therefore, its quality profoundly affects learning, physical recovery, and metabolism. Clinically relevant human physiological data are collected using polysomnography, which sleep technologists analyze to determine sleep stages. However, this manual method imposes a heavy workload because of the large amount of data to analyze and the variety of data formats. Moreover, manually scored results are influenced by the clinician's experience, which may lead to inconsistent diagnoses. Recently, with the development of artificial intelligence, computer science, and related interdisciplinary fields, a series of notable achievements have been made in intelligent diagnosis, laying the foundation for medical artificial intelligence in sleep medicine. In sleep research, automatic analysis and recognition of sleep signals assists doctors in diagnosis and reduces their workload, and thus has important clinical significance and application value. Although deep neural networks are becoming popular for automatic sleep stage classification with supervised learning, large-scale labeled datasets remain difficult to acquire. Learning from raw polysomnography signals and derived time-frequency image representations has been an interesting solution. However, extracting features from only a single dimension leads to inadequate feature extraction and, thus, limited accuracy. Hence, this paper aims to learn multi-view representations of physiological signals with semi-supervised learning. Specifically, we make the following contributions: (1) We propose a multi-view hybrid neural network model comprising a multichannel-view time-frequency domain feature extraction mechanism, an attention mechanism, and a feature fusion module. The multichannel-view time-frequency domain mechanism extracts time-domain and frequency-domain signal features to achieve multi-view feature extraction; the attention mechanism module enhances salient features and achieves interclass feature extraction in the frequency domain; and the feature fusion module fuses and classifies the above features. (2) A semi-supervised learning strategy is used to learn from unlabeled electroencephalogram (EEG) data, which addresses the underutilization of sleep data caused by insufficient labeling of EEG signals in clinical practice. (3) Extensive experiments on sleep stage classification demonstrate state-of-the-art performance compared with supervised learning and a semi-supervised baseline. Experimental results on three public databases (Sleep-EDF, DOD-H, and DOD-O) and one private database show that our semi-supervised method achieves accuracies of 81.6%, 81.5%, 79.2%, and 75.4%, respectively. The results show that the proposed model is comparable to a fully supervised sleep staging model while substantially reducing the technician's data-labeling workload.
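To make the architecture described above more concrete, the following is a minimal PyTorch sketch of a multi-view hybrid network: a 1-D convolutional branch for the raw-signal (time-domain) view, a 2-D convolutional branch for a time-frequency image view, a gating attention layer that re-weights salient features, and a fusion head for five-stage classification. All module names (TimeBranch, FreqBranch, MultiViewHybridNet), layer sizes, and the spectrogram resolution are illustrative assumptions, not the authors' exact configuration.

```python
# Illustrative sketch only: a multi-view hybrid network with a time-domain
# branch, a time-frequency branch, gating attention, and feature fusion.
# Layer sizes and names are assumptions, not the paper's exact architecture.
import torch
import torch.nn as nn


class TimeBranch(nn.Module):
    """Extracts features from the raw 30-s EEG epoch (time-domain view)."""
    def __init__(self, in_channels: int = 1, feat_dim: int = 128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(in_channels, 32, kernel_size=50, stride=6), nn.ReLU(),
            nn.MaxPool1d(8),
            nn.Conv1d(32, 64, kernel_size=8), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),
        )
        self.proj = nn.Linear(64, feat_dim)

    def forward(self, x):                      # x: (batch, 1, samples)
        return self.proj(self.net(x).flatten(1))


class FreqBranch(nn.Module):
    """Extracts features from a time-frequency image (e.g., a spectrogram)."""
    def __init__(self, feat_dim: int = 128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.proj = nn.Linear(32, feat_dim)

    def forward(self, s):                      # s: (batch, 1, freq, time)
        return self.proj(self.net(s).flatten(1))


class MultiViewHybridNet(nn.Module):
    """Fuses both views after attention re-weighting and classifies 5 stages."""
    def __init__(self, feat_dim: int = 128, n_classes: int = 5):
        super().__init__()
        self.time_branch = TimeBranch(feat_dim=feat_dim)
        self.freq_branch = FreqBranch(feat_dim=feat_dim)
        # Simple gating attention over the concatenated multi-view features.
        self.attention = nn.Sequential(
            nn.Linear(2 * feat_dim, 2 * feat_dim), nn.Sigmoid())
        self.classifier = nn.Linear(2 * feat_dim, n_classes)

    def forward(self, x_time, x_freq):
        fused = torch.cat([self.time_branch(x_time),
                           self.freq_branch(x_freq)], dim=1)
        fused = fused * self.attention(fused)  # emphasize salient features
        return self.classifier(fused)


# Example: a 30-s epoch sampled at 100 Hz plus an assumed 64x64 spectrogram.
logits = MultiViewHybridNet()(torch.randn(2, 1, 3000), torch.randn(2, 1, 64, 64))
print(logits.shape)  # torch.Size([2, 5])
```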
Figure 6. Confusion matrices of the multi-view hybrid neural network: (a) S: supervised learning on the Sleep-EDF dataset; (b) S: supervised learning on the DOD-H dataset; (c) S: supervised learning on the DOD-O dataset; (d) S: supervised learning on the private dataset; (e) SS: semi-supervised learning on the Sleep-EDF dataset; (f) SS: semi-supervised learning on the DOD-H dataset; (g) SS: semi-supervised learning on the DOD-O dataset; (h) SS: semi-supervised learning on the private dataset
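The overall metrics reported in Tables 2 to 6 (accuracy, Cohen's kappa, macro F1, sensitivity, and specificity) can all be derived from confusion matrices such as those in Figure 6. Below is a minimal NumPy sketch, assuming rows hold the true stages and columns the predicted stages, and that sensitivity and specificity are macro-averaged over the five classes; the paper's exact averaging convention may differ.

```python
# Minimal sketch: overall metrics from a 5x5 confusion matrix
# (rows = true stages W, N1, N2, N3, REM; columns = predicted stages).
# Macro-averaging of sensitivity/specificity is an assumption here.
import numpy as np

def staging_metrics(cm: np.ndarray) -> dict:
    cm = cm.astype(float)
    n = cm.sum()
    tp = np.diag(cm)
    fp = cm.sum(axis=0) - tp          # predicted as class k but actually not
    fn = cm.sum(axis=1) - tp          # class k epochs that were missed
    tn = n - tp - fp - fn

    acc = tp.sum() / n                               # overall accuracy (p_o)
    p_e = (cm.sum(axis=0) * cm.sum(axis=1)).sum() / n**2
    kappa = (acc - p_e) / (1 - p_e)                  # Cohen's kappa

    precision = tp / np.maximum(tp + fp, 1e-12)
    recall = tp / np.maximum(tp + fn, 1e-12)         # per-class sensitivity
    specificity = tn / np.maximum(tn + fp, 1e-12)
    f1 = 2 * precision * recall / np.maximum(precision + recall, 1e-12)

    return {"ACC": acc, "kappa": kappa, "MF1": f1.mean(),
            "Sens": recall.mean(), "Spec": specificity.mean(),
            "class_F1": f1}

# Toy example with an arbitrary confusion matrix:
cm = np.array([[90,  5,   3,  1,   1],
               [10, 40,  25,  2,   8],
               [ 4, 12, 300, 20,  10],
               [ 1,  1,  30, 80,   0],
               [ 2,  6,  15,  1, 100]])
print(staging_metrics(cm))
```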
Table 1. Summary of the sleep datasets
Dataset   | Size | Sampling rate (Hz) | EEG channel | Scoring rule | Epochs: W | N1   | N2    | N3   | REM  | Total
Sleep-EDF | 40   | 100                | Fpz-Cz      | R&K          | 4423      | 3653 | 19851 | 6415 | 8349 | 42691
DOD-H     | 25   | 250                | F3_F4       | AASM         | 3037      | 1505 | 11879 | 3514 | 4727 | 24665
DOD-O     | 55   | 250                | F3_F4       | AASM         | 10660     | 2898 | 26650 | 5683 | 8306 | 54197
Private   | 9    | 128                | F3-M2       | AASM         | 1042      | 956  | 3938  | 1557 | 1457 | 8950
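Because only part of the epochs summarized in Table 1 are labeled in clinical practice, the semi-supervised strategy can be illustrated with a FixMatch-style update (in the spirit of reference [25]): confident predictions on unlabeled epochs are converted into pseudo-labels and combined with the supervised loss. The confidence threshold, loss weight, noise "augmentation", and the single-input classifier interface are simplifying assumptions for illustration, not the paper's exact training recipe.

```python
# Sketch of one FixMatch-style semi-supervised update (after ref. [25]).
# Threshold, unlabeled-loss weight, and the Gaussian-noise perturbation
# are illustrative assumptions; the model is a generic single-input classifier.
import torch
import torch.nn.functional as F

def semi_supervised_step(model, optimizer, x_lab, y_lab, x_unlab,
                         threshold: float = 0.95, lambda_u: float = 1.0):
    model.train()
    optimizer.zero_grad()

    # Supervised loss on the labeled epochs.
    loss_sup = F.cross_entropy(model(x_lab), y_lab)

    # Pseudo-labels from predictions on the unlabeled epochs.
    with torch.no_grad():
        probs = F.softmax(model(x_unlab), dim=1)
        conf, pseudo = probs.max(dim=1)
        mask = conf >= threshold          # keep only confident predictions

    # Consistency loss on a perturbed view of the same unlabeled epochs.
    x_strong = x_unlab + 0.05 * torch.randn_like(x_unlab)
    loss_unsup = (F.cross_entropy(model(x_strong), pseudo, reduction="none")
                  * mask.float()).mean()

    loss = loss_sup + lambda_u * loss_unsup
    loss.backward()
    optimizer.step()
    return loss.item()
```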
Table 2. Experimental results on the Sleep-EDF dataset
System           | ACC/% | κ/%  | MF1/% | Sens./% | Spec./% | Class-wise F1/%: W | N1   | N2   | N3   | REM
S: Ours          | 83.6  | 75.3 | 79.4  | 76.6    | 87.9    | 86.5 | 45.3 | 88.3 | 83.2 | 81.2
SS: Ours         | 81.6  | 71.5 | 74.8  | 74.7    | 86.3    | 84.3 | 35.3 | 87.2 | 80.4 | 80.3
S: XSleepNet2    | 83.9  | 77.1 | 78.7  | 78.6    | 95.5    | -    | -    | -    | -    | -
S: XSleepNet1    | 82.2  | 75.0 | 76.4  | 76.8    | 95.2    | -    | -    | -    | -    | -
S: SleepStageNet | 67.8  | 59.1 | 68.5  | -       | -       | -    | -    | -    | -    | -
S: SeqSleepNet   | 82.2  | 74.6 | 74.1  | 73.9    | 95.0    | 89.5 | 44.3 | 84.2 | 75.3 | 77.5
S: DeepSleepNet  | 79.8  | 72.0 | 73.1  | -       | -       | 88.1 | 37.0 | 82.7 | 77.3 | 80.3
S: HATSN         | 80.1  | 75.2 | 79.9  | -       | -       | -    | -    | -    | -    | -
SS: SleepPriorCL | 76.4  | 65.5 | -     | -       | -       | -    | -    | -    | -    | -
SS: SleepDPC     | 76.4  | 62.8 | -     | -       | -       | -    | -    | -    | -    | -
Table 3. Experimental results on the DOD-H dataset
System                   | ACC/% | κ/%  | MF1/% | Sens./% | Spec./% | Class-wise F1/%: W | N1   | N2   | N3   | REM
S: Ours                  | 82.4  | 73.5 | 76.2  | 79.3    | 80.2    | 75.1 | 50.2 | 87.3 | 87.2 | 72.5
SS: Ours                 | 79.2  | 67.0 | 74.4  | 75.2    | 86.9    | 77.3 | 48.0 | 84.7 | 87.3 | 78.4
S: SeqSleepNet           | 82.2  | 74.6 | 74.1  | 73.9    | 95.0    | 89.5 | 44.3 | 84.2 | 75.3 | 77.5
S: DeepSleepNet          | 79.8  | 68.2 | 79.8  | -       | -       | 78.6 | 37.0 | 82.7 | 77.3 | 80.3
S: Tsinalis et al. (CNN) | 83.6  | 83.5 | 83.5  | -       | -       | 82.6 | 38.8 | 87.6 | 62.0 | 74.4
S: Mixed NN              | 82.1  | 72.5 | 81.9  | -       | -       | 84.8 | 49.7 | 85.6 | 61.8 | 71.8
Table 4. Experimental results on the DOD-O dataset
System                   | ACC/% | κ/%  | MF1/% | Sens./% | Spec./% | Class-wise F1/%: W | N1   | N2   | N3   | REM
S: Ours                  | 82.4  | 73.5 | 76.2  | 79.3    | 80.2    | 75.1 | 50.2 | 87.3 | 87.2 | 72.5
SS: Ours                 | 79.2  | 67.0 | 74.4  | 75.2    | 86.9    | 77.3 | 48.0 | 84.7 | 87.3 | 78.4
S: SeqSleepNet           | 82.2  | 74.6 | 74.1  | 73.9    | 95.0    | 89.5 | 44.3 | 84.2 | 75.3 | 77.5
S: DeepSleepNet          | 79.8  | 68.2 | 79.8  | -       | -       | 78.6 | 37.0 | 82.7 | 77.3 | 80.3
S: Tsinalis et al. (CNN) | 83.6  | 83.5 | 83.5  | -       | -       | 82.6 | 38.8 | 87.6 | 62.0 | 74.4
S: Mixed NN              | 82.1  | 72.5 | 81.9  | -       | -       | 84.8 | 49.7 | 85.6 | 61.8 | 71.8
Table 5. Experimental results on the private dataset
System         | ACC/% | κ/%  | MF1/% | Sens./% | Spec./% | Class-wise F1/%: W | N1   | N2   | N3   | REM
S: Ours        | 77.2  | 70.3 | 76.3  | 75.7    | 85.4    | 85.4 | 58.0 | 78.3 | 88.9 | 70.8
SS: Ours       | 75.4  | 66.3 | 74.2  | 72.9    | 83.0    | 81.4 | 54.8 | 76.3 | 88.2 | 67.8
S: ResNet-50   | 77.5  | 70.4 | 77.3  | 76.4    | 78.6    | 86.3 | 58.5 | 76.7 | 88.2 | 69.2
S: ResNeXt-50  | 78.1  | 71.7 | 76.8  | 76.7    | 77.1    | 86.2 | 58.8 | 78.5 | 88.7 | 74.5
SS: ResNet-50  | 73.2  | 65.1 | 72.7  | 72.9    | 73.5    | 84.2 | 54.5 | 76.6 | 90.1 | 63.9
SS: ResNeXt-50 | 71.7  | 62.1 | 69.6  | 69.3    | 71.7    | 81.5 | 40.3 | 73.8 | 86.4 | 67.7
Table 6. Ablation results on the private dataset
System                     | ACC/% | κ/%  | MF1/% | Sens./% | Spec./% | Class-wise F1/%: W | N1   | N2   | N3   | REM
S: Ours without attention  | 72.3  | 63.5 | 71.7  | 71.9    | 71.7    | 83.2 | 53.6 | 71.4 | 84.8 | 67.7
SS: Ours without attention | 70.5  | 60.8 | 68.9  | 69.7    | 68.9    | 80.3 | 48.9 | 70.2 | 85.3 | 63.5
S: Ours                    | 77.2  | 70.3 | 76.3  | 75.7    | 85.4    | 85.4 | 58.0 | 78.3 | 88.9 | 70.8
SS: Ours                   | 75.4  | 66.3 | 74.2  | 72.9    | 83.0    | 81.4 | 44.8 | 76.3 | 88.2 | 67.8
References
[1] Maquet P. The role of sleep in learning and memory. Science, 2001, 294(5544): 1048. doi: 10.1126/science.1062856
[2] Berry R B, Brooks R, Gamaldo C, et al. AASM scoring manual updates for 2017 (version 2.4). J Clin Sleep Med, 2017, 13(5): 665. doi: 10.5664/jcsm.6576
[3] Zhang Y. Sensing, computing and intervention for sleep health. Chin Sci Bull, 2022, 67(1): 27
[4] Phan H, Andreotti F, Cooray N, et al. Joint classification and prediction CNN framework for automatic sleep stage classification. IEEE Trans Biomed Eng, 2019, 66(5): 1285. doi: 10.1109/TBME.2018.2872652
[5] Chambon S, Galtier M N, Arnal P J, et al. A deep learning architecture for temporal sleep stage classification using multivariate and multimodal time series. IEEE Trans Neural Syst Rehabil Eng, 2018, 26(4): 758. doi: 10.1109/TNSRE.2018.2813138
[6] Perslev M, Jensen M H, Darkner S, et al. U-Time: A fully convolutional network for time series segmentation applied to sleep staging // Proceedings of the 33rd International Conference on Neural Information Processing Systems. Vancouver, 2019: 1
[7] He K M, Zhang X Y, Ren S Q, et al. Deep residual learning for image recognition // IEEE Conference on Computer Vision and Pattern Recognition. Las Vegas, 2016: 770
[8] Zhang H, Wu C, Zhang Z, et al. ResNeSt: Split-attention networks [J/OL]. arXiv preprint (2020-04-19) [2022-03-22]. https://arxiv.org/abs/2004.08955
[9] Phan H, Andreotti F, Cooray N, et al. Automatic sleep stage classification using single-channel EEG: Learning sequential features with attention-based recurrent neural networks // 40th Annual International Conference of the IEEE Engineering in Medicine and Biology Society. Honolulu, 2018: 1452
[10] Phan H, Andreotti F, Cooray N, et al. SeqSleepNet: End-to-end hierarchical recurrent neural network for sequence-to-sequence automatic sleep staging. IEEE Trans Neural Syst Rehabil Eng, 2019, 27(3): 400. doi: 10.1109/TNSRE.2019.2896659
[11] Phan H, Chén O Y, Koch P, et al. Towards more accurate automatic sleep staging via deep transfer learning. IEEE Trans Biomed Eng, 2021, 68(6): 1787. doi: 10.1109/TBME.2020.3020381
[12] Supratak A, Dong H, Wu C, et al. DeepSleepNet: A model for automatic sleep stage scoring based on raw single-channel EEG. IEEE Trans Neural Syst Rehabil Eng, 2017, 25(11): 1998. doi: 10.1109/TNSRE.2017.2721116
[13] Guillot A, Thorey V. RobustSleepNet: Transfer learning for automated sleep staging at scale. IEEE Trans Neural Syst Rehabil Eng, 2021, 29: 1441
[14] Chen K, Zhang C, Ma J, et al. Sleep staging from single-channel EEG with multi-scale feature and contextual information. Sleep Breath, 2019, 23(4): 1159. doi: 10.1007/s11325-019-01789-4
[15] Jia Z Y, Lin Y F, Wang J, et al. SalientSleepNet: Multimodal salient wave detection network for sleep staging // Thirtieth International Joint Conference on Artificial Intelligence. Montreal, 2021: 2614
[16] Jin Z, Jia K B, Yuan Y. A hybrid attention temporal sequential network for sleep stage classification. J Biomed Eng, 2021, 38(2): 241. doi: 10.7507/1001-5515.202008006
[17] Phan H, Chén O Y, Tran M C, et al. XSleepNet: Multi-view sequential model for automatic sleep staging. IEEE Trans Pattern Anal Mach Intell, 2022, 44(9): 5903
[18] Fiorillo L, Favaro P, Faraci F D. DeepSleepNet-Lite: A simplified automatic sleep stage scoring model with uncertainty estimates. IEEE Trans Neural Syst Rehabil Eng, 2021, 29: 2076. doi: 10.1109/TNSRE.2021.3117970
[19] Dong H, Supratak A, Pan W, et al. Mixed neural network approach for temporal sleep stage classification. IEEE Trans Neural Syst Rehabil Eng, 2018, 26(2): 324. doi: 10.1109/TNSRE.2017.2733220
[20] Tsinalis O, Matthews P M, Guo Y K, et al. Automatic sleep stage scoring with single-channel EEG using convolutional neural networks [J/OL]. arXiv preprint (2016-10-05) [2022-03-22]. https://arxiv.org/abs/1610.01683
[21] Zhang H J, Wang J, Xiao Q F, et al. SleepPriorCL: Contrastive representation learning with prior knowledge-based positive mining and adaptive temperature for sleep staging [J/OL]. arXiv preprint (2021-10-15) [2022-03-22]. https://arxiv.org/abs/2110.09966
[22] Xiao Q, Wang J, Ye J, et al. Self-supervised learning for sleep stage classification with predictive and discriminative contrastive coding // 2021 IEEE International Conference on Acoustics, Speech and Signal Processing. Toronto, 2021: 1290
[23] Kemp B, Zwinderman A H, Tuk B, et al. Analysis of a sleep-dependent neuronal feedback loop: The slow-wave microcontinuity of the EEG. IEEE Trans Biomed Eng, 2000, 47(9): 1185. doi: 10.1109/10.867928
[24] Guillot A, Sauvet F, During E H, et al. Dreem open datasets: Multi-scored sleep datasets to compare human and automated sleep staging. IEEE Trans Neural Syst Rehabil Eng, 2020, 28(9): 1955. doi: 10.1109/TNSRE.2020.3011181
[25] Sohn K, Berthelot D, Li C, et al. FixMatch: Simplifying semi-supervised learning with consistency and confidence. Adv Neural Inf Process Syst, 2020, 33: 596
[26] Korkalainen H, Aakko J, Nikkonen S, et al. Accurate deep learning-based sleep staging in a clinical population with suspected obstructive sleep apnea. IEEE J Biomed Health Inform, 2020, 24(7): 2073
[27] Cui Y, Jia M L, Lin T Y, et al. Class-balanced loss based on effective number of samples // 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition. Long Beach, 2019: 9260
[28] Zhang C H, Yu W W, Li Y M, et al. CMS2-net: Semi-supervised sleep staging for diverse obstructive sleep apnea severity. IEEE J Biomed Health Inform, 2022, 26(7): 3447. doi: 10.1109/JBHI.2022.3156585
[29] Wu Z R, Xiong Y J, Yu S X, et al. Unsupervised feature learning via non-parametric instance discrimination // 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition. Salt Lake City, 2018: 3733