  • Indexed in the Engineering Index (EI)
  • Chinese Core Journal
  • Source journal for Chinese Science and Technology Paper Statistics
  • Source journal of the Chinese Science Citation Database


Incremental learning of material absorption coefficient regression based on parameter penalty and experience replay

WANG Hong-ye, QIAN Quan, WU Xing

Citation: WANG Hong-ye, QIAN Quan, WU Xing. Incremental learning of material absorption coefficient regression based on parameter penalty and experience replay[J]. Chinese Journal of Engineering, 2023, 45(7): 1225-1231. doi: 10.13374/j.issn2095-9389.2022.05.03.006


doi: 10.13374/j.issn2095-9389.2022.05.03.006
Funding: National Key Research and Development Program of China (2022YFB3707800); Major Science and Technology Project of Yunnan Province (202102AB080019-3, 202002AB080001-2); Key Research Project of Zhejiang Lab (2021PE0AC02); Major Project of the Special Development Funds of the Shanghai Zhangjiang National Independent Innovation Demonstration Zone (ZJ2021-ZD-006)
    Corresponding author: E-mail: xingwu@shu.edu.cn

  • CLC number: TG142.71

Incremental learning of material absorption coefficient regression based on parameter penalty and experience replay

  • Abstract: Material data are prepared in batches and in stages, and the data distribution differs from batch to batch. When a neural network learns material data batch by batch, its average accuracy drops as batches accumulate, which poses a major challenge for applying artificial intelligence to materials science. To address this problem, incremental learning is applied to material data. By analyzing how model parameters change, a parameter penalty mechanism is established to restrain the model from overfitting new data while learning it. To enhance the diversity of the sample space, an experience replay method is proposed for incremental learning, in which new data are trained jointly with old data sampled from a buffer pool. The proposed methods are further applied to material sound absorption coefficient regression and to image classification. Experimental results show that with incremental learning, average accuracy improves by 45.93% and 2.62% on the two tasks, respectively, while average forgetting decreases by 2.25% and 7.54%. In addition, the influence of the specific parameters of the parameter penalty and experience replay methods on average accuracy is analyzed: average accuracy increases as the replay ratio grows, and first increases then decreases as the penalty coefficient grows. In summary, the proposed method can learn across modalities and tasks, its parameters can be set flexibly for different environments and tasks, and it offers a feasible scheme for incremental learning of material data.
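The two mechanisms described in the abstract can be sketched in a few lines. The following is a minimal illustration on synthetic linear-regression data, not the paper's implementation: the model, the quadratic form of the penalty, the replay ratio of 0.3, and the uniform buffer sampling are all assumptions made for the sketch.

```python
import numpy as np

rng = np.random.default_rng(0)

def train_batch(X, y, w, w_old, lam=0.0, lr=0.01, epochs=300):
    """Gradient descent on MSE plus the parameter penalty lam * ||w - w_old||^2,
    which discourages drifting far from the weights learned on earlier batches."""
    for _ in range(epochs):
        grad = 2.0 * X.T @ (X @ w - y) / len(X) + 2.0 * lam * (w - w_old)
        w = w - lr * grad
    return w

# Two synthetic "batches" of material data with shifted input distributions.
true_w = np.array([1.0, 2.0, -1.0])
X1 = rng.normal(0.0, 1.0, (100, 3)); y1 = X1 @ true_w
X2 = rng.normal(2.0, 1.0, (100, 3)); y2 = X2 @ true_w

# Batch 1: ordinary training (no penalty, nothing to replay yet).
w1 = train_batch(X1, y1, np.zeros(3), np.zeros(3), lam=0.0)

# Batch 2 with experience replay: mix old samples drawn from a buffer
# into the new batch, and anchor the parameters to w1 via the penalty.
replay_idx = rng.choice(len(X1), size=30, replace=False)  # replay ratio 0.3
X_mix = np.vstack([X2, X1[replay_idx]])
y_mix = np.concatenate([y2, y1[replay_idx]])
w2 = train_batch(X_mix, y_mix, w1, w1, lam=0.1)

# Forgetting: error on the first batch after learning the second.
mse_old = np.mean((X1 @ w2 - y1) ** 2)
```

Without the penalty and replay terms, training on the shifted second batch alone would be free to move the weights wherever the new data pulls them; the two mechanisms jointly keep the error on the first batch low.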

     

  • Figure 1.  Structure of the porous sound-absorbing material

    Figure 2.  Batch-by-batch incremental learning results for material data under different methods: (a) average accuracy; (b) average forgetting; (c) forward transfer; (d) backward transfer

    Figure 3.  Average accuracy on the material sound absorption coefficient regression task under different settings: (a) average accuracy for different parameters of the experience replay method; (b) average accuracy for different parameters of the parameter penalty method

    Table 1.  Mean values of the evaluation metrics for the four sets of experiments conducted on CIFAR-10

    Method | Average accuracy | Average forgetting | Backward transfer | Forward transfer
    Base   | 0.7278           | 11.2200            | 0.5579            | 0.4787
    PP     | 0.7364           |  8.1500            | 0.5397            | 0.4630
    ER     | 0.7392           |  8.0600            | 0.5619            | 0.4645
    PPER   | 0.7540           |  3.6800            | 0.5861            | 0.4779
    LWF    | 0.6378           |  4.4200            | 0.5297            | 0.4431
    MAS    | 0.6397           | 24.9000            | 0.4566            | 0.4270
  • [1] Liang L S, Guo W L, Ma H Y, et al. Research progress of sound absorption performance prediction and sound absorption model of porous sound-absorbing materials. Mater Rep, 2022(23): 1
    [2] Ciaburro G, Iannace G, Ali M, et al. An artificial neural network approach to modelling absorbent asphalts acoustic properties. J King Saud Univ Eng Sci, 2021, 33(4): 213
    [3] Iannace G, Ciaburro G, Trematerra A. Modelling sound absorption properties of broom fibers using artificial neural networks. Appl Acous, 2020, 163: 107239 doi: 10.1016/j.apacoust.2020.107239
    [4] Zhai T T, Gao Y, Zhu J W. Survey of online learning algorithms for streaming data classification. J Softw, 2020, 31(4): 912 doi: 10.13328/j.cnki.jos.005916
    [5] Dong J Y, Yang X Y. Integration and optimization of material data mining and machine learning tools. Front Data & Comput, 2020, 2(4): 105
    [6] Kirkpatrick J, Pascanu R, Rabinowitz N, et al. Overcoming catastrophic forgetting in neural networks. PNAS, 2017, 114(13): 3521 doi: 10.1073/pnas.1611835114
    [7] Mai Z D, Li R W, Jeong J, et al. Online continual learning in image classification: An empirical survey. Neurocomputing, 2022, 469: 28 doi: 10.1016/j.neucom.2021.10.021
    [8] Parisi G I, Kemker R, Part J L, et al. Continual lifelong learning with neural networks: A review. Neural Netw, 2019, 113: 54 doi: 10.1016/j.neunet.2019.01.012
    [9] Li Z Z, Hoiem D. Learning without forgetting. IEEE Trans Pattern Anal Mach Intell, 2018, 40(12): 2935 doi: 10.1109/TPAMI.2017.2773081
    [10] Zenke F, Poole B, Ganguli S. Continual learning through synaptic intelligence // Proceedings of the 34th International Conference on Machine Learning. Sydney, 2017: 3987
    [11] Chaudhry A, Dokania P K, Ajanthan T, et al. Riemannian walk for incremental learning: Understanding forgetting and intransigence // European Conference on Computer Vision. Munich, 2018: 556
    [12] Rebuffi S A, Kolesnikov A, Sperl G, et al. iCaRL: Incremental classifier and representation learning // Conference on Computer Vision and Pattern Recognition. Honolulu, 2017: 5533
    [13] Aljundi R, Caccia L, Belilovsky E, et al. Online continual learning with maximally interfered retrieval // Proceedings of the 33rd International Conference on Neural Information Processing Systems. Vancouver, 2019: 11872
    [14] Aljundi R, Lin M, Goujaud B, et al. Gradient based sample selection for online continual learning // Proceedings of the 33rd International Conference on Neural Information Processing Systems. Vancouver, 2019: 11817
    [15] Prabhu A, Torr P H S, Dokania P K. GDumb: A simple approach that questions our progress in continual learning // European Conference on Computer Vision. Glasgow, 2020: 524
    [16] Mallya A, Lazebnik S. PackNet: Adding multiple tasks to a single network by iterative pruning // 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition. Salt Lake City, 2018: 7765
    [17] Li X L, Zhou Y, Wu T, et al. Learn to grow: A continual structure learning framework for overcoming catastrophic forgetting // International Conference on Machine Learning. Long Beach, 2019: 3925
    [18] Lange M D, Aljundi R, Masana M, et al. A continual learning survey: Defying forgetting in classification tasks. IEEE Trans Pattern Anal Mach Intell, 2022, 44(7): 3366
    [19] Mai Z D, Li R W, Kim H, et al. Supervised contrastive replay: Revisiting the nearest class mean classifier in online class-incremental continual learning // Conference on Computer Vision and Pattern Recognition. Online, 2021: 1177
    [20] Hayes T L, Cahill N D, Kanan C. Memory efficient experience replay for streaming learning // International Conference on Robotics and Automation. Montreal, 2019: 9769
    [21] Liu Y Y, Su Y T, Liu A N, et al. Mnemonics training: Multi-class incremental learning without forgetting // 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition. Seattle, 2020: 12242
    [22] Chaudhry A, Dokania P K, Ajanthan T, et al. Riemannian walk for incremental learning: Understanding forgetting and intransigence // Proceedings of the European Conference on Computer Vision. Munich, 2018: 556
    [23] Lesort T, Lomonaco V, Stoian A, et al. Continual learning for robotics: Definition, framework, learning strategies, opportunities and challenges. Inf Fusion, 2020, 58: 52 doi: 10.1016/j.inffus.2019.12.004
    [24] Lopez-Paz D, Ranzato M A. Gradient episodic memory for continual learning // Proceedings of the 31st International Conference on Neural Information Processing Systems. Long Beach, 2017: 6470
    [25] Aljundi R, Babiloni F, Elhoseiny M, et al. Memory aware synapses: Learning what (not) to forget // Proceedings of the European Conference on Computer Vision. Munich, 2018: 144
Figures (3) / Tables (1)
Metrics
  • Article views: 286
  • Full-text HTML views: 164
  • PDF downloads: 40
  • Citations: 0
Publication history
  • Received: 2022-05-03
  • Published online: 2022-09-13
  • Issue published: 2023-07-25
