NCU Institutional Repository (中大機構典藏) - Item 987654321/93276


    Please use the permanent URL to cite or link to this item: http://ir.lib.ncu.edu.tw/handle/987654321/93276


    Title: Realizing Synchronized Parameter Updating, Dynamic Layer Accumulation, and Forward Shortcuts in Supervised Contrastive Parallel Learning (Chinese title: 實現監督對比平行學習的參數同步更新、動態累加層、及前向捷徑)
    Author: Ho, Ming-Yao (何名曜)
    Contributor: Department of Computer Science and Information Engineering
    Keywords: Backpropagation; Backward Locking; Supervised Contrastive Loss; Pipeline; Parallel Learning; Model Parallelism; Forward Shortcut; Early Exit; Dynamic Layer Accumulation; Supervised Contrastive Parallel Learning
    Date: 2023-07-25
    Upload time: 2024-09-19 16:52:05 (UTC+8)
    Publisher: National Central University
    Abstract: End-to-end backpropagation (BP) is a cornerstone of modern deep learning. However, as networks grow deeper, training them with BP becomes challenging. Supervised Contrastive Parallel Learning (SCPL) is a novel approach that decouples BP through multiple local training objectives and supervised contrastive learning. It transforms the original deep network's single long gradient flow into multiple short gradient flows and trains the parameters of different layers independently through a pipelined design, achieving faster training than BP by removing the inefficiency caused by backward locking during the backward pass.

    However, the original SCPL paper neither implemented parallel parameter training in practice nor explored the method in natural language processing (NLP). This thesis fills both gaps and examines the accuracy and parallel training time of SCPL on datasets in both the vision and NLP domains, demonstrating that SCPL can serve as an alternative to BP in both fields. In addition, the research yields an improved SCPL architecture that supports dynamic layer accumulation, forward shortcuts, and early exits; this new architecture is named Dynamic Accumulated Supervised Contrastive Parallel Learning (DASCPL). Building on these capabilities, DASCPL offers greater flexibility and adaptability than SCPL while maintaining the same learning ability.
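    The abstract describes the core SCPL mechanism: each block of the network is trained against its own supervised contrastive objective, so BP's single long gradient flow is cut into short, block-local flows that can be updated independently and pipelined across devices. The sketch below is only a minimal illustration of that decoupling idea; all module names, dimensions, and the exact loss formulation are assumptions for illustration, not the thesis implementation, and the pipelined scheduling of parameter updates across devices is omitted.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


def supervised_contrastive_loss(features, labels, temperature=0.1):
    """Pull together same-label samples, push apart different-label samples."""
    features = F.normalize(features, dim=1)
    logits = features @ features.T / temperature              # pairwise cosine similarities
    self_mask = torch.eye(len(labels), dtype=torch.bool, device=logits.device)
    pos_mask = (labels.unsqueeze(0) == labels.unsqueeze(1)).float()
    pos_mask = pos_mask.masked_fill(self_mask, 0.0)           # positives exclude the anchor itself
    log_prob = logits - torch.logsumexp(
        logits.masked_fill(self_mask, float("-inf")), dim=1, keepdim=True)
    pos_per_anchor = pos_mask.sum(dim=1).clamp(min=1)         # avoid division by zero
    return -((pos_mask * log_prob).sum(dim=1) / pos_per_anchor).mean()


class LocalBlock(nn.Module):
    """One trainable block with its own projection head and local objective
    (names and sizes are illustrative assumptions)."""

    def __init__(self, in_dim, out_dim, proj_dim=128):
        super().__init__()
        self.body = nn.Sequential(nn.Linear(in_dim, out_dim), nn.ReLU())
        self.head = nn.Linear(out_dim, proj_dim)              # block-local projection head

    def forward(self, x):
        return self.body(x)

    def local_loss(self, h, labels):
        return supervised_contrastive_loss(self.head(h), labels)


def train_step(blocks, optimizers, x, labels):
    """Each block is updated by its own short gradient flow; detach() stops
    gradients at block boundaries, which is what makes it possible to train
    the blocks concurrently as pipeline stages (that scheduling is not shown)."""
    h = x
    for block, opt in zip(blocks, optimizers):
        h = block(h.detach())                                 # cut the gradient between blocks
        loss = block.local_loss(h, labels)
        opt.zero_grad()
        loss.backward()                                       # block-local backward pass only
        opt.step()


# Toy usage with assumed dimensions:
# blocks = [LocalBlock(784, 256), LocalBlock(256, 256), LocalBlock(256, 128)]
# optimizers = [torch.optim.SGD(b.parameters(), lr=1e-2) for b in blocks]
# train_step(blocks, optimizers, torch.randn(32, 784), torch.randint(0, 10, (32,)))
```

    Under the same assumptions, the dynamic layer accumulation, forward shortcuts, and early exits described for DASCPL would amount to deciding, per sample or per training stage, how many such blocks are traversed; that control logic is not shown here.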
    Appears in Collections: [Graduate Institute of Computer Science and Information Engineering] Theses & Dissertations

    Files in this item:

    File          Description    Size    Format    Views
    index.html                   0Kb     HTML      20


    All items in NCUIR are protected by copyright, with all rights reserved.
