NCU Institutional Repository: Item 987654321/93276


    Please use this identifier to cite or link to this item: http://ir.lib.ncu.edu.tw/handle/987654321/93276


    Title: Realizing Synchronized Parameter Updating, Dynamic Layer Accumulation, and Forward Shortcuts in Supervised Contrastive Parallel Learning
    Authors: 何名曜;Ho, Ming-Yao
    Contributors: Department of Computer Science and Information Engineering
    Keywords: Backpropagation; Backward Locking; Supervised Contrastive Loss; Pipeline; Parallel Learning; Model Parallelism; Forward Shortcut; Early Exit; Dynamic Layer Accumulation; Supervised Contrastive Parallel Learning
    Date: 2023-07-25
    Issue Date: 2024-09-19 16:52:05 (UTC+8)
    Publisher: National Central University
    Abstract: End-to-End backpropagation (BP) is a cornerstone of modern deep learning. However, as networks grow deeper, training them with BP becomes challenging. Supervised Contrastive Parallel Learning (SCPL) is a novel approach that decouples BP through multiple local training objectives and supervised contrastive learning. It transforms the deep network's single long gradient flow into multiple short gradient flows and trains the parameters of different layers independently through a pipelined design, achieving faster training than BP by removing the inefficiency caused by backward locking in backpropagation.
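
    The record itself contains no code, but the decoupling idea above can be illustrated with a short, hypothetical PyTorch-style sketch. The names supcon_loss and LocalBlock, the loss formulation, and all hyperparameters below are illustrative assumptions, not the thesis's implementation: each block optimizes its own supervised contrastive objective and passes a detached activation forward, so no gradient crosses block boundaries and every block sees only a short gradient flow; a pipelined schedule could then update different blocks concurrently.

        import torch
        import torch.nn as nn
        import torch.nn.functional as F

        def supcon_loss(features, labels, temperature=0.1):
            # Supervised contrastive loss over one batch (Khosla et al. style):
            # each anchor attracts same-label samples and repels the rest.
            z = F.normalize(features, dim=1)
            sim = z @ z.T / temperature
            n = z.size(0)
            self_mask = torch.eye(n, dtype=torch.bool, device=z.device)
            pos_mask = (labels.unsqueeze(0) == labels.unsqueeze(1)) & ~self_mask
            sim = sim.masked_fill(self_mask, float("-inf"))
            log_prob = sim - torch.logsumexp(sim, dim=1, keepdim=True)
            per_anchor = -log_prob.masked_fill(~pos_mask, 0.0).sum(1) / pos_mask.sum(1).clamp(min=1)
            return per_anchor[pos_mask.any(dim=1)].mean()  # ignore anchors with no positive pair

        class LocalBlock(nn.Module):
            # One stage of the network with its own projection head and optimizer.
            def __init__(self, in_dim, out_dim, proj_dim=128, lr=1e-3):
                super().__init__()
                self.body = nn.Sequential(nn.Linear(in_dim, out_dim), nn.ReLU())
                self.head = nn.Linear(out_dim, proj_dim)   # local head for the contrastive objective
                self.opt = torch.optim.Adam(self.parameters(), lr=lr)

            def local_step(self, x, labels):
                h = self.body(x)
                loss = supcon_loss(self.head(h), labels)
                self.opt.zero_grad()
                loss.backward()        # gradient stays inside this block: a short gradient flow
                self.opt.step()
                return h.detach()      # detach so no gradient reaches earlier blocks

        # Sequential sketch of one decoupled update; a pipelined schedule would let
        # different blocks work on different mini-batches (or devices) at the same time.
        blocks = [LocalBlock(784, 512), LocalBlock(512, 512), LocalBlock(512, 256)]
        x, y = torch.randn(64, 784), torch.randint(0, 10, (64,))
        for blk in blocks:
            x = blk.local_step(x, y)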

    However, the original SCPL paper neither implemented parallel parameter training in practice nor explored the method in the Natural Language Processing (NLP) domain. This thesis fills both gaps and evaluates the accuracy and parallel training time of SCPL on datasets from the vision and NLP domains, showing that SCPL can serve as an alternative to BP in both. In addition, we propose an improved architecture that supports dynamic layer accumulation, forward shortcuts, and early exit, which we call Dynamic Accumulated Supervised Contrastive Parallel Learning (DASCPL). These capabilities give DASCPL greater flexibility and adaptability than SCPL while maintaining comparable learning ability.
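
    Again as a hypothetical illustration only (the ExitBlock module, the confidence threshold, and the accumulation strategy are assumptions, not the thesis's code, and the forward-shortcut mechanism is not sketched here): local classification heads attached to each block naturally support early exit at inference, and new blocks trained with their own local objective can be appended later without retraining the earlier ones, which is the dynamic-layer-accumulation idea.

        import torch
        import torch.nn as nn
        import torch.nn.functional as F

        class ExitBlock(nn.Module):
            # One block plus a local classifier that can act as an early-exit head.
            def __init__(self, in_dim, out_dim, num_classes=10):
                super().__init__()
                self.body = nn.Sequential(nn.Linear(in_dim, out_dim), nn.ReLU())
                self.classifier = nn.Linear(out_dim, num_classes)

            def forward(self, x):
                h = self.body(x)
                return h, self.classifier(h)

        @torch.no_grad()
        def predict_with_early_exit(blocks, x, threshold=0.9):
            # Run blocks in order; stop as soon as the local head is confident enough.
            logits = None
            for depth, blk in enumerate(blocks):
                x, logits = blk(x)
                confidence = F.softmax(logits, dim=-1).max(dim=-1).values
                if bool((confidence >= threshold).all()):   # whole batch is confident: exit early
                    return logits, depth
            return logits, len(blocks) - 1                  # fall through to the deepest head

        # Dynamic layer accumulation: a new block (with its own local objective) can be
        # appended later without retraining the earlier, already-trained blocks.
        blocks = nn.ModuleList([ExitBlock(784, 512), ExitBlock(512, 256)])
        blocks.append(ExitBlock(256, 256))

        logits, exit_depth = predict_with_early_exit(blocks, torch.randn(8, 784))
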
    Appears in Collections: [Graduate Institute of Computer Science and Information Engineering] Master's and Doctoral Theses

    Files in This Item:

    File          Description    Size    Format
    index.html                   0Kb     HTML


    All items in NCUIR are protected by copyright, with all rights reserved.
