NCU Institutional Repository: Item 987654321/93270
RC Version 7.0 © Powered By DSPACE, MIT. Enhanced by NTU Library IR team.


    Please use this permanent URL to cite or link to this document: http://ir.lib.ncu.edu.tw/handle/987654321/93270


    Title: Adaptive Inference and Dynamic Network Accumulation Based on Associated Learning (基於關聯式學習的動態自適應推論及動態網路擴增)
    Author: Peng, Yen-Lin (彭彥霖)
    Contributor: Department of Computer Science and Information Engineering (資訊工程學系)
    Keywords: Associated Learning; Dynamic Neural Networks; Early Exit; Adaptive Inference; Dynamic Layer Accumulation
    Date: 2023-07-25
    Upload time: 2024-09-19 16:51:20 (UTC+8)
    Publisher: National Central University (國立中央大學)
    Abstract: Associated Learning (AL) modularizes traditional multi-layer neural networks into smaller blocks, each with its own local objective. Because these objectives are mutually independent, AL can train the parameters of different layers simultaneously, improving training efficiency. Despite achieving performance comparable to traditional neural networks on a variety of tasks, AL still possesses several advantages that have not been verified experimentally.
    The AL framework allows dynamic layer stacking: new AL layers can be appended without modifying the already-trained parameters, and training then focuses only on the parameters of the newly added layers to achieve better prediction accuracy. In contrast, dynamically growing the parameter count of a traditional neural network is difficult.
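The dynamic-stacking idea above can be sketched in a few lines of plain Python. This is a hypothetical toy model, not the thesis's implementation: the `Block` and `ALStack` names and the scalar "parameters" are illustrative stand-ins, and the point is only the control flow, in which appending a block freezes everything trained so far, so that subsequent training steps touch only the new block.

```python
# Toy sketch of AL-style dynamic layer accumulation: earlier blocks are
# frozen and only the newly appended block's parameters are updated.
# Block/ALStack are illustrative names, not from the thesis.

class Block:
    def __init__(self, weight=0.0):
        self.weight = weight      # toy "parameters" of this block
        self.frozen = False       # frozen blocks are never updated again

    def forward(self, x):
        return x + self.weight    # toy transformation

class ALStack:
    def __init__(self):
        self.blocks = []

    def append_block(self):
        # Freeze everything trained so far, then add a fresh trainable block.
        for b in self.blocks:
            b.frozen = True
        self.blocks.append(Block())

    def train_step(self, delta):
        # Only non-frozen (newly appended) blocks receive updates.
        for b in self.blocks:
            if not b.frozen:
                b.weight += delta

    def forward(self, x):
        for b in self.blocks:
            x = b.forward(x)
        return x

stack = ALStack()
stack.append_block()
stack.train_step(1.0)   # first block learns weight 1.0
stack.append_block()    # first block is now frozen
stack.train_step(2.0)   # only the second block is updated
print([b.weight for b in stack.blocks])  # [1.0, 2.0]
```

In a real network the "freeze" step would pin each trained block's weights (e.g., disabling gradients for them) while the optimizer sees only the new block's parameters.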
    Furthermore, the AL architecture incorporates redundant shortcuts at each layer block, providing multiple paths for data flow during the inference stage.
    This thesis explores the characteristics of AL, including Dynamic Layer Accumulation, Early Exit, and Adaptive Inference; implements improved versions of AL; and compares various AL inference methods.
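The shortcuts described above are what make early exit possible: each block can produce a local prediction, and inference can stop at the first sufficiently confident one. The following is a minimal sketch under assumed conventions (the function name, the `(label, confidence)` tuples, and the 0.9 threshold are all illustrative, not from the thesis):

```python
# Toy sketch of early-exit (adaptive) inference: each AL block has a local
# prediction head, and inference stops at the first block whose confidence
# clears the threshold, skipping the deeper blocks entirely.

def adaptive_inference(block_outputs, threshold=0.9):
    """block_outputs: list of (label, confidence) pairs from each block's
    local head, ordered shallow to deep.
    Returns (label, number_of_blocks_evaluated)."""
    for depth, (label, conf) in enumerate(block_outputs, start=1):
        if conf >= threshold:
            return label, depth          # early exit: later blocks are skipped
    # No block was confident enough: fall through to the deepest prediction.
    return block_outputs[-1][0], len(block_outputs)

# An "easy" input is confidently classified by the first block...
print(adaptive_inference([("cat", 0.95), ("cat", 0.99)]))  # ('cat', 1)
# ...while a "hard" one traverses all blocks before answering.
print(adaptive_inference([("cat", 0.4), ("dog", 0.7)]))    # ('dog', 2)
```

This is the sense in which the redundant shortcuts trade a small amount of extra capacity at training time for a tunable accuracy/latency trade-off at inference time.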
    We further propose a framework for dynamically adding training features, allowing appended AL layers to receive features that were not available to the original AL layers. Because the design can incorporate new features without retraining the entire network, it improves training effectiveness in dynamic environments where new features appear over time.
    Our experiments employ classic RNN and CNN models as the backbone networks for the AL architecture and conduct evaluations on publicly available text classification and image classification datasets.
    Appears in Collections: [Graduate Institute of Computer Science and Information Engineering] Electronic Theses and Dissertations


