

    Permanent URL for citing or linking to this item: http://ir.lib.ncu.edu.tw/handle/987654321/95414


    Title: 利用權重標準分流二進位神經網路做邊緣計算之影像辨識; Weight Standardization Fractional Binary Neural Network (WSFracBNN) for Image Recognition in Edge Computing
    Author: 梁字清; Liang, Zi-Qing
    Contributor: Department of Computer Science and Information Engineering
    Keywords: Artificial Intelligence; Model Recognition; Edge Computing; Deep Learning; Binary Neural Networks; Image Recognition; Model Compression; Network Quantization
    Date: 2024-07-02
    Uploaded: 2024-10-09 16:47:08 (UTC+8)
    Publisher: National Central University
    Abstract: To achieve better accuracy, modern models have grown increasingly large, and their computational load has increased exponentially, making them difficult to deploy for edge computing. Binary Neural Networks (BNNs) quantize filter weights and activations to 1 bit, which makes them well suited to small chips such as ARM and FPGA devices and other edge-computing hardware. To design a model friendlier to edge devices, reducing the number of floating-point operations (FLOPs) is crucial. Batch normalization (BN) is an essential tool for BNNs; however, once the convolution layers are quantized to 1 bit, the floating-point cost of the BN layers becomes significant. This thesis reduces floating-point operations by removing the BN layers from the model and introducing the Scaled Weight Standardization Convolution (WS-Conv) method to avoid the large accuracy drop that removing BN would otherwise cause, then further improves the model through a series of optimizations. Specifically, our model keeps computational cost and accuracy competitive even without BN layers; with the added training methods, its accuracy on CIFAR-100 is 0.6% above the baseline while its total computational load is only 46% of the baseline. With BOPs unchanged, FLOPs are reduced to nearly zero, making the model more suitable for embedded platforms such as FPGAs.
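The abstract names Scaled Weight Standardization Convolution (WS-Conv) but does not give its formula. As a rough illustration only (a sketch based on the weight-standardization idea from the normalizer-free-network literature, not the author's implementation), each filter's weights can be re-parameterized to zero mean and fan-in-scaled unit variance before convolution, so no BN layer is needed afterward; the `gain` parameter below stands in for the learnable per-channel scale.

```python
import math

def scaled_weight_standardize(weights, gain=1.0, eps=1e-5):
    """Standardize each output channel's filter weights.

    `weights` is a list of rows, one flattened filter per output
    channel. Each row is shifted to zero mean and scaled so that its
    sum of squares equals gain**2 (i.e. variance scaled by fan-in),
    which keeps activation variance stable at initialization without
    a BN layer.
    """
    out = []
    for row in weights:
        n = len(row)                              # fan-in of this filter
        mu = sum(row) / n                         # per-channel mean
        var = sum((w - mu) ** 2 for w in row) / n # per-channel variance
        scale = gain / math.sqrt(max(var * n, eps))
        out.append([(w - mu) * scale for w in row])
    return out
```

In the real model this transform would be applied to the convolution weights on every forward pass (before binarization), so its cost is independent of the feature-map size, unlike BN.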
    Appears in Collections: [Graduate Institute of Computer Science and Information Engineering] Master's and Doctoral Theses


