    Please use this permanent URL to cite or link to this item: http://ir.lib.ncu.edu.tw/handle/987654321/93446


    Title: A CNN-based Interpretable Deep Learning Model
    Authors: Yang, Ching-Feng
    Contributors: Department of Computer Science and Information Engineering
    Keywords: Explainable Artificial Intelligence; Deep Learning; Visual Cortex; Self-Organizing Maps; Image Classification
    Date: 2023-08-09
    Upload Date: 2024-09-19 17:02:02 (UTC+8)
    Publisher: National Central University
    Abstract: In recent years, the rapid development of artificial intelligence (AI) has transformed our lives and many other domains, and its impact is difficult to quantify. AI has even surpassed human performance in games such as Go, chess, and Texas Hold'em poker. However, its decision-making process is often a black box, which raises the question: how does it actually make its decisions?
    This research proposes a deep learning model based on convolutional neural networks (CNNs) that draws on multi-layer self-organizing maps (SOMs) and the hierarchical, temporally ordered operation of the visual cortex in the human brain to make the decision-making process of deep learning models interpretable. The model uses a multi-layer architecture for image classification: when an image is input, it passes through Gaussian convolution and feature-enhancement mechanisms, and the resulting features are combined in temporal sequence and propagated to the next layer, mimicking how the visual cortex processes visual signals. Lower-level neurons integrate fine-grained information and transmit it hierarchically through the network structure; finally, a fully connected layer converts the output into the image's classification result. A rough sketch of this pipeline is given below.
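    As an illustration only, the following PyTorch sketch mirrors the stages the abstract names: a fixed Gaussian convolution, a feature-enhancement step, hierarchical combination across layers, and a single fully connected classifier. All names (gaussian_kernel, InterpretableLayer, CNNInterpretable) and the specific enhancement and pooling rules are assumptions; the thesis does not publish code, and its SOM component and exact temporal-combination mechanism are not reproduced here.

import torch
import torch.nn as nn
import torch.nn.functional as F

def gaussian_kernel(size=5, sigma=1.0):
    # Fixed 2-D Gaussian kernel; not learned, mirroring the abstract's
    # "Gaussian convolution".
    ax = torch.arange(size, dtype=torch.float32) - (size - 1) / 2
    g = torch.exp(-(ax ** 2) / (2 * sigma ** 2))
    k = torch.outer(g, g)
    return (k / k.sum()).view(1, 1, size, size)

class InterpretableLayer(nn.Module):
    # One stage: Gaussian smoothing -> feature enhancement -> combination.
    def __init__(self, size=5, sigma=1.0):
        super().__init__()
        self.register_buffer("kernel", gaussian_kernel(size, sigma))

    def forward(self, x):
        c = x.shape[1]
        # Depthwise Gaussian convolution, one copy of the kernel per channel.
        smoothed = F.conv2d(x, self.kernel.repeat(c, 1, 1, 1),
                            padding=self.kernel.shape[-1] // 2, groups=c)
        # "Feature enhancement" modeled as an input-minus-blur contrast
        # boost -- an assumption; the thesis's exact mechanism may differ.
        enhanced = F.relu(x - smoothed)
        # Pooling stands in for combining fine-grained features before the
        # next stage; the thesis combines them in temporal sequence instead.
        return F.max_pool2d(enhanced, 2)

class CNNInterpretable(nn.Module):
    # A stack of stages plus the single fully connected classification layer.
    def __init__(self, n_layers=3, n_classes=10, in_hw=28, in_ch=1):
        super().__init__()
        self.stages = nn.ModuleList(InterpretableLayer() for _ in range(n_layers))
        feat_hw = in_hw // (2 ** n_layers)   # e.g. 28 -> 14 -> 7 -> 3
        self.classifier = nn.Linear(in_ch * feat_hw * feat_hw, n_classes)

    def forward(self, x):
        acts = []                  # keep per-stage activations for inspection
        for stage in self.stages:
            x = stage(x)
            acts.append(x)
        return self.classifier(x.flatten(1)), acts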
    In our experiments, two datasets, MNIST and Fashion-MNIST, were used, and both yielded favorable performance. The features were explained at each stage, and feature visualization showed that the features of each layer carry a distinct meaning. This is of great importance for explainable AI and provides new ideas and methods for the development of machine learning and related fields. A hypothetical visualization snippet follows.
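    In the spirit of the abstract's per-stage feature visualization, the snippet below loads MNIST via torchvision and plots the first channel of each stage's activation map; it assumes the sketch above and is purely illustrative.

import matplotlib.pyplot as plt
from torchvision import datasets, transforms

mnist = datasets.MNIST("data", train=False, download=True,
                       transform=transforms.ToTensor())
img, label = mnist[0]
model = CNNInterpretable()
logits, acts = model(img.unsqueeze(0))        # add a batch dimension

fig, axes = plt.subplots(1, len(acts))
for ax, a in zip(axes, acts):
    ax.imshow(a[0, 0].detach(), cmap="gray")  # first channel of each stage
    ax.set_axis_off()
plt.show()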
    Appears in Collections: [Graduate Institute of Computer Science and Information Engineering] Theses & Dissertations

    Files in This Item:

    File          Size    Format    Views
    index.html    0Kb     HTML      14


    All items in NCUIR are protected by copyright, with all rights reserved.
