

    Please use this identifier to cite or link to this item: http://ir.lib.ncu.edu.tw/handle/987654321/95777


    Title: A New CNN-Based Interpretable Deep Learning Model
    Authors: 凃建名;Tu, Chien-Ming
    Contributors: Department of Computer Science and Information Engineering
    Keywords: Explainable Artificial Intelligence; Deep Learning; Color Perception; Color Images
    Date: 2024-08-12
    Issue Date: 2024-10-09 17:16:07 (UTC+8)
    Publisher: National Central University
    Abstract: Interpretable models are becoming increasingly important as deep learning technologies see wide use across many sectors.
    Despite their high accuracy, "black-box" models often obscure the decision-making process.
    Conversely, interpretable models not only increase users' trust in the model but also offer useful insight when anomalies occur.

    This research proposes a new CNN-based interpretable deep learning model.
    The model comprises three main components: a color perception block, a contour perception block, and a feature transmission block.
    The color perception block extracts color features by computing the similarity between the average color of each part of the input image and 30 basic colors.
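    A minimal sketch of the color perception step described above. The 30-entry reference palette, the 4x4 patch grid, and the distance-based similarity measure are all assumptions for illustration; the thesis does not specify them here.

    ```python
    import numpy as np

    # Hypothetical stand-in for the thesis's 30 basic colors (values are assumptions):
    # a 3x3x3 RGB lattice (27 colors) plus three extra reference colors.
    BASIC_COLORS = np.array(
        [[r, g, b] for r in (0, 128, 255) for g in (0, 128, 255) for b in (0, 128, 255)]
        + [[64, 64, 64], [192, 192, 192], [255, 128, 0]],
        dtype=float,
    )  # shape: (30, 3)

    def color_features(image: np.ndarray, grid: int = 4) -> np.ndarray:
        """Split an H x W x 3 image into grid x grid patches, average each
        patch's color, and score its similarity to every basic color."""
        h, w, _ = image.shape
        ph, pw = h // grid, w // grid
        feats = []
        for i in range(grid):
            for j in range(grid):
                patch = image[i * ph:(i + 1) * ph, j * pw:(j + 1) * pw]
                avg = patch.reshape(-1, 3).mean(axis=0)
                dist = np.linalg.norm(BASIC_COLORS - avg, axis=1)
                # Map Euclidean RGB distance to a similarity in [0, 1];
                # the largest possible distance is sqrt(3) * 255.
                feats.append(1.0 - dist / (np.sqrt(3) * 255.0))
        return np.stack(feats)  # shape: (grid * grid, 30)
    ```

    An all-black image, for instance, yields similarity 1.0 against the black reference color in every patch.
    
    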
    The contour perception block detects contour features in the image by converting the color image to grayscale through preprocessing and then applying Gaussian convolution and feature enhancement.
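    The contour step can be sketched as follows. The grayscale weights (ITU-R BT.601), the 5x5 kernel size, and the unsharp-mask-style "enhancement" are assumptions standing in for the thesis's actual feature-enhancement design.

    ```python
    import numpy as np

    def gaussian_kernel(size: int = 5, sigma: float = 1.0) -> np.ndarray:
        """Normalized 2-D Gaussian kernel."""
        ax = np.arange(size) - size // 2
        xx, yy = np.meshgrid(ax, ax)
        k = np.exp(-(xx ** 2 + yy ** 2) / (2 * sigma ** 2))
        return k / k.sum()

    def contour_features(rgb: np.ndarray) -> np.ndarray:
        """Preprocess RGB -> grayscale, smooth with a Gaussian convolution,
        and respond where the image differs from its smoothed version."""
        gray = rgb @ np.array([0.299, 0.587, 0.114])  # BT.601 luma weights
        k = gaussian_kernel()
        size = k.shape[0]
        h, w = gray.shape
        # Plain-loop "valid" convolution, written for clarity rather than speed.
        smooth = np.empty((h - size + 1, w - size + 1))
        for i in range(smooth.shape[0]):
            for j in range(smooth.shape[1]):
                smooth[i, j] = (gray[i:i + size, j:j + size] * k).sum()
        # Enhancement sketched as |original - smoothed|: large near edges,
        # near zero in flat regions.
        center = gray[size // 2: h - size // 2, size // 2: w - size // 2]
        return np.abs(center - smooth)
    ```

    A uniform image produces a near-zero response everywhere, while a sharp vertical edge produces a strong response along the boundary.
    
    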
    The feature transmission block applies convolution and response-filtering modules to the input features, then combines them through a space-merging module into more complete features, which are passed to the next layer until they reach the fully connected layer.
    Finally, the output color features and contour features are combined and passed into the fully connected layer for classification.
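    The final classification step amounts to concatenating the two feature streams and applying a fully connected layer. A hedged sketch, with random weights standing in for trained parameters and hypothetical feature shapes:

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    def fc_classify(color_feats: np.ndarray,
                    contour_feats: np.ndarray,
                    n_classes: int = 10) -> int:
        """Flatten and concatenate both feature streams, then apply one
        fully connected layer. Weights are random placeholders here, not
        the thesis's trained parameters."""
        x = np.concatenate([color_feats.ravel(), contour_feats.ravel()])
        W = rng.standard_normal((n_classes, x.size)) * 0.01
        b = np.zeros(n_classes)
        logits = W @ x + b
        return int(np.argmax(logits))  # predicted class index
    ```
    
    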

    This study uses three datasets: MNIST, Colored MNIST, and Colored Fashion MNIST.
    The test accuracies on MNIST, Colored MNIST, and Colored Fashion MNIST are 0.9566, 0.954, and 0.8223, respectively.
    The experimental results show that the proposed model performs well in terms of both interpretability and accuracy.
    In particular, on the Colored MNIST and Colored Fashion MNIST datasets, the model not only accurately distinguishes images of different colors and shapes but also visualizes its internal decision-making logic, validating its interpretability and practicality.
    Appears in Collections:[Graduate Institute of Computer Science and Information Engineering] Electronic Thesis & Dissertation

    Files in This Item:

    index.html (0 Kb, HTML)

    All items in NCUIR are protected by copyright, with all rights reserved.
