NCU Institutional Repository (中大機構典藏): Item 987654321/93239


    Please use this permanent URL to cite or link to this item: http://ir.lib.ncu.edu.tw/handle/987654321/93239


    Title: A deep learning system for searching targets in electronic component images with template matching style (以模板比對方式搜尋電子元件影像物件的深度學習系統)
    Author: Guo, Pei-Sheng (郭佩昇)
    Contributors: Department of Computer Science and Information Engineering
    Keywords: template matching; deep learning; siamese network; computer vision
    Date: 2023-07-25
    Date uploaded: 2024-09-19 16:50:03 (UTC+8)
    Publisher: National Central University
    Abstract: Template matching is an important technique in traditional computer vision. However, it is sensitive to many sources of variation in the searched objects: for example, differences in size, shape, and color between the template and the search object can severely degrade the search results. In recent years, deep learning has become increasingly prevalent in computer vision and has achieved strong results in fields such as recognition, detection, and segmentation. This study therefore combines the high-level feature extraction capability of deep learning with the template matching approach to develop a “template matching deep learning” technique that is robust against the factors above.
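    To make the classical baseline concrete, the following is a minimal sketch of traditional template matching using OpenCV's normalized cross-correlation. It is for illustration only and is not code from the thesis; the file names and the 0.8 score threshold are hypothetical.

        import cv2
        import numpy as np

        # Hypothetical inputs: a grayscale search image and a smaller template crop.
        search = cv2.imread("board.png", cv2.IMREAD_GRAYSCALE)
        template = cv2.imread("component.png", cv2.IMREAD_GRAYSCALE)
        th, tw = template.shape

        # Normalized cross-correlation slides the template over the search image
        # and scores every location; scale, shape, or color changes between the
        # template and the target lower these scores, which is the weakness the
        # thesis addresses with learned features.
        scores = cv2.matchTemplate(search, template, cv2.TM_CCOEFF_NORMED)

        # Keep locations above an illustrative threshold as candidate matches.
        ys, xs = np.where(scores >= 0.8)
        boxes = [(x, y, x + tw, y + th) for x, y in zip(xs, ys)]
        print(len(boxes), "candidate matches")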
    Template matching and object detection differ in the following way. In object detection, the model searches an image for features similar to those it learned during training, so users cannot hand it an arbitrary object to search for; searching for a new object requires retraining the model. Template matching, in contrast, lets the user provide any template image, and the system then judges whether objects in the search image are similar to the object in the template, so users can search for the specific object they are looking for.
    The objective of this study is to use a deep learning network architecture to locate regions with specific shape features. We modified the single-object tracking network SiamCAR to perform multi-object template matching. Our modifications include: i. making the network adjust dynamically to the size of the input images, so that its input format is more flexible; ii. simplifying and optimizing the feature extraction subnetwork, which improves both accuracy and speed; iii. performing the matching prediction on smaller feature maps instead of upsampling them, which further improves speed; iv. incorporating data augmentation that deliberately creates differences between the template and search images, so that the network learns a greater variety of variations.
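    The abstract does not give implementation details, so the following is only an illustrative PyTorch-style sketch of the siamese matching idea the work builds on: a shared fully convolutional backbone embeds both the template and the search image, their features are combined with depthwise cross-correlation as in the SiamCAR family, and a small head predicts a per-location match score map. The module names, channel counts, and the toy backbone are assumptions, not the thesis's actual architecture.

        import torch
        import torch.nn as nn
        import torch.nn.functional as F

        def depthwise_xcorr(search_feat, template_feat):
            # Correlate the template features with the search features channel by
            # channel, the correlation used by the SiamCAR family of networks.
            b, c, _, _ = template_feat.shape
            x = search_feat.reshape(1, b * c, *search_feat.shape[2:])
            kernel = template_feat.reshape(b * c, 1, *template_feat.shape[2:])
            out = F.conv2d(x, kernel, groups=b * c)
            return out.reshape(b, c, *out.shape[2:])

        class SiameseMatcher(nn.Module):
            # Toy stand-in for the simplified feature-extraction subnetwork plus a
            # 1x1 prediction head. Being fully convolutional, it accepts template
            # and search images of arbitrary size (the "dynamic input size" idea).
            def __init__(self, channels=64):
                super().__init__()
                self.backbone = nn.Sequential(
                    nn.Conv2d(3, channels, 3, stride=2, padding=1), nn.ReLU(),
                    nn.Conv2d(channels, channels, 3, stride=2, padding=1), nn.ReLU(),
                )
                self.head = nn.Conv2d(channels, 1, kernel_size=1)

            def forward(self, template_img, search_img):
                zf = self.backbone(template_img)    # template features
                xf = self.backbone(search_img)      # search-image features
                response = depthwise_xcorr(xf, zf)  # similarity at every location
                return self.head(response)          # (B, 1, H', W') score map

        # Usage with arbitrary, illustrative input sizes.
        model = SiameseMatcher()
        scores = model(torch.randn(1, 3, 64, 64), torch.randn(1, 3, 256, 256))
        print(scores.shape)   # torch.Size([1, 1, 49, 49])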
    In the experiments, we trained and tested the model on objects from printed circuit boards (PCBs). The dataset consists of 11,525 image pairs in total, with 9,267 pairs in the training set and 2,258 pairs in the testing set. On the testing set, the improved matching network achieves a recall of 96.74%, a precision of 94.06%, and an F-score of 95.38%. Furthermore, inference speed improves to 53 ms/img, roughly 9 times faster than the original 512 ms/img.
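    As a quick consistency check, the reported F-score follows from the reported precision and recall through the standard F1 formula; the snippet below only reproduces that arithmetic.

        # F1 = 2 * P * R / (P + R), using the precision and recall reported above.
        precision, recall = 0.9406, 0.9674
        f_score = 2 * precision * recall / (precision + recall)
        print(round(f_score, 4))   # 0.9538, matching the reported 95.38%

        # Speed-up implied by the reported timings.
        print(round(512 / 53, 2))  # 9.66, consistent with the "roughly 9 times" claim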
    Appears in Collections: [Graduate Institute of Computer Science and Information Engineering] Master's and Doctoral Theses

    Files in This Item:

    File          Description    Size    Format    Views
    index.html                   0 KB    HTML      16


    All items in NCUIR are protected by copyright, with all rights reserved.
