NCU Institutional Repository (中大機構典藏) — Item 987654321/95823


Please use this permanent URL to cite or link to this item: http://ir.lib.ncu.edu.tw/handle/987654321/95823


    Title: SiamCATR: An Efficient and Accurate Visual Tracking via Cross-Attention Transformer and Channel-Attention Feature Fusion Network Based on Siamese Network
    Author: Lee, Chun-Lin (李俊霖)
    Contributor: Department of Electrical Engineering
    Keywords: Single Visual Object Tracking; Neural Network Model; CNN-Transformer Architecture
    Date: 2024-08-13
    Upload time: 2024-10-09 17:18:43 (UTC+8)
    Publisher: National Central University
    Abstract: Visual object tracking is a long-standing problem in computer vision, with wide applications in fields such as autonomous driving, surveillance systems, and drones. Its core goal is to accurately follow a specified target through a continuous image sequence and to maintain stable tracking even under partial occlusion, illumination changes, fast motion, or cluttered backgrounds.
    With the rapid development of deep learning, visual object tracking networks have evolved from traditional feature-matching methods to deep neural networks that extract rich features for tracking. Recently, driven by the success of Vision Transformer models across many tasks, the performance of tracking networks has improved significantly. However, along with the gains in accuracy, the parameter counts and computational loads of these models have also grown substantially. Because practical tracking systems are often deployed on edge devices with limited hardware resources, real-time tracking becomes a major challenge, and achieving an efficient, lightweight design without sacrificing accuracy has become a demanding research direction. In this thesis, we propose a hybrid model that combines a Convolutional Neural Network (CNN) with a Transformer architecture, named SiamCATR. We introduce a Transformer-based cross-attention mechanism to strengthen the model's representation of similar features across feature maps. To fuse features effectively, we also introduce a channel-attention depthwise cross-correlation, so that template and search features are combined within every feature channel; together these modules form an efficient feature-fusion network. We conducted experiments on multiple visual object tracking datasets. The results show that, compared with current efficient, lightweight architectures, our design achieves the best accuracy while meeting real-time requirements, demonstrating that our model is highly competitive in visual object tracking.
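    The abstract names three generic building blocks — depthwise cross-correlation between template and search features, channel attention for fusion, and cross-attention between branches. The NumPy sketch below illustrates textbook versions of these operations as commonly used in Siamese trackers; the function names, tensor shapes, and weight layouts are illustrative assumptions, not the thesis's actual SiamCATR implementation.

    ```python
    import numpy as np

    def depthwise_xcorr(z, x):
        """Depthwise cross-correlation: each template channel z[c] slides
        over the matching search channel x[c] (valid mode), the core
        matching step in Siamese trackers."""
        C, Hz, Wz = z.shape
        _, Hx, Wx = x.shape
        Ho, Wo = Hx - Hz + 1, Wx - Wz + 1
        out = np.zeros((C, Ho, Wo))
        for c in range(C):
            for i in range(Ho):
                for j in range(Wo):
                    out[c, i, j] = np.sum(z[c] * x[c, i:i + Hz, j:j + Wz])
        return out

    def channel_attention(feat, w1, w2):
        """Squeeze-and-excitation-style channel attention: global average
        pool, two-layer bottleneck (w1, w2 are illustrative weights),
        then a per-channel sigmoid gate."""
        s = feat.mean(axis=(1, 2))                # squeeze: one value per channel
        h = np.maximum(w1 @ s, 0.0)               # excitation with ReLU
        g = 1.0 / (1.0 + np.exp(-(w2 @ h)))       # sigmoid gates, one per channel
        return feat * g[:, None, None]            # reweight channels

    def cross_attention(q_feat, kv_feat, wq, wk, wv):
        """Single-head cross-attention: queries from one branch attend to
        keys/values from the other (template <-> search interaction)."""
        Q, K, V = q_feat @ wq, kv_feat @ wk, kv_feat @ wv
        scores = (Q @ K.T) / np.sqrt(Q.shape[-1])
        scores -= scores.max(axis=-1, keepdims=True)   # numerical stability
        attn = np.exp(scores)
        attn /= attn.sum(axis=-1, keepdims=True)       # row-wise softmax
        return attn @ V
    ```

    In a tracker of this family, `cross_attention` would mix template and search tokens, `depthwise_xcorr` would produce a per-channel response map, and `channel_attention` would reweight those response channels before prediction; how SiamCATR actually wires them together is described in the thesis itself.
    
    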
    Appears in Collections: [Graduate Institute of Electrical Engineering] Master's and Doctoral Theses

    Files in this item:

    File: index.html — Size: 0Kb — Format: HTML — Views: 64


    All items in NCUIR are protected by copyright, with all rights reserved.

