    Please use this permanent URL to cite or link to this item: http://ir.lib.ncu.edu.tw/handle/987654321/86814


    Title: Person Re-Identification Using OSNet Combined with Human Body Orientation Estimation
    Author: 張予鴻; Hung, Chang Yu
    Contributor: Department of Computer Science and Information Engineering
    Keywords: Person Re-Identification; OSNet; Human Body Orientation Estimation; Pose Estimation
    Date: 2021-09-23
    Upload Time: 2021-12-07 13:15:31 (UTC+8)
    Publisher: National Central University
    Abstract: This study proposes a person re-identification method. To address problems that arise in pedestrian detection, after the pedestrian images are cropped, OpenPose keypoint extraction is used to retain only complete pedestrian images; the distance between the left and right ankle keypoint coordinates is then computed, and the image with the maximum distance is taken as the representative image, reducing the occlusion caused by the two legs overlapping. To address the influence of viewpoint variation, this study combines person re-identification with human body orientation estimation, using a ResNet18 model to pre-classify pedestrian images by body orientation and thereby improve the accuracy of the subsequent identification results. Pedestrian features are extracted with the OSNet model, whose distinguishing characteristic is that it captures omni-scale pedestrian features and which has performed well in person re-identification in recent years. Finally, experiments are conducted on the self-built MIAT multi-view pedestrian dataset: the method combined with viewpoint-estimation classification achieves a Rank-1 accuracy of 81% and an mAP of 85%, which is 22% higher in Rank-1 and 17% higher in mAP than the method without viewpoint-estimation classification.
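
    The representative-image selection step described in the abstract (among a pedestrian's cropped frames, keep the one whose left and right ankle keypoints are farthest apart, as a proxy for the least leg self-occlusion) can be sketched as below. This is a minimal illustration, not the thesis code: the keypoints are assumed to have already been extracted by OpenPose as (x, y, confidence) triples, the ankle indices follow the BODY_25 layout, and the helper names and confidence threshold are assumptions.

    ```python
    # Minimal sketch (assumed structure): pick the frame where the left and
    # right ankle keypoints are farthest apart.
    import numpy as np

    R_ANKLE, L_ANKLE = 11, 14   # BODY_25 ankle indices (assumed layout)
    MIN_CONF = 0.3              # assumed detection-confidence threshold


    def ankle_distance(keypoints: np.ndarray) -> float:
        """Euclidean distance between the two ankle keypoints of one frame.

        keypoints: array of shape (num_joints, 3) holding (x, y, confidence).
        Returns -1.0 when either ankle was not detected reliably.
        """
        r, l = keypoints[R_ANKLE], keypoints[L_ANKLE]
        if r[2] < MIN_CONF or l[2] < MIN_CONF:
            return -1.0
        return float(np.linalg.norm(r[:2] - l[:2]))


    def select_representative(frames: list, keypoints_per_frame: list) -> np.ndarray:
        """Return the frame whose ankle keypoints are farthest apart."""
        distances = [ankle_distance(kp) for kp in keypoints_per_frame]
        return frames[int(np.argmax(distances))]
    ```

    In the pipeline described above, the selected representative frame would then be passed to the orientation classifier and the OSNet feature extractor; only the selection criterion itself is shown here.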
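    The orientation-aware matching step (pre-classify each pedestrian image into a body-orientation class with ResNet18, then compare appearance features only within the same class) could be organised roughly as follows. This is a sketch under assumptions: the number of orientation classes, the cosine-similarity matching, and the use of torchvision's ResNet18 are illustrative, and the OSNet extractor is left as an abstract callable rather than a specific library API.

    ```python
    # Sketch of orientation-aware matching (assumed structure, not the thesis code):
    # a ResNet18 head predicts a body-orientation class, and the query feature is
    # compared only against gallery features that share that orientation.
    from typing import Callable, List, Tuple

    import torch
    import torch.nn.functional as F
    from torchvision import models

    NUM_ORIENTATIONS = 8  # assumed number of viewpoint classes


    def build_orientation_classifier() -> torch.nn.Module:
        """ResNet18 with its final layer resized to the orientation classes."""
        net = models.resnet18(weights=None)
        net.fc = torch.nn.Linear(net.fc.in_features, NUM_ORIENTATIONS)
        return net


    def match(query_img: torch.Tensor,
              gallery: List[Tuple[torch.Tensor, int]],   # (feature, orientation)
              classifier: torch.nn.Module,
              extract_features: Callable[[torch.Tensor], torch.Tensor]) -> int:
        """Return the gallery index best matching the query within its orientation."""
        with torch.no_grad():
            orientation = int(classifier(query_img.unsqueeze(0)).argmax(dim=1))
            q_feat = extract_features(query_img.unsqueeze(0)).squeeze(0)
        best_idx, best_sim = -1, -1.0
        for idx, (g_feat, g_orient) in enumerate(gallery):
            if g_orient != orientation:
                continue  # only compare within the same viewpoint class
            sim = float(F.cosine_similarity(q_feat, g_feat, dim=0))
            if sim > best_sim:
                best_idx, best_sim = idx, sim
        return best_idx
    ```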
    Appears in Collections: [Graduate Institute of Computer Science and Information Engineering] Master's and Doctoral Theses

    Files in this item:

    File        Description    Size    Format    Views
    index.html                 0Kb     HTML      144      View/Open


    All items in NCUIR are protected by original copyright.

