Master's/Doctoral Thesis 105523009 Detailed Record




Name Ching Huang (黃竫)   Graduate Program Department of Communication Engineering
Title 採用ORB特徵的三百六十度視訊等距長方投影之行人追蹤
(Pedestrian Tracking using ORB Feature for Equirectangular Projection of 360-degree Videos)
Related Theses
★ Design of an Illumination-Adaptive Video Encoder for In-Vehicle Video ★ An Improved Head Tracking System Based on Particle Filtering
★ Fast Mode Decision Algorithms for Spatial and CGS Scalable Video Encoders ★ A Robust Active Appearance Model Search Algorithm for Facial Expression Recognition
★ Multi-View Video Coding Combining Epipolar-Geometry-Based Inter-View Prediction with Fast Inter Prediction Direction Decision ★ A Stereo Matching Algorithm for Homogeneous Regions Based on Improved Belief Propagation
★ Baseball Trajectory Recognition Based on a Hierarchical Boosting Algorithm ★ Fast Reference Frame Direction Decision for Multi-View Video Coding
★ Fast Mode Decision for CGS Scalable Encoders Based on Online Statistics ★ An Improved Active Shape Model Matching Algorithm for Lip Shape Recognition
★ Object Tracking on Mobile Platforms Based on Motion-Compensated Models ★ Occlusion Detection for Asymmetric Stereo Matching Based on Matching Cost
★ Momentum-Based Fast Mode Decision for Multi-View Video Coding ★ A Fast Local L-SVMs Ensemble Classifier for Place Image Recognition
★ Fast Depth Video Coding Mode Decision Oriented Toward High-Quality Synthesized Views ★ Multi-Object Tracking with a Moving Camera Based on Motion-Compensated Models
  1. The electronic full text of this thesis is approved for immediate open access.
  2. The open-access electronic full text is licensed only for personal, non-profit retrieval, reading, and printing for academic research purposes.
  3. Please comply with the Copyright Act of the Republic of China (Taiwan); do not reproduce, distribute, adapt, repost, or broadcast this work without authorization.

Abstract (Chinese) In computer-vision applications of 360-degree videos, object tracking can narrow down the region of interest to be processed. For the equirectangular projection (ERP) of 360-degree videos, image distortion substantially degrades the accuracy of most existing object tracking algorithms. This thesis therefore proposes, within a particle filter framework, to predict the target state in the sphere domain, so that the motion model does not need to account for the geometric distortions of the equirectangular projection. Since ERP images are panoramic, a modulus operation is applied so that predicted particle positions do not fall outside the ERP boundary. In addition, because the Oriented FAST and Rotated BRIEF (ORB) feature is invariant to geometric image distortions and has low computational complexity, this thesis adopts the ORB descriptor in the correction stage of the particle filter. Experimental results show that, compared with the fast L1 tracker, the proposed scheme is slightly less accurate in the few frames before the target reaches the ERP image boundary; however, when the target crosses the ERP boundary and reappears on the other side of the image, or when the target is occluded, the proposed scheme achieves higher tracking accuracy than the fast L1 tracker.
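The sphere-domain prediction and modulus wrap described in the abstract can be sketched as follows. This is a minimal illustrative sketch, not the thesis's implementation: the ERP resolution, the Gaussian noise model, and all function names are assumptions for illustration.

```python
import math
import random

def erp_to_sphere(x, y, width, height):
    """Map an ERP pixel (x, y) to spherical angles (longitude, latitude)."""
    lon = (x / width) * 2.0 * math.pi - math.pi   # longitude in [-pi, pi)
    lat = math.pi / 2.0 - (y / height) * math.pi  # latitude in [-pi/2, pi/2]
    return lon, lat

def sphere_to_erp(lon, lat, width, height):
    """Map spherical angles back to ERP pixels; the modulus wraps the
    horizontal coordinate so a predicted particle that crosses the ERP
    boundary reappears on the other side of the panorama."""
    x = ((lon + math.pi) / (2.0 * math.pi)) * width
    y = ((math.pi / 2.0 - lat) / math.pi) * height
    return x % width, min(max(y, 0.0), height - 1)  # wrap x, clamp y

def predict_particle(x, y, width, height, sigma=0.02):
    """Propagate one particle with Gaussian noise in the sphere domain,
    where the motion model need not account for ERP geometric distortion."""
    lon, lat = erp_to_sphere(x, y, width, height)
    lon += random.gauss(0.0, sigma)
    lat += random.gauss(0.0, sigma)
    return sphere_to_erp(lon, lat, width, height)
```

Because the noise is added to angles rather than pixels, a fixed sigma corresponds to the same angular displacement everywhere on the sphere, including near the poles where ERP pixels are heavily stretched.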
Abstract (English) For applications of 360-degree videos in computer vision, object tracking can reduce the region of interest to be processed. For the equirectangular projection of 360-degree videos, image distortion significantly decreases tracking accuracy. Hence, this thesis proposes to predict the target state in the sphere domain within a particle filter framework. Accordingly, the motion model does not need to consider geometric distortions in the equirectangular mapping projection. Because images of the equirectangular mapping projection are panoramic, a modulus operation keeps the positions of the predicted particles from falling outside the boundary of the equirectangular mapping projection. In addition, since the Oriented FAST and Rotated BRIEF (ORB) feature is invariant to geometric distortions and has low computational complexity, this thesis adopts the ORB descriptor in the correction stage of the particle filter. Experimental results show that the proposed scheme outperforms the fast L1 tracker when the target reappears on the other side of the equirectangular mapping projection after moving across the boundary, or when the target is occluded. However, the fast L1 tracker slightly outperforms the proposed scheme before the target reaches the boundary.
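The ORB-based correction stage can be illustrated with a small pure-Python sketch of binary-descriptor matching. In practice ORB descriptors would be extracted with a library such as OpenCV; the Hamming-distance threshold and the match-count weighting used here are illustrative assumptions, not the thesis's exact similarity measure.

```python
def hamming(d1: bytes, d2: bytes) -> int:
    """Hamming distance between two binary descriptors (ORB uses 256 bits)."""
    return sum(bin(a ^ b).count("1") for a, b in zip(d1, d2))

def particle_weight(target_descs, particle_descs, max_dist=64):
    """Score one particle: count target descriptors that find a close
    (Hamming distance <= max_dist) match among the descriptors extracted
    around the particle's predicted position."""
    matches = 0
    for d in target_descs:
        if particle_descs and min(hamming(d, p) for p in particle_descs) <= max_dist:
            matches += 1
    return matches

def normalize_weights(weights):
    """Normalize raw match counts into a probability distribution for resampling."""
    total = sum(weights)
    if total == 0:
        return [1.0 / len(weights)] * len(weights)  # uninformative fallback
    return [w / total for w in weights]
```

Because matching operates on binary descriptors with XOR and popcount, the correction stage stays cheap relative to gradient- or histogram-based similarity measures, which is consistent with the low-complexity motivation for choosing ORB.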
Keywords ★ pedestrian tracking
★ 360-degree videos
★ equirectangular mapping projection
★ particle filter
★ ORB feature
Table of Contents
Abstract (Chinese)
Abstract (English)
Acknowledgments
Table of Contents
List of Figures
List of Tables
Chapter 1 Introduction
  1.1 Preface
  1.2 Research Motivation
  1.3 Research Method
  1.4 Thesis Organization
Chapter 2 Overview of Object Tracking Based on Bayes Filters
  2.1 Bayes Filter
  2.2 Object Tracking with an Adaptive Color-Based Particle Filter
  2.3 Summary
Chapter 3 Current Object Tracking Techniques for Panorama Images
  3.1 Omnidirectional Image Tracking
  3.2 Image Tracking on Cube Mapping Projection
  3.3 Image Tracking on Equirectangular Mapping Projection
  3.4 Summary
Chapter 4 Proposed Pedestrian Tracking Scheme for Equirectangular Projection of 360-Degree Videos
  4.1 System Architecture
  4.2 Domain Transform and Prediction
  4.3 Computing Similarities by ORB Feature Matching in the Correction Stage
  4.4 State Estimation in Two Regions
  4.5 Summary
Chapter 5 Experimental Results and Discussion
  5.1 Experimental Parameters, Test Sequence Specifications, and Overview of the Fast L1 Tracker
  5.2 Experimental Results of the Tracking System
    5.2.1 Tracking Accuracy in Root Mean Square Error
    5.2.2 Tracking Accuracy in Overlap Ratio
    5.2.3 Time Complexity
  5.3 Summary
Chapter 6 Conclusion and Future Work
References
References
[1] F. Rameau, D. D. Sidibe, C. Demonceaux, and D. Fofi, “Visual tracking with omnidirectional cameras: an efficient approach,” Electronics Letters, Vol. 47, No. 21, pp. 1183-1184, October 2011.
[2] Y. Tang, Y. Li, S. S. Ge, J. Luo, and H. Ren, “Parameterized distortion-invariant feature for robust tracking in omnidirectional vision,” IEEE Trans. on Automation Science and Engineering, Vol. 13, No. 2, pp. 743-756, April 2016.
[3] G. Tong and J. Gu, “Locating objects in spherical panoramic images,” in Proceedings of IEEE International Conference on Robotics and Biomimetics, pp. 818-823, December 2011.
[4] B. S. Kim and J. S. Park, “Estimating deformation factors of planar patterns in spherical panoramic images,” Multimedia Systems, Vol. 23, DOI 10.1007/s00530-016-0513-x, pp. 607-625, April 2016.
[5] K. C. Liu, Y. T. Shen, and L. G. Chen, “Simple online and realtime tracking with spherical panoramic camera,” in Proceedings of IEEE International Conference on Consumer Electronics, pp. 1-6, Jan. 2018.
[6] J. Redmon and A. Farhadi, “YOLO9000: better, faster, stronger,” in Proceedings of IEEE Conference on Computer Vision and Pattern Recognition, pp. 6517-6525, July 2017. [Online]. Available: http://arxiv.org/abs/1612.08242.
[7] N. Wojke, A. Bewley, and D. Paulus, “Simple online and realtime tracking with a deep association metric,” in Proceedings of IEEE International Conference on Image Processing, pp. 3645-3649, September 2017. [Online]. Available: http://arxiv.org/abs/1703.07402.
[8] E. Rublee, V. Rabaud, K. Konolige, and G. Bradski, “ORB: An efficient alternative to SIFT or SURF,” in Proceedings of International Conference on Computer Vision, pp. 2564-2571, Nov. 2011.
[9] E. Karami, S. Prasad, and M. Shehata, “Image matching using SIFT, SURF, BRIEF and ORB: Performance comparison for distorted images,” in Proceedings of Newfoundland Electrical and Computer Engineering Conference, November 2015.
[10] H. Bay, T. Tuytelaars, and L. Van Gool, “SURF: Speeded up robust features,” in Proceedings of the European Conference on Computer Vision, pp. 404-417, May 2006.
[11] C. Bao, Y. Wu, H. Ling, and H. Ji, “Real time robust l1 tracker using accelerated proximal gradient approach,” in Proceedings of IEEE Conference on Computer Vision and Pattern Recognition, pp. 1830-1837, June 2012.
[12] A. Yilmaz, O. Javed, and M. Shah, “Object tracking: A survey,” ACM Computing Surveys, Vol. 38, No. 4, pp. 1-45, December 2006.
[13] N. J. Gordon, D. J. Salmond, and A. F. M. Smith, “Novel approach to nonlinear/non-Gaussian Bayesian state estimation,” IEE Proceedings F - Radar and Signal Processing, Vol. 140, pp. 107-113, April 1993.
[14] M. S. Arulampalam, S. Maskell, N. Gordon, and T. Clapp, “A tutorial on particle filters for online nonlinear/non-Gaussian Bayesian tracking,” IEEE Trans. on Signal Processing, Vol. 50, No. 2, pp. 174-188, Feb. 2002.
[15] K. Nummiaro, E. Koller-Meier, and L. V. Gool, “An adaptive color-based particle filter,” Image and Vision Computing, Vol. 21, No. 1, pp. 99-110, Jan. 2003.
[16] F. Wallhoff, M. Zobl, and G. Rigoll, “Face tracking in meeting room scenarios using omnidirectional views,” in Proceedings of IEEE Conf. on Pattern Recognition, Washington, DC, USA, Vol. 4, pp. 933–936, Aug. 2004.
[17] Z. Zhou, B. Niu, C. Ke, and W. Wu, “Static object tracking in road panoramic videos,” in Proceedings of IEEE International Symposium on Multimedia, pp. 57-64, Dec. 2010.
[18] M. Budagavi, J. Furton, G. Jin, A. Saxena, J. Wilkinson, and A. Dickerson, “360 degrees video coding using region adaptive smoothing,” in Proceedings of IEEE International Conference on Image Processing, pp. 750-754, September 2015.
[19] C. Chen, M.-Y. Liu, O. Tuzel, and J. Xiao, “R-CNN for Small Object Detection,” Springer International Publishing, pp. 214–230, March 2017.
[20] H. Hu, Y. Lin, M. Liu, H. Cheng, Y. Chang, and M. Sun, “Deep 360 pilot: Learning a deep agent for piloting through 360-degree sports video,” in Proceedings of IEEE Conference on Computer Vision and Pattern Recognition, pp. 1396-1405, July 2017.
[21] P. Torr and A. Zisserman, “Robust computation and parametrization of multiple view relations,” in Proceedings of International Conference on Computer Vision, pp. 727-732, Jan. 1998.
[22] Y. Ye, E. Alshina, and J. Boyce, “Algorithm descriptions of projection format conversion and video quality metrics in 360Lib Version 4,” Joint Video Exploration Team (JVET) of ITU-T SG 16 WP 3 and ISO/IEC JTC 1/SC 29/WG 11, 7th Meeting: Torino, IT, 13-21 July 2017.
[23] X. Corbillon, F. De Simone, and G. Simon, “360-degree video head movement dataset,” in Proceedings of ACM Multimedia Systems, pp. 1-4, June 2017.
[24] https://www.mettle.com/360vr-master-series-free-360-downloads-page.
[25] F. Duanmu, Y. Mao, S. Liu, S. Srinivasan and Y. Wang, “A subjective study of viewer navigation behaviors when watching 360-degree videos on computers,” in Proceedings of IEEE International Conference on Multimedia Expo, San Diego, California, USA, Feb. 2018.
[26] X. Mei and H. Ling, “Robust visual tracking using l1 minimization,” in Proceedings of IEEE International Conference on Computer Vision, pp. 1436-1443, Kyoto, Japan, Sep.-Oct. 2009.
[27] Y. Wu, J.W. Lim, and M.-H. Yang, “Online object tracking: A benchmark,” in Proceedings of IEEE International Conference on Computer Vision and Pattern Recognition, pp. 2411–2418, June 2013.
Advisor Chih-Wei Tang (唐之瑋)   Review Date 2018-07-26
