Master's and Doctoral Thesis 985403009: Detailed Record




Author: Chien-Hao Kuo (郭千豪)    Department: Department of Communication Engineering
Thesis Title: People Tracking and Behavior Recognition from Multiple Depth Cameras Using Skeleton Joints
(利用骨架資訊實現多重深度攝影機之人物追蹤與行為辨識)
  1. This electronic thesis is licensed for immediate open access.
  2. The open-access electronic full text is licensed only for personal, non-profit retrieval, reading, and printing for the purpose of academic research.
  3. Please comply with the relevant provisions of the Copyright Act of the Republic of China (Taiwan); do not reproduce, distribute, adapt, repost, or broadcast the work without authorization.

Abstract (Chinese): People tracking and behavior recognition play key roles in numerous fields such as entertainment, robotics, and surveillance systems. For people tracking and behavior recognition methods to be widely adopted, user convenience, ease of installation, and reasonable equipment cost are the main considerations. Traditionally, human motion has been captured with body-mounted sensors or wearable devices; however, the extra burden of wearing sensors all day, and the possibility that users simply do not wear them, make this approach inconvenient and unreliable.
Another approach is to track people and recognize human behavior from images captured by a camera. However, when a camera captures a 2-D image, a person may be occluded by other objects, the person's appearance in the 2-D image may correspond to many possible 3-D configurations, depth information is lost, and the results may be affected by changes in lighting.
This dissertation instead uses multiple depth cameras to overcome the limitations of single-camera image-based approaches, although occlusion still degrades the accuracy of such methods. To address these problems, we propose a pairwise trajectory matching scheme that uses tracking trajectories from multiple cameras for people tracking, fusing the trajectories with curve clustering. For behavior recognition, we propose a time-variant skeleton vector projection framework for multiple depth cameras that combines time-variant basis vectors with occlusion-based weighting element generation. Experimental results show that, compared with other state-of-the-art methods in a practical testing environment, the proposed method yields less tracking distortion and, while achieving comparable behavior recognition accuracy, reduces computational complexity more effectively.
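As a rough illustration of the people tracking pipeline described above (per-camera skeleton trajectories matched pairwise across cameras and then fused), the following Python sketch makes several simplifying assumptions: the trajectories are already time-aligned 2-D ground-plane paths, Hungarian assignment stands in for the dissertation's curve clustering, and matched trajectories are fused by simple averaging. All function names and thresholds are illustrative only.

```python
# Hypothetical sketch of pairwise trajectory matching across two depth cameras.
# Assumes each camera yields (T, 2) ground-plane trajectories in a shared frame.
import numpy as np
from scipy.optimize import linear_sum_assignment


def trajectory_distance(traj_a: np.ndarray, traj_b: np.ndarray) -> float:
    """Mean Euclidean distance between two time-aligned (T, 2) trajectories."""
    t = min(len(traj_a), len(traj_b))                 # align by the shorter trajectory
    return float(np.mean(np.linalg.norm(traj_a[:t] - traj_b[:t], axis=1)))


def match_and_fuse(cam1_trajs, cam2_trajs, max_dist=0.5):
    """Pairwise matching of trajectories from two cameras, then simple fusion."""
    cost = np.array([[trajectory_distance(a, b) for b in cam2_trajs]
                     for a in cam1_trajs])
    rows, cols = linear_sum_assignment(cost)          # optimal pairwise assignment
    fused = []
    for i, j in zip(rows, cols):
        if cost[i, j] <= max_dist:                    # accept only plausible matches
            t = min(len(cam1_trajs[i]), len(cam2_trajs[j]))
            fused.append((cam1_trajs[i][:t] + cam2_trajs[j][:t]) / 2.0)
    return fused


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    person = np.cumsum(rng.normal(size=(50, 2)) * 0.05, axis=0)   # one walking path
    cam1 = [person + rng.normal(scale=0.02, size=person.shape)]   # noisy view 1
    cam2 = [person + rng.normal(scale=0.02, size=person.shape)]   # noisy view 2
    print(len(match_and_fuse(cam1, cam2)), "fused trajectory")
```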
Abstract (English): People tracking and behavior recognition have emerged to play critical roles in numerous areas, including entertainment, robotics, and surveillance. For a people tracking and behavior recognition approach to be widely adopted, convenience to users, simplicity of installation, and reasonable equipment cost are the main factors to be considered. The conventional way of capturing human motion is to wear sensors; however, the extra burden of wearing sensors all of the time, and the possibility that sensors go unworn, make this approach unreliable.
Tracking people and recognizing human behavior from images obtained by a monocular camera is another option. However, when a 2-D picture of a scene is taken with a monocular camera, the appearance of a person in the 2-D image may correspond to many possible configurations in 3-D, the depth information is lost, and the results can be affected by lighting conditions. This dissertation therefore turns to multiple depth cameras to overcome the limitations of the monocular image-based approach, although occlusion still reduces the accuracy of such methods.
To address these problems, we propose a pairwise trajectory matching scheme from multiple cameras for people tracking, using curve clustering to fuse the tracking trajectories. For behavior recognition, a time-variant skeleton vector projection scheme using multiple infrared-based depth cameras is developed by combining the proposed time-variant basis vectors with the proposed occlusion-based weighting element generation. Experimental results show that the proposed method achieves less tracking distortion and superior behavior recognition accuracy with lower computational complexity than other state-of-the-art methods in practical testing environments.
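For the behavior recognition side, the sketch below shows, under loose assumptions, what projecting skeleton joints onto per-frame (time-variant) basis vectors with an occlusion-based weighting might look like. The joint indices used to build the basis, the root-relative normalization, and the use of per-joint tracking confidence as the occlusion weight are assumptions made for illustration, not the dissertation's exact formulation; the resulting per-frame features would then be fed to a behavior classifier.

```python
# Hypothetical sketch of a time-variant skeleton vector projection with a crude
# occlusion-based weighting. Joint layout and indices are assumed for illustration.
import numpy as np


def frame_basis(joints: np.ndarray) -> np.ndarray:
    """Build an orthonormal per-frame basis from torso joints (joints: (J, 3))."""
    spine, left_hip, right_hip = joints[0], joints[1], joints[2]  # assumed indices
    x = right_hip - left_hip                     # lateral axis
    y = joints[3] - spine                        # spine-to-shoulder-center axis (assumed)
    x /= np.linalg.norm(x) + 1e-8
    y -= x * np.dot(y, x)                        # Gram-Schmidt orthogonalization
    y /= np.linalg.norm(y) + 1e-8
    z = np.cross(x, y)                           # completes a right-handed basis
    return np.stack([x, y, z])                   # (3, 3) basis matrix


def project_sequence(seq: np.ndarray, confidence: np.ndarray) -> np.ndarray:
    """Project each frame's joints onto that frame's basis and weight by
    per-joint tracking confidence (a stand-in for occlusion-based weighting).

    seq:        (T, J, 3) joint positions
    confidence: (T, J)    values in [0, 1], low when a joint is occluded
    """
    feats = []
    for joints, conf in zip(seq, confidence):
        rel = joints - joints[0]                 # positions relative to the root joint
        proj = rel @ frame_basis(joints).T       # coordinates in the per-frame basis
        feats.append((proj * conf[:, None]).ravel())
    return np.asarray(feats)                     # (T, 3J) feature matrix for a classifier


if __name__ == "__main__":
    rng = np.random.default_rng(1)
    seq = rng.normal(size=(30, 20, 3))           # 30 frames, 20 joints (mock data)
    conf = rng.uniform(0.5, 1.0, size=(30, 20))  # mock occlusion confidences
    print(project_sequence(seq, conf).shape)     # -> (30, 60)
```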
Keywords (Chinese): ★ Behavior recognition
★ People tracking
★ Skeleton joints
★ Multiple cameras
★ Depth camera
★ Surveillance system
Keywords (English): ★ Behavior recognition
★ People tracking
★ Skeleton
★ Joint
★ Multiple depth cameras
★ Kinect
★ Surveillance
Table of Contents
摘要 (Abstract in Chinese)
Abstract
Contents
List of Figures
List of Tables
1 Introduction
  1.1 Background and Motivation
  1.2 Research Objective
  1.3 Organization
2 Related Works
  2.1 People Tracking
  2.2 Behavior Recognition
3 People Tracking Using Pairwise Trajectory Matching Scheme
  3.1 Hand-Gesture-Triggered Geometry Calibration
    3.1.1 Temporal Synchronization
    3.1.2 Relative Hand Joint Issue among Multiple Cameras
  3.2 Proposed People Tracking System
    3.2.1 Interleaving-Based Skeletal Joints Obtaining with Valid Skeleton Determination
      3.2.1.1 Interleaving-Based Skeleton Obtaining
      3.2.1.2 Moving Average Based Valid Skeleton Determination
    3.2.2 Multi-Trajectory Matching Using Occlusion Management
      3.2.2.1 Multiple Cameras Projection
      3.2.2.2 Occlusion Detection: Multiple Points in One Region
      3.2.2.3 Kalman Filter for Multiple-Object Tracking
      3.2.2.4 Pairwise Trajectory Matching
  3.3 Experimental Results
    3.3.1 Calibration
    3.3.2 Tracking
      3.3.2.1 Performance Comparisons
      3.3.2.2 Subjective Evaluation
      3.3.2.3 Objective Evaluation
      3.3.2.4 Extensive Tests
    3.3.3 Time Complexity
4 Behavior Recognition Based on a Time-Variant Skeleton Vector Projection
  4.1 Relative Joint Position with a Normalization Process
  4.2 Basis Vectors Generation
  4.3 Projection of Joint Vector Onto the Basis Vectors
  4.4 Behavior Classifier Training
    4.4.1 Occlusion-Based Weighting Element Generation
  4.5 Behavior Recognition
  4.6 Experimental Results
    4.6.1 Quantitative Evaluation
    4.6.2 Qualitative Evaluation
    4.6.3 Complexity Comparison
5 Conclusions
6 Bibliography
7 Publications
Advisors: Pao-Chi Chang (張寶基), Shih-Wei Sun (孫士韋)    Date of Approval: 2018-07-25