Abstract: | Gender classification has been widely developed in recent years. Applying it in daily life, e.g. in intelligent surveillance systems, personalized robots, and customer or pedestrian statistics systems, would make life more convenient and safe. Gender classification research has long been based on faces, voices, gaits, and so on; among these, gait-based gender classification is efficient and feasible, performing well at a 90° side view with ordinary clothing. However, gait images change with view angle, clothing variation, and carried bags, and the gait cycle itself is difficult to obtain, so the recognition rate drops sharply. This thesis therefore proposes a gender classification framework based on the Gait Energy Image (GEI): local texture features are extracted from the GEI, and a Support Vector Machine (SVM) is used as the classifier to improve the recognition rate.

First, the videos captured by a camera are processed with a gait-cycle estimation method to obtain the gait energy image (GEI). Texture features are then computed on the GEI with the Local Directional Pattern (LDP) or Local Binary Pattern (LBP) operator; the GEI is partitioned into blocks to capture local gender cues, and the statistical feature distributions of the blocks are concatenated into one feature vector representing male or female, which is finally classified by an SVM.

This thesis examines the framework's recognition rate in a surveillance setting under a single view angle, clothing variation, carried bags, and different numbers of training samples, and also trains at a single view angle while testing at other angles to study view variation. The experimental results show that the framework is stable and effective for gender classification.

Recently, gender classification has been widely deployed in commercial systems in our daily life. For example, a statistical module that collects consumers' genders and ages can be embedded in an advertisement system, and an intelligent surveillance algorithm can analyze human gender and activities. Many gender classification algorithms proposed in the literature use face, voice, or gait features. However, face and voice features require close contact with people, whereas human gait is a valid feature for gender classification at a long distance. The main challenge for gait-based gender classification is the camera view angle, because the human body is non-rigid. In addition to view angles, clothing, shoes, and carrying conditions also reduce performance. In this work, a gait energy image (GEI)-based algorithm is proposed for gender classification. The gait cycles are first estimated and the gait silhouettes are aligned from the input video sequences. After this pre-processing, GEIs are constructed from the aligned silhouettes and separated into several regions. Next, local texture features, the local binary pattern (LBP) or the local directional pattern (LDP), are extracted from the separated GEI regions, and a feature vector is formed by concatenating the histograms of the small regions.
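The GEI construction and block-wise texture-histogram steps described above can be sketched as follows. This is a minimal illustration, not the thesis's implementation: it uses a basic 8-neighbour LBP and a hypothetical 4x4 block grid, and the silhouette resolution, grid size, and LBP variant are all assumptions.

```python
import numpy as np

def gait_energy_image(silhouettes):
    """Average aligned binary silhouettes over one gait cycle to form a GEI."""
    return np.mean(np.stack(silhouettes).astype(np.float64), axis=0)

def lbp_codes(img):
    """Basic 8-neighbour local binary pattern codes for the interior pixels."""
    c = img[1:-1, 1:-1]
    neighbours = [img[0:-2, 0:-2], img[0:-2, 1:-1], img[0:-2, 2:],
                  img[1:-1, 2:],   img[2:, 2:],     img[2:, 1:-1],
                  img[2:, 0:-2],   img[1:-1, 0:-2]]
    codes = np.zeros_like(c, dtype=np.uint8)
    for bit, n in enumerate(neighbours):
        # Set the bit when the neighbour is at least as bright as the centre.
        codes |= ((n >= c).astype(np.uint8) << bit)
    return codes

def block_lbp_histogram(gei, grid=(4, 4)):
    """Split the GEI into blocks, build one LBP histogram per block, concatenate."""
    codes = lbp_codes(gei)
    h, w = codes.shape
    bh, bw = h // grid[0], w // grid[1]
    feats = []
    for i in range(grid[0]):
        for j in range(grid[1]):
            block = codes[i * bh:(i + 1) * bh, j * bw:(j + 1) * bw]
            hist, _ = np.histogram(block, bins=256, range=(0, 256))
            feats.append(hist / max(hist.sum(), 1))  # normalise each block
    return np.concatenate(feats)
```

Each block contributes a 256-bin histogram, so a 4x4 grid yields a 4096-dimensional feature vector; the per-block normalisation keeps the descriptor independent of block size.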
An SVM-based classifier is adopted for gender classification. To evaluate the proposed method, several conditions, including various view angles, clothing variations, and carried bags, are examined in the experiments. Beyond the case where the training and testing images are both at a 90-degree view angle, test images at various view angles are classified using training images at a 90-degree angle. Experimental results demonstrate that high recognition rates are achieved. |
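The final classification stage can be sketched with an off-the-shelf SVM. The synthetic features, the 0/1 labels, and the linear kernel below are illustrative assumptions; in the actual system the inputs would be the concatenated block-wise LBP/LDP histograms of each subject's GEI.

```python
import numpy as np
from sklearn.svm import SVC

# Synthetic stand-in features: two well-separated Gaussian clouds in place
# of real GEI texture descriptors (an assumption for illustration only).
rng = np.random.default_rng(1)
X_male = rng.normal(0.0, 1.0, (40, 64))
X_female = rng.normal(1.0, 1.0, (40, 64))
X = np.vstack([X_male, X_female])
y = np.array([0] * 40 + [1] * 40)  # 0 = male, 1 = female (labels are illustrative)

clf = SVC(kernel="linear")  # a linear kernel is a common first choice
clf.fit(X, y)
acc = clf.score(X, y)       # training accuracy on the synthetic data
```

In practice the kernel and its parameters would be chosen by cross-validation on the training set, with separate test sets for each view angle, clothing, and carrying condition.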