NCU Institutional Repository: Item 987654321/84106


    Please use this identifier to cite or link to this item: http://ir.lib.ncu.edu.tw/handle/987654321/84106


    Title: Hand Feature Extraction and Gesture Recognition for Taiwan Sign Language by Using Synthetic Datasets
    Authors: 陳宥榕;Chen, Yu-Jung
    Contributors: Department of Computer Science and Information Engineering
    Keywords: synthetic datasets;Taiwanese sign language;feature extraction;gesture recognition
    Date: 2020-08-10
    Issue Date: 2020-09-02 18:05:09 (UTC+8)
    Publisher: National Central University
    Abstract: This study performs hand feature extraction and hand-shape recognition on Taiwan Sign Language videos. We first build a training dataset with Unity3D by compositing a 3D hand model into natural scenes, scenes containing people, and solid-color backgrounds, which lets us generate a large amount of high-quality training data (hand images, hand contours, and hand joint points) quickly. Using synthetic data reduces the burden of manual labeling and the errors it can introduce. We discuss how to make the synthesized images resemble real footage more closely, producing varied images by adjusting background complexity, diversifying skin tones, and adding motion blur to improve model robustness. We then compare the completeness of the bounding boxes and semantic segmentation produced by a ResNeSt+Detectron2 model against the heatmaps produced by a modified EfficientDet model, and ultimately adopt bounding boxes as the feature-extraction output for hand-shape recognition, using them to crop hand images from the sign-language videos. We likewise build a training dataset with Unity3D, using the 3D hand model to pose several basic Taiwan Sign Language hand shapes, and classify them with ResNeSt. Experimental results show that the large, high-quality synthetic dataset produced in this study can be effectively applied to hand feature extraction and to hand-shape recognition for Taiwan Sign Language.
    Hearing-impaired people rely on sign languages to communicate with each other, but may have trouble interacting with people who do not understand sign languages. Since sign languages are visual languages, computer-vision approaches to recognizing them are generally considered a feasible way to bridge the gap. However, sign-language recognition is a complex task that requires classifying hand shapes, hand motions, and facial expressions. Detecting and classifying hand gestures is the natural first step because the hands are the most important elements. This research therefore focuses on hand feature extraction and gesture recognition for Taiwan Sign Language (TSL) videos.
    First, we established a synthetic dataset using Unity3D. The advantage of synthetic data is that it reduces the effort of manual labeling and avoids the errors labeling can introduce, so a large dataset with high-quality labels can be obtained. The dataset is generated by varying hand shapes, colors, and orientations; the background images are also varied to increase the robustness of the model, and motion blur is added to make the synthetic data look closer to real footage. We compare three feature-extraction outputs: the bounding boxes and the semantic segmentation generated by ResNeSt+Detectron2, and the heatmap generated by EfficientDet. The bounding boxes are selected for the subsequent gesture recognition. We also use Unity3D to create several basic TSL hand gestures, and then use ResNeSt for classification and recognition.
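    To make the dataset-generation step concrete, the following is a minimal sketch, assuming an OpenCV/NumPy toolchain, of compositing a rendered hand cutout onto a varied background and adding motion blur. The thesis builds its data inside Unity3D from 3D hand models, so this snippet only illustrates the same augmentation idea; the file names and blur size are hypothetical.

        # Illustration only: the thesis renders and composites in Unity3D; this shows
        # the equivalent idea (hand cutout + varied background + motion blur) in Python.
        import cv2
        import numpy as np

        def motion_blur(img, ksize=9):
            """Apply a simple horizontal motion-blur kernel of width ksize."""
            kernel = np.zeros((ksize, ksize), dtype=np.float32)
            kernel[ksize // 2, :] = 1.0 / ksize
            return cv2.filter2D(img, -1, kernel)

        def composite(hand_rgba, background):
            """Alpha-blend an RGBA hand render onto a (larger) background at a random spot.

            Returns the composited image and the hand bounding box (x, y, w, h),
            which doubles as a free ground-truth label for detector training.
            """
            bh, bw = background.shape[:2]
            hh, hw = hand_rgba.shape[:2]
            x = np.random.randint(0, bw - hw)
            y = np.random.randint(0, bh - hh)
            alpha = hand_rgba[:, :, 3:4].astype(np.float32) / 255.0
            roi = background[y:y + hh, x:x + hw].astype(np.float32)
            blended = alpha * hand_rgba[:, :, :3] + (1.0 - alpha) * roi
            out = background.copy()
            out[y:y + hh, x:x + hw] = blended.astype(np.uint8)
            return out, (x, y, hw, hh)

        # Hypothetical usage:
        # hand = cv2.imread("hand_render.png", cv2.IMREAD_UNCHANGED)  # RGBA render of a 3D hand
        # bg = cv2.imread("random_scene.jpg")
        # img, bbox = composite(hand, bg)
        # img = motion_blur(img, ksize=5)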
    Experimental results demonstrate that the synthetic dataset can effectively help train suitable models for hand feature extraction and gesture recognition in TSL videos.
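    As a rough illustration of the recognition pipeline the abstract describes (detect the hand, crop it with the bounding box, classify the hand shape), here is a sketch that uses Detectron2's standard inference API and a ResNeSt classifier loaded through timm. The config file, weight paths, class count, input normalization, and the use of timm are assumptions for illustration, not the thesis's actual artifacts.

        # Sketch of the detect-crop-classify pipeline; the config/weight paths,
        # class count, and the timm ResNeSt variant are hypothetical placeholders.
        import cv2
        import timm
        import torch
        from detectron2.config import get_cfg
        from detectron2.engine import DefaultPredictor

        # Hand detector via Detectron2's inference API (placeholder files).
        cfg = get_cfg()
        cfg.merge_from_file("hand_detector_config.yaml")
        cfg.MODEL.WEIGHTS = "hand_detector.pth"
        cfg.MODEL.ROI_HEADS.SCORE_THRESH_TEST = 0.7
        detector = DefaultPredictor(cfg)

        # Hand-shape classifier: a ResNeSt backbone loaded via timm (assumed choice).
        NUM_SHAPES = 20  # hypothetical number of basic TSL hand shapes
        classifier = timm.create_model("resnest50d", pretrained=False, num_classes=NUM_SHAPES)
        classifier.eval()

        def recognize_hand_shapes(frame):
            """Detect hands in a BGR frame, crop each box, and classify its shape."""
            outputs = detector(frame)
            boxes = outputs["instances"].pred_boxes.tensor.cpu().numpy()
            predictions = []
            for x1, y1, x2, y2 in boxes:
                crop = frame[int(y1):int(y2), int(x1):int(x2)]
                crop = cv2.resize(crop, (224, 224))
                crop = cv2.cvtColor(crop, cv2.COLOR_BGR2RGB).astype("float32") / 255.0
                tensor = torch.from_numpy(crop).permute(2, 0, 1).unsqueeze(0)
                with torch.no_grad():
                    predictions.append(int(classifier(tensor).argmax(dim=1)))
            return boxes, predictions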
    Appears in Collections:[Graduate Institute of Computer Science and Information Engineering] Electronic Thesis & Dissertation

    Files in This Item:

    File         Description    Size    Format
    index.html                  0Kb     HTML


    All items in NCUIR are protected by copyright, with all rights reserved.

