    Please use this identifier to cite or link to this item: http://ir.lib.ncu.edu.tw/handle/987654321/93052


    Title: 基於3D全身人體追蹤及虛擬試衣之手語展示系統;Sign Language Display System Based on 3D Body Tracking and Virtual Try-on
    Authors: 李元熙;LI-YUAN-SI
    Contributors: 資訊工程學系;Department of Computer Science and Information Engineering
    Keywords: 虛擬試衣;人體建模;手語;virtual try-on;human body modeling;sign language
    Date: 2023-07-12
    Issue Date: 2024-09-19 16:39:55 (UTC+8)
    Publisher: 國立中央大學;National Central University
    Abstract: Sign language is a form of visual communication that relies on a combination of hand gestures, facial expressions, and body language to convey meaning. It is used daily by millions of deaf or hard-of-hearing individuals worldwide, as well as by those who communicate with them. However, despite its importance, sign language recognition and translation remain challenging tasks because of the complexity and variability of sign language.

    In recent years, computer vision techniques have been increasingly applied to sign language recognition and translation, with promising results. In this work, we introduce a sign language display system based on three-dimensional body modeling [1] and virtual try-on [2]. Our approach uses body mesh estimation to generate a 3D human model of the signer, which is then used as input to a multi-garment network [2] to simulate the appearance of clothing on the signer.
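
    As a rough sketch of the two-stage data flow just described (an illustration only, not code from the thesis), the pipeline can be viewed as a mesh-estimation step followed by a dressing step. BodyMesh, estimate_body_mesh, and dress_mesh below are hypothetical names introduced solely for this sketch:

        from dataclasses import dataclass
        import numpy as np

        @dataclass
        class BodyMesh:
            """3D human model recovered from one video frame."""
            pose: np.ndarray      # per-joint rotations of the signer
            shape: np.ndarray     # body shape coefficients
            vertices: np.ndarray  # mesh vertices, one (x, y, z) row per vertex

        def estimate_body_mesh(frame: np.ndarray) -> BodyMesh:
            """Hypothetical wrapper around the body mesh estimator (stage 1)."""
            raise NotImplementedError

        def dress_mesh(mesh: BodyMesh, garment_id: str) -> np.ndarray:
            """Hypothetical wrapper around the multi-garment / virtual try-on
            network (stage 2); returns a rendered image of the dressed model."""
            raise NotImplementedError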

    We collected a dataset of 100 sign language videos, each featuring a different signer performing a range of signs. To process these videos, we first use YOLOv5 [17] to crop out the signer, which provides a cleaner input for human mesh estimation. We then use a body mesh estimation algorithm designed to improve the accuracy of wrist rotation to extract the signer's body model from each video, and apply a virtual try-on method to simulate different types of clothing on the signer. The result is a virtual human model whose pose and shape match those of the original signer, wearing clothing selected from a garment dataset. We combine these models frame by frame to generate a video in which the virtual human model, dressed in virtual clothing, performs the sign language.
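
    The per-frame loop described above could look roughly like the following sketch, which reuses the hypothetical estimate_body_mesh and dress_mesh wrappers from the previous snippet. The YOLOv5 detector is loaded from the public Ultralytics hub; the file names, garment id, and frame rate are placeholders rather than values from the thesis.

        import cv2
        import torch

        # YOLOv5 loaded from the public Ultralytics hub; "yolov5s" is one choice
        # of weights and is not necessarily the variant used in the thesis.
        detector = torch.hub.load("ultralytics/yolov5", "yolov5s")

        def crop_signer(frame):
            """Detect people with YOLOv5 and return the largest person crop,
            assuming the largest detection is the signer."""
            results = detector(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
            dets = results.xyxy[0]            # columns: x1, y1, x2, y2, conf, cls
            people = dets[dets[:, 5] == 0]    # COCO class 0 = person
            if len(people) == 0:
                return None
            areas = (people[:, 2] - people[:, 0]) * (people[:, 3] - people[:, 1])
            x1, y1, x2, y2 = people[areas.argmax(), :4].int().tolist()
            return frame[y1:y2, x1:x2]

        cap = cv2.VideoCapture("signer.mp4")  # placeholder input path
        writer = None
        while True:
            ok, frame = cap.read()
            if not ok:
                break
            crop = crop_signer(frame)
            if crop is None:
                continue
            mesh = estimate_body_mesh(crop)                    # stage 1 (hypothetical wrapper)
            dressed = dress_mesh(mesh, garment_id="shirt_01")  # stage 2 (hypothetical wrapper)
            if writer is None:                # open the output video on the first frame
                h, w = dressed.shape[:2]
                writer = cv2.VideoWriter("virtual_signer.mp4",
                                         cv2.VideoWriter_fourcc(*"mp4v"), 30, (w, h))
            # VideoWriter expects BGR frames; convert first if the renderer outputs RGB.
            writer.write(dressed)
        cap.release()
        if writer is not None:
            writer.release()
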
    Appears in Collections: [Graduate Institute of Computer Science and Information Engineering] Electronic Thesis & Dissertation

    Files in This Item:

    File          Description    Size    Format
    index.html                   0Kb     HTML


    All items in NCUIR are protected by copyright, with all rights reserved.
