NCU Institutional Repository (theses, past exams, journal articles, and research projects): Item 987654321/95713


    Please use this permanent URL to cite or link to this item: http://ir.lib.ncu.edu.tw/handle/987654321/95713


    Title: Application of Deep Learning Based Dynamic Gesture Recognition to Contactless Ordering System (基於深度學習之動態手勢辨識應用於非接觸式點餐系統)
    Author: Hsieh, Yu-Lun (謝友倫)
    Contributors: Department of Electrical Engineering
    Keywords: Deep Learning; Image Recognition; Dynamic Gestures; Real-Time; Contactless
    Date: 2024-07-25
    Upload time: 2024-10-09 17:11:29 (UTC+8)
    Publisher: National Central University
    Abstract: In recent years, the COVID-19 pandemic has fundamentally transformed people's lifestyles and interaction modes worldwide. To ensure public health safety, the demand for contactless technologies has surged. Gesture recognition, as an intuitive form of human-computer interaction, has become increasingly important. By applying deep learning-based image recognition with readily available web cameras, and achieving sufficient accuracy and efficiency in recognizing dynamic gestures, contactless interaction becomes practical, reducing physical contact and effectively lowering the risk of virus transmission.

    This study uses an RGB web camera together with MediaPipe Hands for hand detection and static gesture recognition, and employs a Decouple + Recouple deep learning network to learn 27 defined dynamic gestures. Human action recognition datasets are used for model pre-training, and we compare performance across different dataset sizes and model configurations. Finally, we build a self-service ordering interface and integrate it into a real-time scenario to simulate an actual ordering workflow, yielding a contactless system in which the control function mapped to each gesture can be customized.
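The real-time flow described above (webcam frames in, a dynamic-gesture label out, which then drives an ordering-interface action) can be sketched as a sliding-window pipeline. This is a minimal illustrative sketch, not the thesis's actual code: the names `CLIP_LEN`, `VOTE_WINDOW`, `classify_clip`, and `run_pipeline` are assumptions, and the real system would call MediaPipe Hands and the trained Decouple + Recouple network where the placeholder classifier sits.

```python
from collections import Counter, deque

# Assumed parameters for illustration only; the thesis does not publish these.
CLIP_LEN = 16     # frames per clip fed to the dynamic-gesture classifier
VOTE_WINDOW = 3   # recent predictions combined by majority vote

def run_pipeline(frames, classify_clip):
    """Slide a CLIP_LEN-frame window over the stream; once the buffer is
    full, classify each clip and smooth the labels by majority voting."""
    buffer = deque(maxlen=CLIP_LEN)   # most recent frames
    recent = deque(maxlen=VOTE_WINDOW)  # most recent raw predictions
    emitted = []
    for frame in frames:
        buffer.append(frame)
        if len(buffer) == CLIP_LEN:
            recent.append(classify_clip(list(buffer)))
            # Majority vote suppresses isolated single-clip misclassifications,
            # a common stabilizer in real-time gesture recognition.
            label, _ = Counter(recent).most_common(1)[0]
            emitted.append(label)
    return emitted

# Usage with a dummy stand-in classifier (a real one would run the network):
labels = run_pipeline(list(range(20)), lambda clip: "swipe_left")
```

The voting window is the design lever here: a larger `VOTE_WINDOW` gives steadier labels at the cost of response latency, which matters when the end-to-end budget is a fraction of a second per operation.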

    For hand detection, the system achieves an average detection confidence of 99%. For dynamic gesture recognition, it attains an overall average accuracy above 95% and an average per-gesture F1-Score of 95%; on a very small custom dataset it still exceeds 93% overall average accuracy. In real-time recognition, the average execution time per operation is approximately 0.4 s, with a gesture prediction time of 0.27 s, and a correct recognition rate of 94.07%, demonstrating high stability, accuracy, and recognition speed, which makes dynamic gesture recognition practical for contactless applications.
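The per-gesture F1-Score cited above combines precision and recall per class. As a hedged sketch of how such a metric is typically computed from true and predicted gesture labels (the helper `per_class_f1` is illustrative, not the thesis's evaluation code):

```python
def per_class_f1(y_true, y_pred, classes):
    """Compute F1 = 2PR / (P + R) for each gesture class, plus the macro
    average over classes (the usual way a per-gesture average is reported)."""
    scores = {}
    for c in classes:
        tp = sum(1 for t, p in zip(y_true, y_pred) if t == c and p == c)
        fp = sum(1 for t, p in zip(y_true, y_pred) if t != c and p == c)
        fn = sum(1 for t, p in zip(y_true, y_pred) if t == c and p != c)
        precision = tp / (tp + fp) if tp + fp else 0.0
        recall = tp / (tp + fn) if tp + fn else 0.0
        scores[c] = (2 * precision * recall / (precision + recall)
                     if precision + recall else 0.0)
    macro = sum(scores.values()) / len(scores)
    return scores, macro

# Usage on a toy two-gesture example:
scores, macro = per_class_f1(["a", "a", "b", "b"], ["a", "b", "b", "b"], ["a", "b"])
```

In practice a library routine (e.g. scikit-learn's `f1_score` with `average="macro"`) would replace this hand-rolled version; the sketch only makes the arithmetic behind the reported 95% figure explicit.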
    Appears in Collections: [Graduate Institute of Electrical Engineering] Theses & Dissertations

    Files in This Item:

    File: index.html | Size: 0Kb | Format: HTML | Views: 42 | View/Open


    All items in NCUIR are protected by copyright, with all rights reserved.
