

Please use this permanent URL to cite or link to this item: http://ir.lib.ncu.edu.tw/handle/987654321/93612


Title: A Positioning Method for Dynamic Reference Frame with YOLOv4 Network and LiDAR Camera
Author: Li, Kuan-Hui
Contributor: Department of Mechanical Engineering
Keywords: LiDAR camera; YOLOv4; surgical navigation system; point cloud; object detection
Date: 2023-01-12
Upload time: 2024-09-19 17:21:35 (UTC+8)
Publisher: National Central University
Abstract: Surgical navigation systems have been widely adopted in clinical practice in recent years because they reliably help physicians complete surgical tasks more safely and precisely. However, these systems remain expensive on the medical market, which limits their use in surgical training.
Meanwhile, with the continued progress of 3D optical measurement technologies such as stereo vision, structured light, and time of flight (ToF), consumer-grade depth cameras and LiDAR (Light Detection and Ranging) cameras can acquire 3D information about a target efficiently and at relatively low cost. LiDAR, for example, computes the distance between the camera and the target from the time difference between emitting and receiving a laser beam. Because of its measurement accuracy and its ability to capture coordinate information over a large area at once, LiDAR is widely used in robotics and autonomous driving.
Because a LiDAR camera offers better measurement accuracy than the stereo-vision depth cameras that dominate the current market, this study develops a surgical positioning method that tracks a dynamic reference frame (DRF) using a LiDAR camera and the YOLOv4 (You Only Look Once, version 4) network. YOLOv4 detects the markers in the RGB image; each detection is projected onto the corresponding depth map, and the marker coordinates are fitted from the resulting point cloud. The fitted marker centers are then compared with the known DRF geometry to determine the position and orientation of the DRF. In the error experiment, coordinates were measured at 63 positions relative to the LiDAR camera, with 200 measurements at each position. The results show that the fitting error of the marker-sphere centers is within 3 mm, demonstrating that the proposed system achieves reliable accuracy and stability.
Appears in Collections: [Graduate Institute of Mechanical Engineering] Theses & Dissertations

Files in This Item:

File        Description   Size   Format   Views
index.html                0 KB   HTML     18       View/Open


All items in NCUIR are protected by the original copyright.
