Master's/Doctoral Thesis 108523029: Detailed Record




Author: 侯建全 (Jian-Chiuan Hou)   Graduate Department: Department of Communication Engineering
Thesis Title: Camera Array Collaboration Based on Deep Reinforcement Learning: A Practical Case of Fall Detection in Smart Home Space
(Original title: 基於深度強化學習之多相機陣列協作機制:以智慧家庭跌倒偵測為實施例)
Related Theses
★ A feature-similarity-based search method for unstructured peer-to-peer networks
★ A mobile peer-to-peer network topology based on hierarchical cluster reputation
★ Design and implementation of a topical event monitoring mechanism for online RSS news streams
★ A density-aware routing method for delay-tolerant networks
★ A home multimedia gateway integrating P2P and UPnP content-sharing services: design and implementation
★ Design and implementation of a simple seamless streaming media playback service for home networks
★ Message delivery time analysis and high-performance routing algorithm design for delay-tolerant networks
★ An adjustable allocation method and performance measurement of downloader network resources in the BitTorrent P2P file system
★ A data dissemination mechanism exploiting message coding and reassembly conditions in delay-tolerant networks
★ A routing mechanism based on human mobility patterns in delay-tolerant networks
★ A packet delivery mechanism improving transmission performance with data aggregation in vehicular networks
★ A vehicle clustering method for intersection environments
★ A roadside-unit-assisted message broadcast mechanism for vehicular networks
★ Optimizing message delivery performance with static relay nodes (throwboxes) in delay-tolerant networks
★ A message delivery mechanism built on dynamic cluster awareness in delay-tolerant networks
★ Design and implementation of a cross-device multimedia convergence platform
  1. This electronic thesis is approved for immediate open access.
  2. The open-access electronic full text is licensed only for academic research purposes: personal, non-commercial retrieval, reading, and printing.
  3. Please comply with the relevant provisions of the Copyright Act of the Republic of China (Taiwan); do not reproduce, distribute, adapt, repost, or broadcast it without authorization.

Abstract (Chinese) With the aging of the global population and the shortage of medical care manpower, home health care has become an important livelihood issue. For the elderly or people living alone, falling at home is a common risk; in particular, if an elderly person falls and does not receive timely assistance, serious injury may result. In recent years, many fall alert systems and wearable fall alert devices have been proposed, among which camera-based (optical) fall event detection technologies and applications have attracted broad research attention. However, such fall detection methods face many limitations in home living environments, for example occlusion by obstacles and the camera's field of view and viewing angle.
This thesis therefore proposes a multi-camera collaborative fall detection mechanism based on deep reinforcement learning. Multiple cameras collaborate and make joint judgments to overcome the difficulties a single camera encounters in fall event detection, and deep reinforcement learning is used to learn dynamic groups for multi-camera collaboration, with the goals of improving the accuracy of the multi-camera system and shortening its decision time. We built a real experimental environment and implemented a system prototype, using fall detection as a practical case, and then compared the actual performance of three schemes: single-camera decision, multi-camera decision without dynamic groups, and multi-camera decision with dynamic groups.
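The abstract describes camera-based fall detection built on human pose. As a hedged illustration only, not the thesis's actual algorithm, the sketch below flags a fall when the torso vector recovered from two pose keypoints (as a pose estimator such as OpenPose could provide) leans past a threshold; the keypoint names, coordinates, and the 60-degree threshold are all invented for this example:

```python
import math

# Hypothetical single-camera fall cue (illustrative, not the thesis's method):
# flag a fall when the neck->hip torso vector is closer to horizontal than
# vertical. Such a cue is exactly what occlusion or a bad viewing angle can
# defeat, motivating multi-camera collaboration.

def torso_angle_deg(neck, hip):
    """Angle of the neck->hip vector from the vertical axis, in degrees.
    Image coordinates: y grows downward, so a standing torso has dy >> dx."""
    dx, dy = hip[0] - neck[0], hip[1] - neck[1]
    return abs(math.degrees(math.atan2(dx, dy)))

def is_fall(keypoints, angle_thresh=60.0):
    """keypoints: dict with 'neck' and 'hip' as (x, y) pixel coordinates.
    Returns True when the torso leans past the threshold."""
    return torso_angle_deg(keypoints["neck"], keypoints["hip"]) > angle_thresh

standing = {"neck": (100, 50), "hip": (102, 150)}  # near-vertical torso
fallen = {"neck": (60, 180), "hip": (170, 195)}    # near-horizontal torso
print(is_fall(standing), is_fall(fallen))  # → False True
```

A real system would smooth this decision over several frames and combine more keypoints; the point here is only the geometric idea.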
Abstract (English) With the aging of the global population and the shortage of medical care manpower, home health care has become an important livelihood issue. For the elderly or people living alone, falling at home is a common risk; in particular, if an elderly person falls and does not receive timely assistance, serious injury may result. In recent years, many fall warning systems and wearable fall warning devices have been proposed one after another. Among them, camera-based (optical) fall event detection technologies and applications have attracted broad research attention. However, such fall detection methods face many limitations in the home living environment, such as occlusion by obstacles and the camera's field of view and viewing angle.

Therefore, this paper proposes a multi-camera collaborative fall detection mechanism based on deep reinforcement learning. Image recognition, collaboration, and joint judgment among multiple cameras overcome the difficulties a single camera encounters in fall event detection, and deep reinforcement learning is used to learn dynamic groups for multi-camera collaboration, with the goals of improving the accuracy of the multi-camera system and speeding up its decision-making. We built a real experimental environment, implemented a system prototype with fall detection as the practical case, and compared the actual performance of three schemes: single-camera decision-making, multi-camera decision-making without dynamic groups, and multi-camera decision-making with dynamic groups.
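The abstract describes learning dynamic camera groups with deep reinforcement learning (Deep Q-Learning, per the table of contents). As a simplified, hedged sketch, the toy below substitutes tabular Q-learning for a Deep Q-Network; the four cameras, two room zones, candidate groups, and per-group detection accuracies are all invented for illustration and are not taken from the thesis:

```python
import random

# Illustrative only: learn which camera group to activate for each room
# zone. Reward 1 when the chosen group detects the (simulated) fall.
random.seed(0)
ACTIONS = [(0, 1), (1, 2), (2, 3), (0, 3)]  # candidate camera groups
STATES = ["zoneA", "zoneB"]                 # where the person is

# Made-up probability that a group detects a fall in each zone.
GROUP_ACCURACY = {
    (0, 1): {"zoneA": 0.9, "zoneB": 0.3},
    (1, 2): {"zoneA": 0.5, "zoneB": 0.6},
    (2, 3): {"zoneA": 0.2, "zoneB": 0.9},
    (0, 3): {"zoneA": 0.6, "zoneB": 0.5},
}

Q = {(s, a): 0.0 for s in STATES for a in ACTIONS}
alpha, gamma, eps = 0.1, 0.9, 0.2  # learning rate, discount, exploration

for episode in range(5000):
    s = random.choice(STATES)
    # epsilon-greedy group selection
    if random.random() < eps:
        a = random.choice(ACTIONS)
    else:
        a = max(ACTIONS, key=lambda g: Q[(s, g)])
    r = 1.0 if random.random() < GROUP_ACCURACY[a][s] else 0.0
    s2 = random.choice(STATES)  # person moves to a random next zone
    best_next = max(Q[(s2, g)] for g in ACTIONS)
    # standard Q-learning update
    Q[(s, a)] += alpha * (r + gamma * best_next - Q[(s, a)])

best = {s: max(ACTIONS, key=lambda g: Q[(s, g)]) for s in STATES}
print(best)  # learned best camera group per zone
```

In the thesis's setting the table would be replaced by a neural network over richer state inputs; this sketch only shows the group-selection idea.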
Keywords (Chinese) ★ Internet of Things
★ Reinforcement Learning
Keywords (English)
Table of Contents
Abstract (Chinese)
Abstract (English)
List of Figures
List of Tables
1 Introduction
1.1 Preface
1.2 Research Objectives
2 Background and Literature Review
2.1 IoT Edge Computing Combined with Deep Learning
2.1.1 Internet of Things
2.1.2 Deep Learning
2.2 Reinforcement Learning
2.2.1 Background
2.2.2 Reinforcement Learning
2.2.3 Categories of Reinforcement Learning
2.3 Fall Detection
2.3.1 Non-Computer-Vision Approaches
2.3.2 Computer-Vision Approaches
2.4 Multi-Camera Collaboration
2.4.1 Single-Camera Decision
2.4.2 Multi-Camera Decision
3 Methodology
3.1 System Architecture
3.2 Fall Detection
3.2.1 Human Pose Estimation
3.2.2 Fall Detection
3.3 Group Formation
3.3.1 Reinforcement Learning
3.3.2 Q-Learning
3.3.3 Deep Q-Learning
3.4 Multi-Camera Collaboration
4 Experimental Results
4.1 Experimental Environment
4.2 Experimental Design
4.2.1 Environmental Input Data Collection
4.2.2 Reinforcement Learning Design
4.2.3 Experimental Parameter Design
4.3 Experimental Results
4.3.1 Fall Detection
4.3.2 Group Formation
4.3.3 Accuracy of Different Solutions
5 Conclusion and Future Work
References
References
[1] Z. Zhou, X. Chen, E. Li, L. Zeng, K. Luo, and J. Zhang, “Edge intelligence: Paving the last mile of artificial intelligence with edge computing,” Proceedings of the IEEE, vol. 107, no. 8, pp. 1738–1762, 2019.
[2] S. C. Agrawal, R. K. Tripathi, and A. S. Jalal, “Human-fall detection from an indoor video surveillance,” in 2017 8th International Conference on Computing, Communication and Networking Technologies (ICCCNT), 2017, pp. 1–5.
[3] N. Lu, Y. Wu, L. Feng, and J. Song, “Deep learning for fall detection: Three-dimensional CNN combined with LSTM on video kinematic data,” IEEE Journal of Biomedical and Health Informatics, vol. 23, no. 1, pp. 314–323, Jan 2019.
[4] W. Shi, J. Cao, Q. Zhang, Y. Li, and L. Xu, “Edge computing: Vision and challenges,” IEEE Internet of Things Journal, vol. 3, no. 5, pp. 637–646, Oct 2016.
[5] Ministry of Health and Welfare, Taiwan. (2019) Statistics of accidents by the Health Promotion Administration.
[6] T. Simon, H. Joo, I. Matthews, and Y. Sheikh, “Hand keypoint detection in single images using multiview bootstrapping,” in IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Jul 2017, p. 4645–4653.
[7] J. Lin, W. Yu, N. Zhang, X. Yang, H. Zhang, and W. Zhao, “A survey on internet of things: Architecture, enabling technologies, security and privacy, and applications,” IEEE Internet of Things Journal, vol. 4, no. 5, pp. 1125–1142, Oct 2017.
[8] D. Svozil, V. Kvasnicka, and J. Pospichal, “Introduction to multi-layer feed-forward neural networks,” Chemometrics and Intelligent Laboratory Systems, vol. 39, no. 1, pp. 43–62, 1997.
[9] Cisco Global Cloud Index. (2018, Jun) Forecast and methodology, 2016–2021, white paper. [Online]. Available: https://www.cisco.com/c/en/us/solutions/collateral/service-provider/global-cloud-index-gci/white-paper-c11-738085.html
[10] B. Heintz, A. Chandra, and R. K. Sitaraman, “Optimizing grouped aggregation in geo-distributed streaming analytics,” in 24th International Symposium on High-Performance Parallel and Distributed Computing, 2015, pp. 133–144.
[11] L.-J. Lin, “Reinforcement learning for robots using neural networks,” Ph.D. dissertation, CMU-CS-93-103, Carnegie Mellon University, Pittsburgh, PA, United States, 1993. [Online]. Available: https://ezproxy.lib.ncu.edu.tw/login?url=https://www.proquest.com/dissertations-theses/reinforcement-learning-robots-using-neural/docview/303995826/se-2?accountid=12690
[12] L. Deng and D. Yu, “Deep learning: Methods and applications,” Foundations and Trends in Signal Processing, vol. 7, no. 3–4, pp. 197–387, 2014. [Online]. Available: http://dx.doi.org/10.1561/2000000039
[13] T. de Quadros, A. E. Lazzaretti, and F. K. Schneider, “A movement decomposition and machine learning-based fall detection system using wrist wearable device,” IEEE Sensors Journal, vol. 18, no. 12, pp. 5082–5089, June 2018.
[14] H. Li, A. Shrestha, H. Heidari, J. Le Kernec, and F. Fioranelli, “Bi-lstm network for multimodal continuous human activity recognition and fall detection,” IEEE Sensors Journal, vol. 20, no. 3, pp. 1191–1201, Feb 2020.
[15] L. Montanini, A. Del Campo, D. Perla, S. Spinsante, and E. Gambi, “A footwear-based methodology for fall detection,” IEEE Sensors Journal, vol. 18, no. 3, pp. 1233–1242, Feb 2018.
[16] J. Clemente, F. Li, M. Valero, and W. Song, “Smart seismic sensing for indoor fall detection, location, and notification,” IEEE Journal of Biomedical and Health Informatics, vol. 24, no. 2, pp. 524–532, Feb 2020.
[17] F. Muheidat, L. Tawalbeh, and H. Tyrer, “Context-aware, accurate, and real time fall detection system for elderly people,” in 2018 IEEE 12th International Conference on Semantic Computing (ICSC), Jan 2018, pp. 329–333.
[18] S. M. Adnan, A. Irtaza, S. Aziz, M. O. Ullah, A. Javed, and M. T. Mahmood, “Fall detection through acoustic local ternary patterns,” Applied Acoustics, vol. 140, pp. 296–300, 2018. [Online]. Available: https://www.sciencedirect.com/science/article/ pii/S0003682X18302834
[19] J.-L. Chua, Y. C. Chang, and W. K. Lim, “Visual based fall detection through human shape variation and head detection,” in IMPACT-2013, Nov 2013, pp. 61–65.
[20] X. Kong, Z. Meng, N. Nojiri, Y. Iwahori, L. Meng, and H. Tomiyama, “A HOG-SVM based fall detection IoT system for elderly persons using deep sensor,” Procedia Computer Science, vol. 147, pp. 276–282, 2019, 2018 International Conference on Identification, Information and Knowledge in the Internet of Things. [Online]. Available: https://www.sciencedirect.com/science/article/pii/S187705091930287X
[21] V. D. Nguyen, M. T. Le, A. D. Do, H. H. Duong, T. D. Thai, and D. H. Tran, “An efficient camera-based surveillance for fall detection of elderly people,” in 2014 9th IEEE Conference on Industrial Electronics and Applications, June 2014, pp. 994–997.
[22] X. Wang and K. Jia, “Human fall detection algorithm based on YOLOv3,” in 2020 IEEE 5th International Conference on Image, Vision and Computing (ICIVC), July 2020, pp. 50–54.
[23] Z. Cao, G. Hidalgo, T. Simon, S.-E. Wei, and Y. Sheikh, “Openpose: Realtime multiperson 2d pose estimation using part affinity fields,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 43, no. 1, pp. 172–186, Jan 2021.
[24] C. Zhang and Y. Tian, “Rgb-d camera-based daily living activity recognition,” in Journal of computer vision and image processing, vol. 2, no. 4, 2012, p. 12.
[25] Z. Huang, Y. Liu, Y. Fang, and B. K. P. Horn, “Video-based fall detection for seniors with human pose estimation,” in 2018 4th International Conference on Universal Village (UV), Oct 2018, pp. 1–4.
[26] C. Zhao, B. Fan, J. Hu, L. Tian, Z. Zhang, S. Li, and Q. Pan, “Pose estimation for multi-camera systems,” in 2017 IEEE International Conference on Unmanned Systems (ICUS), Oct 2017, pp. 533–538.
[27] T. Simon, H. Joo, I. Matthews, and Y. Sheikh, “Hand keypoint detection in single images using multiview bootstrapping,” in 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), July 2017, pp. 4645–4653.
[28] D. Rodriguez-Criado, P. Bachiller, P. Bustos, G. Vogiatzis, and L. J. Manso, “Multicamera torso pose estimation using graph neural networks,” in 2020 29th IEEE International Conference on Robot and Human Interactive Communication (RO-MAN), Aug 2020, pp. 827–832.
[29] G. Papandreou, T. Zhu, N. Kanazawa, A. Toshev, J. Tompson, C. Bregler, and K. Murphy, “Towards accurate multi-person pose estimation in the wild,” in 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), July 2017, pp. 3711–3719.
[30] V. Mnih, K. Kavukcuoglu, D. Silver, A. A. Rusu, J. Veness, M. G. Bellemare, A. Graves, M. Riedmiller, A. K. Fidjeland, G. Ostrovski, S. Petersen, C. Beattie, A. Sadik, I. Antonoglou, H. King, D. Kumaran, D. Wierstra, S. Legg, and D. Hassabis, “Human-level control through deep reinforcement learning,” Nature, vol. 518, no. 7540, pp. 529–533, Feb 2015.
[31] V. Mnih, K. Kavukcuoglu, D. Silver, A. Graves, I. Antonoglou, D. Wierstra, and M. A. Riedmiller, “Playing atari with deep reinforcement learning,” CoRR, vol. abs/1312.5602, 2013. [Online]. Available: http://arxiv.org/abs/1312.5602
[32] L. Martínez-Villaseñor, H. Ponce, J. Brieva, E. Moya-Albor, J. Núñez-Martínez, and C. Peñafort-Asturiano, “Up-fall detection dataset: A multimodal approach,” Sensors, vol. 19, no. 9, 2019. [Online]. Available: https://www.mdpi.com/ 1424-8220/19/9/1988
Advisor: 胡誌麟 (Chih-Lin Hu)   Date of Approval: 2021-8-31
