References
[1] Ministry of Health and Welfare (衛生福利部), “Child and youth mortality statistics (兒少死亡統計),” [Online]. Available: https://crc.sfaa.gov.tw/Statistics/Detail/11 (visited on 05/30/2023).
[2] East District Public Health Center, Taichung City (衛生所), “Home environment safety against accidental injuries in young children (幼兒事故傷害居家環境安全),” [Online]. Available: https://www.eastphc.taichung.gov.tw/media/454868/7103010285771.pdf (visited on 05/30/2023).
[3] M. Jutila, H. Rivas, P. Karhula, and S. Pantsar-Syväniemi, “Implementation of a wearable
sensor vest for the safety and well-being of children,” Procedia Computer Science,
vol. 32, pp. 888–893, 2014.
[4] Y. Nam and J. W. Park, “Child activity recognition based on cooperative fusion model of
a triaxial accelerometer and a barometric pressure sensor,” IEEE Journal of Biomedical
and Health Informatics, vol. 17, no. 2, pp. 420–426, 2013.
[5] A. Jatti, M. Kannan, R. Alisha, P. Vijayalakshmi, and S. Sinha, “Design and development
of an IoT based wearable device for the safety and security of women and girl children,”
in 2016 IEEE International Conference on Recent Trends in Electronics, Information &
Communication Technology (RTEICT), IEEE, 2016, pp. 1108–1112.
[6] H. Na, S. F. Qin, and D. Wright, “Young children's fall prevention based on computer
vision recognition,” in Proceedings of the 6th WSEAS International Conference on
Robotics, Control and Manufacturing Technology, 2006, pp. 193–198.
[7] Q. Nie, X. Wang, J. Wang, M. Wang, and Y. Liu, “A child caring robot for the dangerous
behavior detection based on the object recognition and human action recognition,”
in 2018 IEEE International Conference on Robotics and Biomimetics (ROBIO), IEEE,
2018, pp. 1921–1926.
[8] T. Nose, K. Kitamura, M. Oono, Y. Nishida, and M. Ohkura, “Data-driven child behavior
prediction system based on posture database for fall accident prevention in a daily living
space,” Journal of Ambient Intelligence and Humanized Computing, vol. 11, pp. 5845–
5855, 2020.
[9] D. Osokin, “Real-time 2D multi-person pose estimation on CPU: Lightweight OpenPose,”
arXiv preprint arXiv:1811.12004, 2018.
[10] F. Zhang, L. Cui, and H. Wang, “Research on children’s fall detection by characteristic
operator,” in Proceedings of the International Conference on Advances in Image Processing,
ser. ICAIP ’17, Bangkok, Thailand: Association for Computing Machinery, 2017.
[11] D. A. Reynolds, “Gaussian mixture models,” Encyclopedia of Biometrics, pp. 659–663,
2009.
[12] T. Joachims, “Making large-scale SVM learning practical,” Tech. Rep., 1998.
[13] A. Krizhevsky, I. Sutskever, and G. E. Hinton, “ImageNet classification with deep convolutional
neural networks,” in Advances in Neural Information Processing Systems, 2012,
pp. 1097–1105.
[14] J. Donahue, L. Anne Hendricks, S. Guadarrama, et al., “Long-term recurrent convolutional
networks for visual recognition and description,” in Proceedings of the IEEE
Conference on Computer Vision and Pattern Recognition, 2015, pp. 2625–2634.
[15] S. Hochreiter and J. Schmidhuber, “Long short-term memory,” Neural Computation,
vol. 9, no. 8, pp. 1735–1780, 1997.
[16] X. Shi, Z. Chen, H. Wang, D.-Y. Yeung, W.-K. Wong, and W.-C. Woo, “Convolutional
LSTM network: A machine learning approach for precipitation nowcasting,” in Advances
in Neural Information Processing Systems, vol. 28, 2015.
[17] D. Tran, L. Bourdev, R. Fergus, L. Torresani, and M. Paluri, “Learning spatiotemporal
features with 3D convolutional networks,” in Proceedings of the IEEE International
Conference on Computer Vision, 2015, pp. 4489–4497.
[18] C. Feichtenhofer, “X3D: Expanding architectures for efficient video recognition,” in Proceedings
of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2020,
pp. 203–213.
[19] K. He, X. Zhang, S. Ren, and J. Sun, “Deep residual learning for image recognition,” in
Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2016,
pp. 770–778.
[20] W. Kay, J. Carreira, K. Simonyan, et al., “The Kinetics human action video dataset,”
arXiv preprint arXiv:1705.06950, 2017.
[21] N. Lu, Y. Wu, L. Feng, and J. Song, “Deep learning for fall detection: Three-dimensional
CNN combined with LSTM on video kinematic data,” IEEE Journal of Biomedical and Health
Informatics, vol. 23, no. 1, pp. 314–323, 2018.
[22] N. A. Saidin and S. A. Shukor, “An analysis of Kinect-based human fall detection system,”
in 2020 IEEE 8th Conference on Systems, Process and Control (ICSPC), IEEE,
2020, pp. 220–224.
[23] I. Kareem, S. F. Ali, and A. Sheharyar, “Using skeleton based optimized residual neural
network architecture of deep learning for human fall detection,” in 2020 IEEE 23rd
International Multitopic Conference (INMIC), IEEE, 2020, pp. 1–5.
[24] S. Suzuki, Y. Amemiya, and M. Sato, “Enhancement of child gross-motor action recognition
by motional time-series images conversion,” in 2020 IEEE/SICE International
Symposium on System Integration (SII), IEEE, 2020, pp. 225–230.
[25] S. Suzuki, Y. Amemiya, and M. Sato, “Enhancement of gross-motor action recognition
for children by CNN with OpenPose,” in IECON 2019 - 45th Annual Conference of the IEEE
Industrial Electronics Society, IEEE, vol. 1, 2019, pp. 5382–5387.
[26] Y. Zhang, Y. Tian, P. Wu, and D. Chen, “Application of skeleton data and long short-term
memory in action recognition of children with autism spectrum disorder,” Sensors,
vol. 21, no. 2, p. 411, 2021.
[27] R.-D. Vatavu, “The dissimilarity-consensus approach to agreement analysis in gesture
elicitation studies,” in Proceedings of the 2019 CHI Conference on Human Factors in
Computing Systems, 2019, pp. 1–13.
[28] S. Rajagopalan, A. Dhall, and R. Goecke, “Self-stimulatory behaviours in the wild for
autism diagnosis,” in Proceedings of the IEEE International Conference on Computer
Vision Workshops, 2013, pp. 755–761.
[29] A. Aloba, G. Flores, J. Woodward, et al., “Kinder-Gator: The UF Kinect database of child
and adult motion,” in Eurographics (Short Papers), 2018, pp. 13–16.
[30] S. Mohottala, S. Abeygunawardana, P. Samarasinghe, D. Kasthurirathna, and C. Abhayaratne,
“2D pose estimation based child action recognition,” in TENCON 2022 - 2022
IEEE Region 10 Conference (TENCON), IEEE, 2022, pp. 1–7.
[31] J. Carreira, E. Noland, A. Banki-Horvath, C. Hillier, and A. Zisserman, “A short note
about Kinetics-600,” arXiv preprint arXiv:1808.01340, 2018.
[32] A. Turarova, A. Zhanatkyzy, Z. Telisheva, A. Sabyrov, and A. Sandygulova, “Child action
recognition in RGB and RGB-D data,” in Companion of the 2020 ACM/IEEE International
Conference on Human-Robot Interaction, 2020, pp. 491–492.
[33] G. Sciortino, G. M. Farinella, S. Battiato, M. Leo, and C. Distante, “On the estimation of
children's poses,” in Image Analysis and Processing-ICIAP 2017: 19th International
Conference, Catania, Italy, September 11-15, 2017, Proceedings, Part II, Springer,
2017, pp. 410–421.
[34] M. Andriluka, L. Pishchulin, P. Gehler, and B. Schiele, “2D human pose estimation: New
benchmark and state of the art analysis,” in IEEE Conference on Computer Vision and
Pattern Recognition (CVPR), Jun. 2014.
[35] L. Sigal, A. O. Balan, and M. J. Black, “HumanEva: Synchronized video and motion
capture dataset and baseline algorithm for evaluation of articulated human motion,” International
Journal of Computer Vision, vol. 87, no. 1-2, pp. 4–27, 2010.
[36] C. Ionescu, D. Papava, V. Olaru, and C. Sminchisescu, “Human3.6M: Large scale datasets
and predictive methods for 3D human sensing in natural environments,” IEEE Transactions
on Pattern Analysis and Machine Intelligence, vol. 36, no. 7, pp. 1325–1339, 2013.
[37] J. Deng, W. Dong, R. Socher, L.-J. Li, K. Li, and L. Fei-Fei, “ImageNet: A large-scale
hierarchical image database,” in 2009 IEEE Conference on Computer Vision and Pattern
Recognition, IEEE, 2009, pp. 248–255.
[38] C. Lugaresi, J. Tang, H. Nash, et al., “MediaPipe: A framework for building perception
pipelines,” arXiv preprint arXiv:1906.08172, 2019.