Master's/Doctoral Thesis 102523054 - Detailed Record




Author: 黃信凱 (Hsin-Kai Huang)    Department: Communication Engineering
Thesis Title: 深度摺積神經網路於混合式整體學習之影像檢索技術
(Mixture of Deep CNN-based Ensemble Model for Image Retrieval)
Related Theses
★ Satellite image super-resolution based on regional weighting
★ Adaptive high dynamic range image fusion extending the linear region of the exposure curve
★ H.264 video coding complexity control implemented on a RISC architecture
★ Articulation disorder assessment based on convolutional recurrent neural networks
★ Few-shot image segmentation with masks generated by a meta-learning classification weight transfer network
★ Implicit representation with attention for reconstructing 3D human body models from images
★ Object detection using adversarial graph neural networks
★ 3D face reconstruction based on weakly supervised learning of deformable models
★ Low-latency singing voice conversion on edge devices using unsupervised representation disentanglement
★ Human pose estimation from FMCW radar based on sequence-to-sequence models
★ Monocular semantic scene completion based on multi-level attention
★ Non-contact real-time vital sign monitoring with a single FMCW radar based on temporal convolutional networks
★ Video traffic description and management in video-on-demand networks
★ High-quality voice conversion based on linear predictive coding and pitch-synchronous frame processing
★ Tone adjustment based on formant variations extracted through speech resampling
★ Transmission efficiency optimization of real-time fine granularity scalable video over wireless LANs
  1. The author has agreed to make this electronic thesis openly available immediately.
  2. The open-access full text is licensed only for personal, non-profit academic research: searching, reading, and printing.
  3. Please comply with the Copyright Act of the Republic of China; do not reproduce, distribute, adapt, repost, or broadcast it without authorization.

Abstract (Chinese): With the rapid advance of the Internet and related technology, portable multimedia devices such as digital cameras, tablets, and smartphones have become ubiquitous, and digital image data grows explosively every day, ushering in the era of Big Data. Faced with large and complex image databases, managing them effectively and retrieving the images a user needs has become a key challenge for image retrieval. To learn the best image feature representations and obtain stable, accurate retrieval results, this work combines convolutional neural networks with different architectures to improve the learned feature descriptions.
This thesis proposes a Mixture of Ensemble Learning model built from two convolutional neural network (CNN) architectures. The image features learned by the two deep networks (AlexNet and NIN) are combined by weighted averaging to obtain a more representative feature description, so that correct retrieval results can be obtained quickly. Experimental results show that the CNN ensemble architecture effectively improves learning, yielding higher image classification accuracy than a single CNN. Applying the ensemble-learned image features to image retrieval achieves a mean average precision (MAP) of 0.867 on CIFAR-10 and 0.526 on CIFAR-100.
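As a rough illustration of the pipeline described above (weighted averaging of two CNN feature vectors followed by similarity-based retrieval), the sketch below uses NumPy placeholders. The equal 0.5 weight, the common 1024-dimensional feature length, and the function names are illustrative assumptions rather than the thesis's actual implementation.

```python
import numpy as np

def fuse_features(feat_a, feat_b, weight=0.5):
    """Weighted average of two CNN feature vectors (the 0.5 weight is an assumed value)."""
    # L2-normalize each descriptor so both networks contribute on a comparable scale.
    a = feat_a / np.linalg.norm(feat_a)
    b = feat_b / np.linalg.norm(feat_b)
    return weight * a + (1.0 - weight) * b

def retrieve(query_feat, db_feats, top_k=10):
    """Rank database images by cosine similarity to the query and return the top-k indices."""
    q = query_feat / np.linalg.norm(query_feat)
    db = db_feats / np.linalg.norm(db_feats, axis=1, keepdims=True)
    sims = db @ q                        # cosine similarity against every database image
    return np.argsort(-sims)[:top_k]     # most similar first

# Placeholder vectors standing in for AlexNet and NIN descriptors of each image;
# a common 1024-dimensional length is assumed so the weighted average is well defined.
rng = np.random.default_rng(0)
query = fuse_features(rng.random(1024), rng.random(1024))
database = np.stack([fuse_features(rng.random(1024), rng.random(1024)) for _ in range(100)])
print(retrieve(query, database))         # indices of the 10 most similar database images
```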
Abstract (English): Rapid Internet deployment and technological development have led us into the era of Big Data, in which enormous volumes of digital image data are continuously produced by tablets, smartphones, digital cameras, and other portable multimedia devices. A primary challenge is finding an effective way to manage these image datasets and perform retrieval tailored to user needs. We propose a model that combines two distinct deep Convolutional Neural Network (CNN) architectures to achieve better image retrieval performance.
This thesis proposes an ensemble model based on a mixed architecture of deep CNNs. It uses two deep learning networks, AlexNet and Network In Network (NIN), to extract image features and computes weighted-average feature vectors for image retrieval. Our experimental results show that the ensemble architecture effectively improves learning, achieving higher image classification accuracy than a single CNN. The proposed Mixture of deep CNN-based Ensemble Model (MCNNE) was applied to the CIFAR-10 and CIFAR-100 datasets and achieved Mean Average Precision (MAP) of 0.867 and 0.526, respectively, on image retrieval.
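Both abstracts report retrieval quality as mean average precision (MAP). The following is a minimal sketch of how MAP is commonly computed when relevance is defined by matching class labels, as on CIFAR-10/CIFAR-100; it is a generic illustration, not code from the thesis, and the function names are assumptions.

```python
import numpy as np

def average_precision(ranked_labels, query_label):
    """AP for one query; ranked_labels are class labels of retrieved images in rank order."""
    relevant = (np.asarray(ranked_labels) == query_label).astype(float)
    if relevant.sum() == 0:
        return 0.0
    # Precision at each rank where a relevant image appears, averaged over the relevant images.
    precision_at_k = np.cumsum(relevant) / np.arange(1, len(relevant) + 1)
    return float((precision_at_k * relevant).sum() / relevant.sum())

def mean_average_precision(ranked_lists, query_labels):
    """MAP: mean of the per-query average precision values."""
    return float(np.mean([average_precision(r, q) for r, q in zip(ranked_lists, query_labels)]))

# Toy example: two queries, each with a ranked list of retrieved-image labels.
print(mean_average_precision([[1, 0, 1, 1], [2, 2, 0, 1]], [1, 2]))
```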
Keywords (Chinese) ★ 影像檢索 (image retrieval)
★ 整體學習 (ensemble learning)
★ 深度學習 (deep learning)
★ 類神經網路 (neural networks)
★ 摺積神經網路 (convolutional neural networks)
Keywords (English) ★ Content-based image retrieval
★ Ensemble learning
★ Deep learning
★ Neural networks
★ Convolutional neural networks
Table of Contents
Abstract (Chinese)
Abstract (English)
Acknowledgments
Table of Contents
List of Figures
List of Tables
Chapter 1 Introduction
1.1 Research Background
1.2 Motivation and Objectives
1.3 Thesis Organization
Chapter 2 Introduction to Ensemble Learning
2.1 Artificial Neural Networks
2.1.1 History of Neural Networks
2.1.2 Backpropagation Neural Networks
2.2 Deep Neural Networks
2.2.1 Convolutional Neural Networks - LeNet-5
2.2.2 Convolutional Neural Networks - AlexNet
2.3 Ensemble Learning Architecture
2.3.1 Overview of Ensemble Learning
2.3.2 Effects of Ensemble Learning
Chapter 3 Related Work on Image Retrieval
3.1 Overview of Image Retrieval
3.2 Low-Level Image Features
3.2.1 Color Features
3.2.2 Texture Features
3.2.3 Shape Features
3.3 Literature on Deep Learning and Ensemble Learning for Feature Extraction
3.3.1 Image Retrieval Based on Deep Convolutional Neural Networks
3.3.2 Image Classification with Multi-Column Deep Neural Networks
Chapter 4 Proposed Image Retrieval Method
4.1 Preprocessing Stage
4.1.1 Global Contrast Normalization
4.1.2 ZCA Whitening
4.1.3 Data Augmentation
4.2 Training Stage
4.2.1 Convolutional Neural Network Overview
4.2.2 Mixture of Ensemble Learning Architecture
4.3 Testing Stage
4.3.1 Cosine Distance
Chapter 5 Experimental Results and Discussion
5.1 Experimental Environment and Datasets
5.2 Evaluation Metric - MAP
5.3 Image Retrieval System Performance
5.3.1 Image Classification Evaluation
5.3.2 Image Retrieval Evaluation
Chapter 6 Conclusion and Future Work
References
References
[1] Y. Liu, H. Liu, B. Zhang, and G. Wu, “Extraction of if-then rules from trained neural network and its application to earthquake prediction,” Proceedings of the Third IEEE International Conference on Cognitive Informatics, 2004.
[2] T. Kondo, J. Ueno, and S. Takao, “Medical image diagnosis of lung cancer by revised GMDH-type neural network self-selecting optimum neuron architectures,” System Integration (SII), IEEE/SICE International Symposium, 2011.
[3] N.L.D. Khoa, K. Sakakibara, and I. Nishikawa, “Stock Price Forecasting using Back Propagation Neural Networks with Time and Profit Based Adjusted Weight Factors,” SICE-ICASE, International Joint Conference, 2006.
[4] G.E. Hinton, and R.R. Salakhutdinov, “Reducing the dimensionality of data with neural networks,” Science, vol. 313, no. 5786, pp. 504-507, 28 July 2006.
[5] http://www.ling.fju.edu.tw/hearing/brain-into.htm
[6] 蘇木春 and 張孝德, 機器學習: 類神經網路、模糊系統以及基因演算法則 (Machine Learning: Neural Networks, Fuzzy Systems, and Genetic Algorithms), revised 2nd ed., 全華圖書 (Chwa Books), 2004, ISBN 9572147374.
[7] D.E. Rumelhart, G.E. Hinton, and R.J. Williams, “Learning representations by back-propagating errors,” Nature 323 (6088): 533–536, 8 October 1986.
[8] Y. LeCun, L. Bottou, Y. Bengio, and P. Haffner, “Gradient-Based Learning Applied to Document Recognition,” Proceedings of the IEEE, vol. 86, no. 11, pp. 2278-2324, Nov. 1998.
[9] I. Mrazova, and M. Kukacka, “Hybrid convolutional neural networks,” 6th IEEE International Conference on Industrial Informatics (INDIN), 2008.
[10] D. Ciresan, U. Meier, and J. Schmidhuber, “Multi-column deep neural networks for image classification,” in Computer Vision and Pattern Recognition (CVPR), 2012 IEEE Conference on, pp. 3642-3649, 2012.
[11] A. Krizhevsky, I. Sutskever, and G. E. Hinton, “ImageNet classification with deep convolutional neural networks,” Advances in Neural Information Processing Systems, pp. 1097-1105, 2012.
[12] M. Lin, Q. Chen, and S. Yan, “Network in network,” Computing Research Repository, 2013.
[13] K. Simonyan, and A. Zisserman, “Very deep convolutional networks for large-scale image recognition,” International Conference on Learning Representations, 2015.
[14] C. Szegedy, W. Liu, Y. Jia, P. Sermanet, S. Reed, D. Anguelov, D. Erhan, V. Vanhoucke, and A. Rabinovich, “Going deeper with convolutions,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp.1-9, 2015.
[15] T. G. Dietterich, “Ensemble methods in machine learning,” in Multiple classifier systems, Springer Berlin Heidelberg, pp.1-15, 2000.
[16] Y. Freund, R. Schapire, and N. Abe, “A short introduction to boosting,” Journal-Japanese Society For Artificial Intelligence, 1999.
[17] Y. Liu, D. Zhang, G. Lu, and W. Y. Ma, “A survey of content-based image retrieval with high-level semantics,” Pattern Recognition vol. 40, no.1, pp. 262-282, 2007.
[18] X.S. Zhou, and T.S. Huang, “CBIR: from low-level features to high-level semantics,” Proceedings of the SPIE, Image and Video Communication and Processing, San Jose, CA, vol. 3974, pp. 426–431, 2000.
[19] Y. Chen, J.Z. Wang, and R. Krovetz, “An unsupervised learning approach to content-based image retrieval,” IEEE Proceedings of the International Symposium on Signal Processing and its Applications, pp. 197–200, 2003.
[20] A.W. Smeulders, M. Worring, S. Santini, A. Gupta, and R. Jain, “Content-based image retrieval at the end of the early years,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 22, no. 12, pp. 1349-1380, 2000.
[21] M.J. Swain, and D.H. Ballard, “Color Indexing,” International Journal of Computer Vision, pp. 11-32, 1991.
[22] M.A. Stricker, and M. Orengo, “Similarity of color images,” IS&T/SPIE's Symposium on Electronic Imaging: Science & Technology, International Society for Optics and Photonics, 1995.
[23] G. Pass, R. Zabih, and J. Miller, “Comparing images using color coherence vectors,” Proceedings of the fourth ACM international conference on Multimedia. ACM, 1997.
[24] M. Ferman, A.M. Tekalp, and R. Mehrotra, “Robust Color Histogram Descriptors for Video Segment Retrieval and Identification”, IEEE Transactions on Image Processing, vol. 11, no. 5, pp. 497-508, 2002.
[25] H. Tamura, S. Mori, T. Yamawaki, “Texture features corresponding to visual perception,” IEEE Transactions on Systems, Man and Cybernetics, vol. 8, no. 6, pp.460-473, 1978.
[26] T. Ojala, M. Pietikäinen, and D. Harwood, “Performance evaluation of texture measures with classification based on Kullback discrimination of distributions,” Proceedings of the 12th IAPR International Conference on Pattern Recognition, vol. 1, pp. 582 – 585, 1994.
[27] X. Wang, T.X. Han, and S. Yan, “An HOG-LBP human detector with partial occlusion handling,” 2009 IEEE 12th International Conference on Computer Vision, 2009.
[28] C.H. Kuo, Y.H. Chou, and P.C. Chang, “Using Deep Convolutional Neural Networks for Image Retrieval,” Visual Information Processing and Communication VI, February 2016.
[29] R. Xia, Y. Pan, H. Lai, C. Liu, and S. Yan, “Supervised Hashing for Image Retrieval via Image Representation Learning,” in Proceedings of the AAAI Conference on Artificial Intelligence, 2014.
[30] D. Cireşan, U. Meier, and J. Masci, “Multi-column deep neural network for traffic sign classification,” Neural Networks, 2012.
[31] A. Coates, A.Y. Ng, and H. Lee, “An analysis of single-layer networks in unsupervised feature learning,” International conference on artificial intelligence and statistics, 2011.
[32] I.J. Goodfellow, D. Warde-Farley, M. Mirza, A. Courville, and Y. Bengio, “Maxout networks,” arXiv preprint arXiv:1302.4389, 2013.
[33] Y. Jia, et al., “Caffe: Convolutional architecture for fast feature embedding,” In Proceedings of the ACM International Conference on Multimedia, 2014.
[34] A. Torralba, R. Fergus, and W. Freeman, “80 million tiny images: A large data set for nonparametric object and scene recognition,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 30, no. 11, pp. 1958-1970, 2008.
Advisor: 張寶基 (Pao-Chi Chang)    Date of Approval: 2016-07-26
