Master's and Doctoral Theses: Detailed Record 109523064




Author: 詹豐鎧 (Feng-Kai Jan)    Department: Communication Engineering
Thesis Title: Contrastive Learning Aided Image Deblurring (以對比學習輔助之影像去模糊)
Related Theses
★ Design of a Light-Adaptive Video Compression Encoder for In-Vehicle Video
★ An Improved Head Tracking System Based on Particle Filtering
★ Fast Mode Decision Algorithms for Spatial and CGS Scalable Video Encoders
★ A Robust Active Appearance Model Search Algorithm for Facial Expression Recognition
★ Multi-View Video Coding with Epipolar-Geometry-Based Inter-View Prediction and Fast Inter-Frame Prediction Direction Decision
★ A Stereo Matching Algorithm for Homogeneous Regions Based on Improved Belief Propagation
★ Baseball Trajectory Recognition Based on a Hierarchical Boosting Algorithm
★ Fast Reference Frame Direction Decision for Multi-View Video Coding
★ Fast Mode Decision for CGS Scalable Encoders Based on Online Statistics
★ An Improved Active Shape Model Matching Algorithm for Lip Shape Recognition
★ Object Tracking on Mobile Platforms Based on Motion Compensation Models
★ Occlusion Detection for Asymmetric Stereo Matching Based on Matching Cost
★ Momentum-Based Fast Mode Decision for Multi-View Video Coding
★ A Fast Local L-SVMs Ensemble Classifier for Place Image Recognition
★ Fast Depth Video Coding Mode Decision Oriented toward High-Quality Synthesized Views
★ Multi-Object Tracking with a Moving Camera Based on Motion Compensation Models
Files: Full text available in the thesis system after 2025-08-01
Abstract (Chinese): In recent years, deep learning based image processing tasks have flourished. Image quality affects the performance of computer vision applications, and an effective and fast image deblurring scheme can not only improve viewing quality but also be combined with models for other image processing tasks or deployed on edge devices. Existing image deblurring models compute the training loss almost exclusively from the deblurred and sharp images to update the network parameters. However, image deblurring is an ill-posed task, so properly exploiting the information in the blurred image to shrink the solution space of the network output is key to improving output image quality. This thesis therefore proposes a contrastive learning aided image deblurring scheme. First, the sharp image and the deblurred image serve as the positive sample and the anchor, respectively. To address the problem that previous contrastive learning aided image restoration models may stop benefiting from the contrastive term when the samples are never changed during training, the blurred image is passed through the proposed progressive negative sample generator to produce negative samples, whose sharpness is increased step by step over epoch intervals to increase their diversity. Finally, the proposed contrastive loss assists the parameter update of the deblurring network. Experimental results on the GoPro dataset show that, at the same computational complexity and number of network parameters, the proposed scheme improves the peak signal-to-noise ratio (PSNR) over MIMO-UNet by 0.12 dB.
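The record does not spell out how the progressive negative sample generator is implemented, so the following is only a minimal sketch of the idea described above: negatives start out as the blurred input and are made gradually sharper over pre-set epoch intervals. The blend with a hypothetical weakly deblurred estimate (`weak_deblur`), the milestone epochs, and the blending weights are illustrative assumptions, not the thesis' actual design.

```python
import torch

def progressive_negatives(blurred: torch.Tensor,
                          weak_deblur: torch.Tensor,
                          epoch: int,
                          milestones=(50, 100, 150),
                          alphas=(0.0, 0.25, 0.5, 0.75)) -> torch.Tensor:
    """Illustrative negative-sample generator: returns a negative that equals the
    blurred input early in training and becomes progressively sharper later.

    `weak_deblur` stands in for any partially restored version of `blurred`
    (hypothetical here); `milestones` and `alphas` are assumed schedules.
    """
    # Index of the current epoch interval (0 before the first milestone).
    interval = sum(epoch >= m for m in milestones)
    alpha = alphas[interval]
    # alpha = 0 keeps the negative identical to the blurred image;
    # a larger alpha moves it toward the sharper estimate, increasing diversity.
    return (1.0 - alpha) * blurred + alpha * weak_deblur
```

Only the schedule matters for the idea: as training enters a later epoch interval, the generated negatives become harder to distinguish from the anchor, which is how the abstract says sample diversity is maintained.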
Abstract (English): Deep learning based image processing tasks have flourished in recent years. Since the quality of an image affects the performance of computer vision applications, an effective and fast image deblurring scheme not only improves the viewing experience but can also be combined with models from other image processing tasks on edge devices. Existing image deblurring models almost exclusively use the information of the deblurred image and the sharp image to compute the loss that updates the network during training. However, image deblurring is an ill-posed task, so proper use of the blurred information to reduce the solution space of the network is key to improving the quality of the output image. Thus, this thesis proposes a contrastive learning aided image deblurring scheme. First, the sharp image and the deblurred image are taken as the positive sample and the anchor, respectively. This addresses a limitation of previous contrastive learning aided image restoration models, which may stop benefiting from contrastive training when the samples are never changed. Second, the proposed scheme generates negative samples with the proposed progressive negative sample generator, which gradually increases the sharpness of the negative samples over epochs during training to increase their diversity. Third, the proposed contrastive loss is used to assist the parameter update of the deblurring network. Experimental results on the GoPro dataset show that the proposed scheme improves the peak signal-to-noise ratio (PSNR) of MIMO-UNet by 0.12 dB at the same computational complexity and number of network parameters.
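To make the training signal concrete, here is a hedged sketch of a contrastive regularizer matching the abstract's description: the deblurred output is the anchor, the sharp ground truth the positive, and the progressively generated samples the negatives. Measuring distances in a frozen VGG-19 feature space follows earlier contrastive restoration work such as [8]; the chosen layer indices, the L1 distance, and the ratio form of the loss are assumptions rather than the thesis' exact formulation.

```python
import torch
import torch.nn.functional as F
from torchvision.models import vgg19

class ContrastiveDeblurLoss(torch.nn.Module):
    """Pulls the deblurred output toward the sharp image and pushes it away
    from the (progressively sharpened) negatives in a frozen VGG feature space."""

    def __init__(self, layers=(3, 8, 17), eps=1e-7):
        super().__init__()
        self.vgg = vgg19(weights="IMAGENET1K_V1").features.eval()
        for p in self.vgg.parameters():
            p.requires_grad_(False)
        self.layers = set(layers)   # assumed feature taps
        self.eps = eps

    def _feats(self, x):
        out = []
        for i, layer in enumerate(self.vgg):
            x = layer(x)
            if i in self.layers:
                out.append(x)
        return out

    def forward(self, deblurred, sharp, negatives):
        anchor = self._feats(deblurred)           # gradients flow back to the deblurring net
        positive = self._feats(sharp)
        neg_feats = [self._feats(n) for n in negatives]
        loss = deblurred.new_zeros(())
        for k in range(len(anchor)):
            d_pos = F.l1_loss(anchor[k], positive[k])
            d_neg = sum(F.l1_loss(anchor[k], nf[k]) for nf in neg_feats)
            # Ratio form: small when close to the sharp image and far from negatives.
            loss = loss + d_pos / (d_neg + self.eps)
        return loss
```

In training, such a term would typically be added to the usual reconstruction loss with a small weight, so the contrastive part only regularizes, rather than replaces, the content supervision; the weighting used in the thesis is not given in this record.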
Keywords (Chinese) ★ image deblurring
★ contrastive learning
★ contrastive loss function
★ progressive negative sample generator
★ inference time
Keywords (English) ★ image deblurring
★ contrastive learning
★ contrastive loss
★ progressive negative samples generator
★ inference time
Table of Contents
Abstract (Chinese) I
Abstract (English) II
Acknowledgments IV
Table of Contents V
List of Figures VII
List of Tables IX
Chapter 1 Introduction 1
1.1 Preface 1
1.2 Research Motivation 1
1.3 Research Methods 3
1.4 Thesis Organization 4
Chapter 2 Overview of Multi-Scale Network Based Image Deblurring Techniques 5
2.1 Image Deblurring Techniques without Feature Fusion 5
2.2 Image Deblurring Techniques with Feature Fusion 8
2.3 Summary 10
Chapter 3 Contrastive Learning Aided Image Restoration Schemes 11
3.1 Overview of Contrastive Learning 11
3.2 Existing Contrastive Learning Aided Image Restoration Schemes 13
3.3 Summary 15
Chapter 4 The Proposed Contrastive Learning Aided Image Deblurring Scheme 16
4.1 System Architecture 16
4.2 The Proposed Image Deblurring Scheme 17
4.2.1 Contrastive Learning Aided by the Embedding Space 17
4.2.2 The Proposed Progressive Negative Sample Generator 19
4.2.3 The Adopted Loss Functions 21
4.3 Training Stage 22
4.3.1 Experimental Settings for Training 22
4.3.2 Training Procedure 23
4.3.3 Training Dataset 23
4.4 Summary 24
Chapter 5 Experimental Results and Analysis 25
5.1 Experimental Environment 25
5.2 Deblurring Results and Analysis 26
5.2.1 Test Datasets 26
5.2.2 Objective Quality Evaluation and Network Analysis 26
5.2.3 Subjective Visual Comparisons on the Test Datasets 31
5.3 Summary 37
Chapter 6 Conclusions and Future Work 38
References 39
List of Symbols 42
References
[1] J. Pan, D. Sun, H. Pfister, and M.-H. Yang, “Blind image deblurring using dark channel prior,” in Proc. IEEE Conference on Computer Vision and Pattern Recognition, pp. 1628-1636, June 2016.
[2] J. Sun, W. Cao, Z. Xu, and J. Ponce, “Learning a convolutional neural network for non-uniform motion blur removal,” in Proc. IEEE Conference on Computer Vision and Pattern Recognition, pp. 769-777, June 2015.
[3] S. Nah, T. H. Kim, and K. M. Lee, “Deep multi-scale convolutional neural network for dynamic scene deblurring,” in Proc. IEEE Conference on Computer Vision and Pattern Recognition, pp. 3883-3891, July 2017.
[4] X. Tao, H. Gao, X. Shen, J. Wang, and J. Jia, “Scale-recurrent network for deep image deblurring,” in Proc. IEEE Conference on Computer Vision and Pattern Recognition, pp. 8174-8182, June 2018.
[5] H. Gao, X. Tao, X. Shen, and J. Jia, “Dynamic scene deblurring with parameter selective sharing and nested skip connections,” in Proc. IEEE Conference on Computer Vision and Pattern Recognition, pp. 3848-3856, June 2019.
[6] S.-J. Cho, S.-W. Ji, J.-P. Hong, S.-W. Jung, and S.-J. Ko, “Rethinking coarse-to-fine approach in single image deblurring,” in Proc. IEEE International Conference on Computer Vision, pp. 4641-4650, Oct. 2021.
[7] X. Mao, Y. Liu, W. Shen, Q. Li, and Y. Wang, “Deep residual Fourier transformation for single image deblurring,” arXiv preprint arXiv:2111.11745, Nov. 2021.
[8] H. Wu, Y. Qu, S. Lin, J. Zhou, R. Qiao, Z. Zhang, Y. Xie, and L. Ma, “Contrastive learning for compact single image dehazing,” in Proc. IEEE Conference on Computer Vision and Pattern Recognition, pp. 10551-10560, June 2021.
[9] K. Simonyan and A. Zisserman, “Very deep convolutional networks for large-scale image recognition,” arXiv preprint arXiv:1409.1556, Sep. 2014.
[10] J. Johnson, A. Alahi, and L. Fei-Fei, “Perceptual losses for real-time style transfer and super-resolution,” in Proc. European Conference on Computer Vision, pp. 694-711, 2016.
[11] Y. Wang, S. Lin, Y. Qu, H. Wu, Z. Zhang, Y. Xie, and A. Yao, “Towards compact single image super-resolution via contrastive self-distillation,” in Proc. International Joint Conference on Artificial Intelligence, pp. 1122-1128, Aug. 2021.
[12] K. He, X. Zhang, S. Ren, and J. Sun, “Deep residual learning for image recognition,” in Proc. IEEE Conference on Computer Vision and Pattern Recognition, pp. 770-778, June 2016.
[13] S. W. Zamir, A. Arora, S. Khan, M. Hayat, F. S. Khan, M.-H. Yang, and L. Shao, “Multi-stage progressive image restoration,” in Proc. IEEE Conference on Computer Vision and Pattern Recognition, pp. 14821-14831, June 2021.
[14] B. Lim, S. Son, H. Kim, S. Nah, and K. M. Lee, “Enhanced deep residual networks for single image super-resolution,” in Proc. IEEE Conference on Computer Vision and Pattern Recognition Workshops, pp. 136-144, July 2017.
[15] R. Hadsell, S. Chopra, and Y. LeCun, “Dimensionality reduction by learning an invariant mapping,” in Proc. IEEE Conference on Computer Vision and Pattern Recognition, vol. 2, pp. 1735-1742, June 2006.
[16] F. Schroff, D. Kalenichenko, and J. Philbin, “FaceNet: A unified embedding for face recognition and clustering,” in Proc. IEEE Conference on Computer Vision and Pattern Recognition, pp. 815-823, June 2015.
[17] T. Chen, S. Kornblith, M. Norouzi, and G. Hinton, “A simple framework for contrastive learning of visual representations,” in Proc. International Conference on Machine Learning, pp. 1597-1607, July 2020.
[18] X. Yang, X. Wang, N. Wang, and X. Gao, “SRDN: A unified super-resolution and motion deblurring network for space image restoration,” IEEE Transactions on Geoscience and Remote Sensing, Nov. 2021.
[19] J. Deng, W. Dong, R. Socher, L.-J. Li, K. Li, and L. Fei-Fei, “ImageNet: A large-scale hierarchical image database,” in Proc. IEEE Conference on Computer Vision and Pattern Recognition, pp. 248-255, June 2009.
[20] J. Yu, L. Yang, N. Xu, J. Yang, and T. Huang, “Slimmable neural networks,” in Proc. International Conference on Learning Representations, May 2019.
[21] O. Kupyn, V. Budzan, M. Mykhailych, D. Mishkin, and J. Matas, “DeblurGAN: Blind motion deblurring using conditional adversarial networks,” in Proc. IEEE Conference on Computer Vision and Pattern Recognition, pp. 8183-8192, June 2018.
[22] K. Zhang, W. Luo, Y. Zhong, L. Ma, B. Stenger, W. Liu, and H. Li, “Deblurring by realistic blurring,” in Proc. IEEE Conference on Computer Vision and Pattern Recognition, pp. 2734-2743, June 2020.
[23] Z. Shen, W. Wang, X. Lu, J. Shen, H. Ling, T. Xu, and L. Shao, “Human-aware motion deblurring,” in Proc. IEEE International Conference on Computer Vision, pp. 5572-5581, Oct. 2019.
[24] J. Rim, H. Lee, J. Won, and S. Cho, “Real-world blur dataset for learning and benchmarking deblurring algorithms,” in Proc. European Conference on Computer Vision, pp. 184-201, Aug. 2020.
[25] O. Kupyn, T. Martyniuk, J. Wu, and Z. Wang, “DeblurGAN-v2: Deblurring (orders-of-magnitude) faster and better,” in Proc. IEEE International Conference on Computer Vision, pp. 8878-8887, Oct. 2019.
[26] Z. Wang, A. C. Bovik, H. R. Sheikh, and E. P. Simoncelli, “Image quality assessment: From error visibility to structural similarity,” IEEE Transactions on Image Processing, vol. 13, no. 4, pp. 600-612, Apr. 2004.
Advisor: 唐之瑋 (Chih-Wei Tang)    Date of Approval: 2022-07-15
