Thesis Record 105523055: Detailed Information




Name: Po-Yi Chang (張博一)    Department: Communication Engineering
Thesis title: Depth Estimation Using Adaptive Support-Weight Approach for Light Field Images
(採用適應性支持權重方法之光場影像深度估測)
Related theses:
★ Illumination-Adaptive Video Encoder Design for In-Vehicle Video
★ An Improved Head Tracking System Based on Particle Filtering
★ Fast Mode Decision Algorithms for Spatial and CGS Scalable Video Encoders
★ A Robust Active Appearance Model Search Algorithm for Facial Expression Recognition
★ Multi-view Video Coding with Epipolar-Geometry-Based Inter-view Prediction and Fast Inter-frame Prediction Direction Decision
★ A Stereo Matching Algorithm for Homogeneous Regions Based on Improved Belief Propagation
★ Baseball Trajectory Recognition Based on Hierarchical Boosting
★ Fast Reference Frame Direction Decision for Multi-view Video Coding
★ Fast Mode Decision for CGS Scalable Encoders Based on Online Statistics
★ An Improved Active Shape Model Matching Algorithm for Lip Shape Recognition
★ Object Tracking on Mobile Platforms Based on Motion Compensation Models
★ Matching-Cost-Based Occlusion Detection for Asymmetric Stereo Matching
★ Momentum-Based Fast Mode Decision for Multi-view Video Coding
★ Fast Local L-SVMs Ensemble Classifiers for Place Image Recognition
★ Fast Depth Video Coding Mode Decision Oriented toward High-Quality Synthesized Views
★ Multi-Object Tracking with a Moving Camera Based on Motion Compensation Models
  1. The author has agreed to make the electronic full text openly available immediately.
  2. The open-access electronic full text is licensed for academic research only: personal, non-profit retrieval, reading, and printing.
  3. Please observe the Copyright Act of the Republic of China (Taiwan); do not reproduce, distribute, adapt, repost, or broadcast the work without authorization.

Abstract (Chinese) A light field camera captures multi-view images through a microlens array. Because the baseline between views is narrow, disparities must be estimated with sub-pixel (fractional) precision. The adaptive support-weight (ASW) approach is a local disparity estimation method. Although existing work has applied the ASW approach to disparity estimation for light field images, it did not consider the sub-pixel accuracy of the estimated disparities.
This thesis therefore proposes a depth estimation scheme for light field images based on an improved adaptive support-weight approach. Before disparity computation, bicubic interpolation is applied to every input view, and disparities are then computed with the ASW approach, raising the precision of the estimates. When computing the ASW matching cost, a cross-shaped window is adopted to reduce computational complexity, and the crossing position of the window is adjusted dynamically so that disparities at image borders can still be estimated; in addition, pixels in regions of strong intensity variation receive larger support weights to improve accuracy. Finally, the disparities estimated from all views at the same horizontal position of the microlens array are combined to further improve accuracy. Experimental results show that, on the training subset of the new HCI dataset, the proposed scheme reduces the disparity estimation error rate (BadPix) by 5.4% on average compared with an existing adaptive-window scheme on epipolar plane images (EPIs).
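The interpolation step described above can be sketched as follows. This is a minimal illustration, not the thesis's implementation: the upsampling factor of 4 and the use of SciPy's order-3 spline zoom (a close stand-in for bicubic convolution interpolation) are assumptions made for the example.

```python
import numpy as np
from scipy.ndimage import zoom

def upsample_view(view, factor=4):
    """Cubic-spline (order-3) upsampling of one light field view.

    Disparities computed on the upsampled images are integers on the
    upsampled grid, i.e. multiples of 1/factor pixels on the original
    grid, which yields sub-pixel precision after dividing by factor.
    """
    return zoom(view, factor, order=3)

# Example: an 8x8 grayscale view upsampled 4x to 32x32.
view = np.random.rand(8, 8)
up = upsample_view(view, 4)
print(up.shape)  # (32, 32)
```

A disparity of 3 on the upsampled grid then corresponds to 3/4 = 0.75 pixel on the original grid, which is the kind of fractional value the narrow light-field baseline requires.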
Abstract (English) Light field cameras acquire multi-view images through a microlens array. Because of the narrow baseline between views, sub-pixel accuracy of the estimated disparity is required. The adaptive support-weight (ASW) approach is a local disparity estimation method. Although several ASW-based disparity estimation schemes for light field images have been proposed, they did not consider the problem of sub-pixel accuracy.
Therefore, this thesis proposes an improved ASW-based depth estimation scheme for light field images. Before disparity estimation, bicubic interpolation is applied to the light field images to enable sub-pixel accuracy. The ASW approach then estimates disparities, where a cross-shaped window is adopted to reduce computational complexity, and the intersection of the window's vertical and horizontal arms is adjusted dynamically at image borders. We further increase the weights of pixels with strong edge responses. Finally, the disparities estimated from the views sharing the same horizontal position are combined to generate the disparity map of the central view. Experimental results show that the average error rate of the proposed method is 5.4% lower than that of the EPI-based adaptive window matching approach.
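The adaptive support-weight idea underlying the scheme can be sketched for a single pixel as follows. This is a minimal illustration of the original Yoon–Kweon ASW formulation on grayscale images, not the thesis's improved variant (the cross-shaped window, border adjustment, and edge weighting are omitted); the window radius and the two gamma parameters are arbitrary example values.

```python
import numpy as np

def asw_cost(left, right, p, d, radius=2, gamma_c=10.0, gamma_p=10.0):
    """Adaptive support-weight aggregated cost for pixel p at disparity d.

    Weights follow Yoon & Kweon: w(p, q) = exp(-(dc/gamma_c + dg/gamma_p)),
    where dc is the intensity difference and dg the spatial distance
    between p and its neighbor q.  Per-pixel costs are absolute
    differences, aggregated with the product of the left- and
    right-window weights and normalized by the total weight.
    """
    y, x = p
    num = den = 0.0
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            qy, qx = y + dy, x + dx
            if not (0 <= qy < left.shape[0] and 0 <= qx < left.shape[1]
                    and 0 <= qx - d < right.shape[1]):
                continue
            dg = np.hypot(dy, dx)
            wl = np.exp(-(abs(left[qy, qx] - left[y, x]) / gamma_c + dg / gamma_p))
            wr = np.exp(-(abs(right[qy, qx - d] - right[y, x - d]) / gamma_c + dg / gamma_p))
            num += wl * wr * abs(left[qy, qx] - right[qy, qx - d])
            den += wl * wr
    return num / den if den > 0 else np.inf

# Toy example: the right view equals the left view shifted by one pixel,
# so the aggregated cost at the true disparity d=1 is lower than at d=0.
left = np.tile(np.arange(8.0), (8, 1)) * 10
right = np.roll(left, -1, axis=1)
print(asw_cost(left, right, (4, 4), 1) < asw_cost(left, right, (4, 4), 0))  # True
```

Scanning d over a range and taking the argmin per pixel yields a disparity map; on bicubically upsampled views, that integer argmin corresponds to a fractional disparity on the original grid.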
Keywords (Chinese) ★ light field images (光場影像)
★ disparity estimation (視差估測)
★ adaptive support-weight approach (適應性支持權重方法)
★ sub-pixel accuracy (子像素精確度)
★ cross window (十字型視窗)
Keywords (English) ★ light field images
★ disparity estimation
★ adaptive support-weight
★ sub-pixel accuracy
★ cross window
Table of Contents
Abstract (Chinese)
Abstract (English)
Table of Contents
List of Figures
List of Tables
Chapter 1  Introduction
  1.1  Preface
  1.2  Motivation
  1.3  Methodology
  1.4  Thesis Organization
Chapter 2  Multi-view Stereo Vision
  2.1  Overview of Stereo Vision
  2.2  Stereo Matching for Stereo Vision
    2.2.1  Matching Cost Computation
    2.2.2  Cost Aggregation
    2.2.3  Disparity Computation and Optimization
    2.2.4  Disparity Refinement
  2.3  Summary
Chapter 3  Disparity Estimation for Light Field Images
  3.1  Introduction to Light Field Images
  3.2  Disparity Estimation for Light Field Images
    3.2.1  Disparity Estimation Based on Epipolar Plane Images (EPIs)
    3.2.2  Disparity Estimation Based on Correspondence Techniques
  3.3  Summary
Chapter 4  Depth Estimation Using Adaptive Support-Weight Approach for Light Field Images
  4.1  System Architecture
  4.2  Bicubic Interpolation
  4.3  Proposed Improved Adaptive Support-Weight Approach
  4.4  Disparity Refinement Based on Light Field Image Characteristics
  4.5  Summary
Chapter 5  Experimental Results and Discussion
  5.1  Experimental Parameters and Test Dataset Specifications
    5.1.1  Test Datasets and Compared Methods
    5.1.2  Parameter Settings
  5.2  Disparity Estimation Results
  5.3  Summary
Chapter 6  Conclusion and Future Work
References
References
[1] A. Gershun, “The Light Field,” J. Math. Phys., Vol. 18, pp. 51–151, Apr. 1939.
[2] E. H. Adelson and J. R. Bergen, “The Plenoptic Function and the Elements of Early Vision,” Computational Models of Visual Processing, MIT Press, pp. 3–20, Aug. 1991.
[3] M. Levoy and P. Hanrahan, “Light Field Rendering,” in Proc. 23rd Annu. Conf. Comput. Graph. Interactive Techn., pp. 31–42, Aug. 1996.
[4] S. J. Gortler, R. Grzeszczuk, R. Szeliski, and M. F. Cohen, “The Lumigraph,” in Proc. 23rd Annu. Conf. Comput. Graph. Interactive Techn., pp. 43–54, Aug. 1996.
[5] K.-J. Yoon and I.-S. Kweon, “Adaptive Support-Weight Approach for Correspondence Search,” IEEE Trans. Pattern Analysis and Machine Intelligence, Vol. 28, No. 4, pp. 650–656, Apr. 2006.
[6] K. Zhang, J. Lu, and G. Lafruit, “Cross-Based Local Stereo Matching Using Orthogonal Integral Images,” IEEE Trans. Circuits and Systems for Video Technology, Vol. 19, No. 7, pp. 1073–1079, July 2009.
[7] J. Kowalczuk, E. T. Psota, and L. C. Perez, “Light Field Assisted Stereo Matching Using Depth from Focus and Image-Guided Cost-Volume Filtering,” in Proc. IEEE International Conference on Image Processing, pp. 1114–1119, July 2012.
[8] K. T. Ng et al., “A Multi-Camera Approach to Image-Based Rendering and 3-D/Multiview Display of Ancient Chinese Artifacts,” IEEE Transactions on Multimedia, Vol. 14, No. 6, pp. 1631–1641, Dec. 2012.
[9] W. S. Russell, “Polynomial Interpolation Schemes for Internal Derivative Distributions on Structured Grids,” Appl. Numer. Math., Vol. 17, pp. 129–171, May 1995.
[10] K. Honauer, O. Johannsen, D. Kondermann, and B. Goldluecke, “A Dataset and Evaluation Methodology for Depth Estimation on 4D Light Fields,” Asian Conference on Computer Vision, Lecture Notes in Computer Science, Vol. 10113, pp. 19–34, Springer, Cham, Mar. 2016.
[11] P. H. Lin, J. S. Yeh, F. C. Wu, and Y. Y. Chuang, “Depth Estimation for Lytro Images by Adaptive Window Matching on EPI,” J. Imaging, Vol. 3, No. 2, Art. 17, May 2017.
[12] D. Scharstein and R. Szeliski, “A Taxonomy and Evaluation of Dense Two-Frame Stereo Correspondence Algorithms,” in Proc. IEEE Workshop on Stereo and Multi-Baseline Vision, Dec. 2001.
[13] Q. Yang, L. Wang, R. Yang, S. Wang, M. Liao, and D. Nistér, “Real-time Global Stereo Matching Using Hierarchical Belief Propagation,” British Machine Vision Conference, pp. 989–998, Jan. 2006.
[14] R. Bellman, “Dynamic Programming,” Dover Books on Computer Science, Reprint Edition, Mar. 2003.
[15] Y. Boykov, O. Veksler, and R. Zabih, “Fast Approximate Energy Minimization via Graph Cuts,” IEEE Transactions on Pattern Analysis and Machine Intelligence, Vol. 23, No. 11, pp. 1222–1239, Nov. 2001.
[16] J. Sun and N.-N. Zheng, “Stereo Matching Using Belief Propagation,” IEEE Transactions on Pattern Analysis and Machine Intelligence, Vol. 25, No. 7, July 2003.
[17] C.-K. Liang, C.-C. Cheng, and Y.-C. Lai, “Hardware-Efficient Belief Propagation,” IEEE Transactions on Circuits and Systems for Video Technology, pp. 80–87, June 2009.
[18] R. Nagpal and A. Bhowmik, “Light Field Denoising and Upsampling Using Variations of the 4D Bilateral Filter,” Vol. 38, No. 6–7, pp. 329–341, June 2012.
[19] C. Perwass and P. Wietzke, “Single Lens 3D-Camera with Extended Depth-of-Field,” SPIE Electronic Imaging, Feb. 2012.
[20] S. Wanner and B. Goldluecke, “Globally Consistent Depth Labeling of 4D Light Fields,” in Proc. IEEE Conference on Computer Vision and Pattern Recognition, pp. 41–48, June 2012.
[21] R. C. Bolles, H. H. Baker, and D. H. Marimont, “Epipolar-Plane Image Analysis: An Approach to Determining Structure from Motion,” International Journal of Computer Vision, Vol. 1, No. 1, pp. 7–55, Mar. 1987.
[22] S. Wanner and B. Goldluecke, “Reconstructing Reflective and Transparent Surfaces from Epipolar Plane Images,” Lecture Notes in Computer Science, Vol. 8142, 2013.
[23] M. A. Nuno-Maganda and M. O. Arias-Estrada, “Real-Time FPGA-Based Architecture for Bicubic Interpolation: An Application for Digital Image Scaling,” International Conference on Reconfigurable Computing and FPGAs, Sep. 2005.
[24] http://www.neoimaging.cn
[25] T.-C. Wang, A. A. Efros, and R. Ramamoorthi, “Depth Estimation with Occlusion Modeling Using Light-Field Cameras,” IEEE Transactions on Pattern Analysis and Machine Intelligence, pp. 2170–2181, Jan. 2016.
[26] M. Strecke, A. Alperovich, and B. Goldluecke, “Accurate Depth and Normal Maps from Occlusion-Aware Focal Stack Symmetry,” in Proc. IEEE Conference on Computer Vision and Pattern Recognition, pp. 2529–2537, July 2017.
[27] S. Borman and R. L. Stevenson, “Spatial Resolution Enhancement of Low-Resolution Image Sequences: A Comprehensive Review with Directions for Future Research,” Tech. Rep., Laboratory for Image and Signal Analysis, University of Notre Dame, 1998.
[28] G. H. Schut, “Review of Interpolation Methods for Digital Terrain Models,” Canadian Surveyor, Vol. 30, No. 5, pp. 389–412, Dec. 1976.
[29] D. Kidner and M. Dorey, “What's the Point? Interpolation and Extrapolation with a Regular Grid DEM,” Division of Mathematics & Computing, 2001.
[30] C. Tomasi and R. Manduchi, “Bilateral Filtering for Gray and Color Images,” in Proc. Sixth International Conference on Computer Vision, Jan. 1998.
[31] YouTube: Lytro Camera Light Field Technology, 2011 [Video file]. Retrieved from https://www.youtube.com/watch?v=xNJZHFZEkYQ.
[32] H. Lin, C. Chen, S. B. Kang, and J. Yu, “Depth Recovery from Light Field Using Focal Stack Symmetry,” in Proc. IEEE International Conference on Computer Vision, pp. 3451–3459, Dec. 2015.
[33] E. Adelson and J. Wang, “Single Lens Stereo with a Plenoptic Camera,” IEEE Transactions on Pattern Analysis and Machine Intelligence, Vol. 14, No. 2, Feb. 1992.
[34] G. Wu, B. Masia, A. Jarabo, Y. Zhang, L. Wang, Q. Dai, et al., “Light Field Image Processing: An Overview,” IEEE Journal of Selected Topics in Signal Processing, Vol. 11, No. 7, pp. 926–954, Oct. 2017.
Advisor: Chih-Wei Tang (唐之瑋)    Approval date: 2018-07-26

For questions about this thesis, please contact the Promotion Services Division of the National Central University Library, TEL: (03)422-7151 ext. 57407, or by e-mail.