    Please use this permanent URL to cite or link to this item: http://ir.lib.ncu.edu.tw/handle/987654321/95825


    Title: 針對深度偽造生成影像之對抗性擾動訊號嵌入策略 (Effective Strategies of Adversarial Signal Embedding for Resisting Deepfakes Images)
    Author: 張友安 (Chang, Yu-An)
    Contributors: Department of Computer Science and Information Engineering
    Keywords: deepfake; visual perception model; GAN; adversarial perturbation; deep learning
    Date: 2024-08-19
    Upload Time: 2024-10-09 17:18:51 (UTC+8)
    Publisher: National Central University
    Abstract: The technology for creating deepfakes with generative models is advancing rapidly and becoming increasingly accessible. Potential applications include synthesizing images of a person that match specific requirements, such as a particular expression or appearance, or converting images into different styles. However, these applications also raise serious concerns. Most generative-model outputs contain human faces, and their sources may touch on sensitive subject matter or involve the unauthorized use of individuals' images, so preventing such misuse is an important issue.

    One countermeasure against facial generative models is to introduce small, barely perceptible perturbations into images to disrupt the subsequent operation of the generative model. Existing methods do corrupt the outputs of generative models, but the embedded perturbation signal often causes noticeable distortion in the protected image itself, reducing practical usability. This study proposes combining the perceptually motivated Just Noticeable Difference (JND) with several adversarial image generation algorithms to produce perturbation-embedded images that stay closer to the original, and explores different implementations to confirm that the generative model's output is effectively disrupted. To validate the adaptability of the perturbations, we also test counter-perturbation attacks, comparing the strengths and weaknesses of the adversarial perturbation strategies. Experimental results show that, compared with existing methods that bound the maximum per-pixel change, our JND-based approach preserves image quality better while still ensuring effective disruption of the target generative model.
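    To make the idea in the abstract concrete, below is a minimal PyTorch sketch of a JND-bounded adversarial embedding: a PGD-style attack whose per-pixel perturbation budget comes from a JND map rather than a uniform L-infinity bound. This is an illustrative assumption, not the thesis's actual algorithm: the JND model here is a crude luminance/texture heuristic, and all names (luminance_jnd, jnd_pgd), the loss, and the step sizes are hypothetical.

    import torch
    import torch.nn.functional as F

    def luminance_jnd(x: torch.Tensor, base: float = 4 / 255) -> torch.Tensor:
        """Toy per-pixel JND map (hypothetical): tolerate larger changes in
        bright or highly textured regions, a crude stand-in for a full
        perceptual JND model. x is an N x C x H x W image in [0, 1]."""
        gray = x.mean(dim=1, keepdim=True)                       # N x 1 x H x W
        mean = F.avg_pool2d(gray, 5, stride=1, padding=2)        # background luminance
        var = F.avg_pool2d(gray ** 2, 5, stride=1, padding=2) - mean ** 2
        texture = var.clamp(min=0).sqrt()                        # local contrast
        return base * (1.0 + mean + 4.0 * texture)               # heuristic weighting

    def jnd_pgd(x: torch.Tensor, generator, steps: int = 40, alpha: float = 1 / 255):
        """Maximize disruption of the generator's output while keeping each
        pixel's change inside its JND bound (instead of a uniform budget)."""
        jnd = luminance_jnd(x)
        y_ref = generator(x).detach()                  # undisturbed output as reference
        x_adv = x.clone().detach()
        for _ in range(steps):
            x_adv.requires_grad_(True)
            loss = F.mse_loss(generator(x_adv), y_ref)     # push output away from reference
            grad = torch.autograd.grad(loss, x_adv)[0]
            with torch.no_grad():
                x_adv = x_adv + alpha * grad.sign()            # gradient ascent step
                delta = torch.clamp(x_adv - x, -jnd, jnd)      # per-pixel JND projection
                x_adv = (x + delta).clamp(0.0, 1.0)
        return x_adv.detach()

    The only change from a standard L-infinity PGD attack is the projection step: clamping delta to a spatially varying JND map concentrates the perturbation where the human visual system is least sensitive, which is the intuition behind the quality gains the abstract reports.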
    Appears in Collections: [Graduate Institute of Computer Science and Information Engineering] Master's and Doctoral Theses

    Files in This Item:

    File        Description    Size    Format    Views
    index.html                 0Kb     HTML      46


    All items in NCUIR are protected by the original copyright.

