NCU Institutional Repository (中大機構典藏) - theses, past exams, journal articles, and research projects: Item 987654321/86826
RC Version 7.0 © Powered By DSPACE, MIT. Enhanced by NTU Library IR team.


    Please use this identifier to cite or link to this item: http://ir.lib.ncu.edu.tw/handle/987654321/86826


Title: Does the Tokenization Influence the Faithfulness? Evaluation of Hallucinations for Chinese Abstractive Summarization (Chinese title: 評估中文摘要之事實一致性並探討斷詞對其之影響)
    Authors: 李正倫;Li, Zheng-Lun
    Contributors: Department of Computer Science and Information Engineering (資訊工程學系)
    Keywords: Abstractive Summarization; Pre-trained Model; Chinese Tokenization; Faithfulness; Hallucination
    Date: 2021-09-30
    Issue Date: 2021-12-07 13:17:07 (UTC+8)
    Publisher: National Central University (國立中央大學)
    Abstract: Hallucination is a critical and difficult problem in abstractive summarization that has received increasing attention in recent years. However, prior work has concentrated on English summarization; hallucination in other languages, and specifically in Chinese, remains largely unevaluated and unexplored. We study a step in which Chinese modeling differs most from English: tokenization. Tokenization is rarely examined as a separate factor in English because of the language's characteristics, whereas current Chinese pre-trained models mostly use either character-level tokenization or something very close to it, such as the BERT tokenizer. By training Chinese BART models with different Chinese tokenizers and fine-tuning them on the LCSTS Chinese summarization dataset, we confirm that the choice of tokenizer affects not only the traditional ROUGE score but also the faithfulness of the model. Moreover, considering the vocabulary differences between traditional and simplified Chinese, we build TWNSum, a weakly supervised Taiwanese news summarization dataset, by extracting summaries with the simple LEAD method and filtering them with a hallucination evaluation. TWNSum shows that creating an abstractive summarization dataset from a large amount of unlabeled news with a weakly supervised method is feasible.
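The abstract's central observation is that the BERT tokenizer behaves almost like pure character-level splitting on Chinese text, while word-level segmenters produce multi-character tokens. The sketch below illustrates that contrast with toy stand-ins (not the thesis code): a character splitter mimicking BERT's Chinese handling, and a greedy forward maximum-matching segmenter over a small hypothetical dictionary.

```python
# Toy illustration of character-level vs. word-level Chinese tokenization.
# Both functions and the mini-dictionary are illustrative assumptions,
# not the tokenizers used in the thesis.

def char_tokenize(text):
    """BERT-like behaviour on Chinese: roughly one token per character."""
    return [ch for ch in text if not ch.isspace()]

def max_match_tokenize(text, vocab, max_len=4):
    """Greedy forward maximum matching against a word dictionary."""
    tokens, i = [], 0
    while i < len(text):
        # Try the longest candidate first; fall back to a single character.
        for length in range(min(max_len, len(text) - i), 0, -1):
            piece = text[i:i + length]
            if length == 1 or piece in vocab:
                tokens.append(piece)
                i += length
                break
    return tokens

vocab = {"中文", "摘要", "斷詞", "模型"}  # hypothetical mini-dictionary
sentence = "中文摘要斷詞模型"
print(char_tokenize(sentence))              # eight single-character tokens
print(max_match_tokenize(sentence, vocab))  # four two-character words
```

Real word-level tokenizers use much larger dictionaries and statistical models, but the mismatch shown here is the variable the thesis manipulates when comparing ROUGE and faithfulness.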
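The TWNSum pipeline described above pairs each article with its leading sentence(s) as a pseudo-summary and then filters the pairs with a hallucination evaluation. The sketch below shows that weakly supervised recipe with an illustrative stand-in for the filter: the function names, the character-overlap metric, and the 0.9 threshold are assumptions for demonstration, not the evaluator used in the thesis.

```python
# Sketch of LEAD-based weak labeling with a faithfulness filter.
# faithfulness_score is a stub; a real filter would use an entailment-
# or QA-based hallucination evaluator as the thesis describes.
import re

def lead_summary(article, k=1):
    """Take the first k sentences, splitting on CJK/Western terminators."""
    sentences = [s for s in re.split(r"(?<=[。.!?!?])", article) if s.strip()]
    return "".join(sentences[:k])

def faithfulness_score(summary, article):
    """Stub metric: fraction of summary characters present in the article."""
    overlap = sum(1 for ch in summary if ch in article)
    return overlap / max(len(summary), 1)

articles = ["台灣今日天氣晴朗。氣象局表示未來一週天氣穩定。"]
dataset = []
for art in articles:
    summ = lead_summary(art)
    if faithfulness_score(summ, art) >= 0.9:  # illustrative threshold
        dataset.append((art, summ))
print(dataset)
```

Because LEAD copies text verbatim from the source, its pseudo-summaries start out highly faithful; the filter then discards pairs where the lead sentence is a poor summary, which is what makes the weak supervision viable at scale.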
    Appears in Collections:[Graduate Institute of Computer Science and Information Engineering] Electronic Thesis & Dissertation

    Files in This Item:

    File: index.html (0 KB, HTML)


    All items in NCUIR are protected by copyright, with all rights reserved.

