NCU Institutional Repository (中大機構典藏): Item 987654321/86747
RC Version 7.0 © Powered By DSPACE, MIT. Enhanced by NTU Library IR team.


    Please use this permanent URL to cite or link to this item: http://ir.lib.ncu.edu.tw/handle/987654321/86747


    Title: 基於預訓練模型與再評分機制之開放領域中文問答系統 (Open Domain Chinese Question Answering System based on Pre-training Model and Retrieval Reranking)
    Author: 陳大富 (Chen, Ta-Fu)
    Contributor: 資訊工程學系 (Department of Computer Science and Information Engineering)
    Keywords: Question Answering System; Open-domain; Open-domain Question Answering System; Retrieval reranking; Pre-training
    Date: 2021-08-24
    Upload date: 2021-12-07 13:10:40 (UTC+8)
    Publisher: 國立中央大學 (National Central University)
    Abstract:
    In recent years, research in natural language processing has shifted steadily toward large pre-trained language models, and open-domain question answering is no exception. These models give question answering (QA) systems strong comprehension and answer-extraction abilities, but their huge parameter counts make inference slow, and because the amount of content the model must process varies in real use, the user experience suffers. This thesis proposes a Chinese open-domain QA system with a reranking mechanism: retrieved articles are split into paragraphs and filtered at the semantic level before entering the QA model. This not only supplies semantic information that traditional retrievers lack, but also effectively reduces and controls the number of paragraphs fed to the QA model, lowering the chance that the model is misled and greatly improving the system's response time.
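The reranking step described above can be sketched as follows. This is a minimal illustration, not the thesis's implementation: the scorer here is a simple token-overlap stand-in for the semantic score a pre-trained model would produce, and all function names are hypothetical.

```python
def split_into_paragraphs(articles):
    """Flatten retrieved articles into paragraph-level candidates."""
    return [p.strip() for a in articles for p in a.split("\n\n") if p.strip()]


def score(question, paragraph):
    # Stand-in scorer: plain token overlap. The thesis instead uses a
    # semantic relevance score from a pre-trained language model.
    q = set(question.lower().split())
    p = set(paragraph.lower().split())
    return len(q & p) / (len(q) or 1)


def rerank(question, articles, top_k=2):
    """Keep only the top_k paragraphs, capping the QA model's input size."""
    paragraphs = split_into_paragraphs(articles)
    ranked = sorted(paragraphs, key=lambda p: score(question, p), reverse=True)
    return ranked[:top_k]


question = "When was Taipei 101 the tallest building"
articles = [
    "Taipei 101 is a skyscraper in Taipei.\n\n"
    "It was the world's tallest building from 2004 to 2010.",
    "Bananas are rich in potassium.",
]
best = rerank(question, articles, top_k=2)
# The paragraph about Taipei 101's height ranks first.
```

Because the number of paragraphs entering the QA model is now bounded by `top_k`, the system's latency stays controlled regardless of how much text the retriever returns.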
    In open-domain question answering, questions are not restricted to any particular domain, so in real use the system inevitably encounters many samples never seen during training; the QA model therefore needs strong generalization ability to perform well. Moreover, users' questions are often colloquial, which differs from the comparatively well-formed question formats found in training datasets. This thesis therefore proposes a set of methods for Chinese question answering, covering both the processing of training data and the training procedure. The data processing adjusts and combines existing datasets to broaden the range of question types the model can handle; the training procedure varies sample lengths during training to improve the model's adaptability to inputs of different lengths. Together, these methods improve the model's generalization ability, make it more tolerant of colloquial questions, and thereby raise the accuracy of its answers.
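The sample-length adjustment can be illustrated with a sliding-window chunker that slices long contexts into overlapping training samples of a controlled length. This is a sketch under assumptions: the window/stride scheme and the function name are illustrative, not the thesis's exact procedure.

```python
def make_windows(tokens, window_size, stride):
    """Slice a long token sequence into overlapping windows so training
    samples never exceed window_size; varying window_size across epochs
    exposes the model to inputs of different lengths."""
    if len(tokens) <= window_size:
        return [tokens]
    windows = []
    # Step by `stride`; the final window may be shorter but covers the tail.
    for start in range(0, len(tokens) - window_size + stride, stride):
        windows.append(tokens[start:start + window_size])
    return windows
```

Choosing a stride smaller than the window keeps every token inside at least one window, so an answer span near a window boundary is still seen whole in a neighboring window.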
    Appears in collections: [資訊工程研究所] Master's and Doctoral Theses

    Files in this item:

    File: index.html (0 KB, HTML, 73 views)


    All items in NCUIR are protected by original copyright.

