NCU Institutional Repository (provides theses and dissertations, past exams, journal articles, and research projects for download): Item 987654321/86514


    Please use this identifier to cite or link to this item: http://ir.lib.ncu.edu.tw/handle/987654321/86514


    Title: Few-shot Disease-Disease Association Extraction via Model-Agnostic Meta-Learning
    Authors: 廖莉庭;Liao, Li-Ting
    Contributors: In-service Master Program, Department of Computer Science and Information Engineering
    Keywords: Meta-learning; Few-shot learning; Disease-Disease Association Extraction; Weighted Loss Function; Class Imbalance
    Date: 2021-10-27
    Issue Date: 2021-12-07 12:55:19 (UTC+8)
    Publisher: National Central University
    Abstract: In recent years, meta-learning has been studied extensively in the field of natural language processing. Few-shot learning is especially helpful in specialized domains where annotated data are hard to obtain, so we conduct meta-testing experiments on annotated biomedical data. In this thesis, we use few-shot Disease-Disease Association Extraction (DDAE) data for meta-testing on a model that combines meta-learning with the pre-trained model BERT. Because the few-shot DDAE data are class-imbalanced, we adjust the loss function with class weights. We further consider datasets that contain categories of no interest, such as null or others, which account for a large share of the data: we introduce a hyperparameter that rescales the weight of such a category, yielding a new loss function named Null-Excluded Weighted Cross-Entropy (NEWCE). This addresses the problem of dominant but uninteresting categories and lets the model focus on the important ones. We show that combining the pre-trained model with meta-learning outperforms directly fine-tuning the pre-trained model, and we demonstrate how to adjust the weights under few-shot class imbalance.
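The record does not reproduce the NEWCE formula itself. The sketch below shows one plausible form of a null-excluded weighted cross-entropy consistent with the abstract's description: per-class weights handle class imbalance, and a hyperparameter (here called `alpha`, an assumed name) rescales the weight of the dominant null class. All function and parameter names are illustrative, not taken from the thesis.

```python
import math

def newce_loss(probs, label, class_weights, null_index, alpha):
    """Sketch of a Null-Excluded Weighted Cross-Entropy (NEWCE) loss
    for a single example.

    probs         -- predicted class probabilities (must sum to 1)
    label         -- gold class index
    class_weights -- per-class weights, e.g. inverse class frequency
                     (assumption; the thesis's exact weighting is not
                     given in this record)
    null_index    -- index of the uninteresting, dominant "null" class
    alpha         -- hypothetical hyperparameter in [0, 1] that scales
                     down the null class's weight; alpha = 0 excludes
                     the null class entirely
    """
    w = list(class_weights)
    w[null_index] *= alpha  # down-weight the dominant null class
    return -w[label] * math.log(probs[label])
```

With `alpha = 0` a null-labeled example contributes no loss, so gradient signal concentrates on the important categories; with `alpha = 1` this reduces to ordinary class-weighted cross-entropy.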
    Appears in Collections: [In-service Master Program, Department of Computer Science and Information Engineering] Theses and Dissertations

    Files in This Item:

    File: index.html (0 KB, HTML)


    All items in NCUIR are protected by copyright, with all rights reserved.

