    Please use this permanent URL to cite or link to this item: http://ir.lib.ncu.edu.tw/handle/987654321/95685


    Title: Enhancing Code Generation Accuracy through the Addition of LLM Judging Units in a Multi-Agent System
    Author: Yen, Wei-Hsin (顏維新)
    Contributor: Department of Computer Science and Information Engineering
    Keywords: Large Language Model (LLM); Code Generation; ChatGPT; Chain-of-Thought; Multi-Agent Collaboration; LLM Judge
    Date: 2024-07-30
    Upload time: 2024-10-09 17:09:11 (UTC+8)
    Publisher: National Central University
    摘要: 隨著大型語言模型 (LLM) 技術的進步,LLM 已成為程式開發時的重要輔助工具。然而,LLM 在程式碼生成方面的準確性和可靠性仍面臨諸多挑戰。本論文旨在深入分析現今 LLM 在程式碼生成中的正確性,探討其實際應用中的限制,並提出新的解決方案以提高生成程式碼的準確性。

    本論文提出了一種基於大型語言模型(LLM)的程式碼生成方法,名為JudgeCoder,採用了多代理人系統和鍊式思考(CoT)策略來增加程式碼生成的正確性。透過模擬小組開發程式碼的分工流程,分離了程式碼撰寫、測試資料撰寫及測試執行三件工作,減少了單一 LLM 模型因為分工不明確所可能導致的幻覺現象 (LLM Hallucination) 。並且提出了結合 CoT-SC (Chain of Thought with Self-Consistency) 想法的策略,進一步地針對因模型幻覺現象所產生的錯誤測試資料進行偵測,避免了因錯誤測試資料而導致進入錯誤修正流程的發生。在實驗中,JudgeCoder 展示了優良的性能,在HumanEval和HumanEval-ET的評估資料集上達到了最前沿的效能,說明了提案的投票機制搭配適當的提示策略和合理的錯誤判斷機制可以有效提升生成程式碼的準確性,這些結果不僅驗證了JudgeCoder的實用性,也為未來基於 LLM 的程式碼自動生成研究提供了一個應用的方向。;With the advancement of Large Language Models (LLMs), these models have become pivotal aids in software development. However, LLMs still face numerous challenges in terms of the accuracy and reliability of code generation. This paper aims to thoroughly analyze the correctness of current LLMs in code generation, explore their practical limitations, and propose solutions to enhance the accuracy of generated code.

    This paper introduces a code generation method based on LLMs, named JudgeCoder, which employs a multi-agent system and Chain of Thought (CoT) strategy to increase the correctness of code generation. By simulating the division of labor in team coding environments, the process separates code generation, test data generation, and test execution, thereby reducing the illusion phenomena often caused by unclear task division in a single LLM. Moreover, the paper presents a strategy combining Chain of Thought with Self-Consistency (CoT-SC), which further detects erroneous test data produced by model illusions, preventing the entry into incorrect correction processes. In experiments, JudgeCoder demonstrates good performance, achieving state-of-the-art results on the HumanEval and HumanEval-ET datasets. The results confirm that the proposed voting mechanism, coupled with appropriate prompting strategies and reasonable error judgment mechanisms, can effectively enhance the accuracy of generated code. These findings not only validate the practicality of JudgeCoder but also provide a directional framework for future research in LLM-based automatic code generation.
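    The abstract describes the pipeline only at a high level. The following is a minimal Python sketch of such a coder/tester/executor pipeline with a self-consistency voting judge, assuming a generic llm(prompt) -> str completion callable; the function names, prompts, vote count, and repair loop below are illustrative assumptions, not the thesis's actual JudgeCoder implementation.

        from collections import Counter

        def write_code(llm, task: str) -> str:
            """Coder agent: draft a candidate solution for the task."""
            return llm(f"Write a Python function that solves this task:\n{task}")

        def write_tests(llm, task: str) -> list[str]:
            """Tester agent: draft assert-style test cases for the task."""
            reply = llm(f"Write assert-based test cases, one per line, for this task:\n{task}")
            return [line.strip() for line in reply.splitlines() if line.strip().startswith("assert")]

        def run_tests(code: str, tests: list[str]) -> list[str]:
            """Executor: run each test against the candidate code and collect failures."""
            failures = []
            for test in tests:
                try:
                    exec(code + "\n" + test, {})  # no sandboxing in this sketch
                except Exception:
                    failures.append(test)
            return failures

        def judge_test(llm, task: str, test: str, samples: int = 5) -> bool:
            """CoT-SC-style judge: sample several independent verdicts on whether a
            failing test is itself consistent with the task, then majority-vote."""
            votes = []
            for _ in range(samples):
                reply = llm(
                    "Think step by step, then answer VALID or INVALID on the last line.\n"
                    f"Task: {task}\nIs this test case correct for the task?\n{test}"
                )
                last = reply.strip().splitlines()[-1].upper() if reply.strip() else ""
                votes.append(last == "VALID")
            return Counter(votes).most_common(1)[0][0]

        def judgecoder(llm, task: str, max_rounds: int = 3) -> str:
            """Overall loop: only failing tests that the judge accepts as valid can
            trigger a repair round, so hallucinated tests are filtered out."""
            code = write_code(llm, task)
            tests = write_tests(llm, task)
            for _ in range(max_rounds):
                failures = [t for t in run_tests(code, tests) if judge_test(llm, task, t)]
                if not failures:
                    break
                code = llm("Fix this code so the following tests pass:\n"
                           + code + "\n" + "\n".join(failures))
            return code

    In this sketch the majority vote in judge_test plays the role of the "LLM judging unit" named in the title: a failing test case is trusted, and allowed to trigger a repair round, only if most sampled verdicts agree it matches the task.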
    Appears in Collections: [Graduate Institute of Computer Science and Information Engineering] Master's and Doctoral Theses

    Files in This Item:

    File: index.html  |  Size: 0Kb  |  Format: HTML  |  Views: 55


    All items in NCUIR are protected by copyright.

