

    Please use this permanent URL to cite or link to this item: http://ir.lib.ncu.edu.tw/handle/987654321/89709


    Title: Adaptive QUIC Flow Control Mechanism with Deep Q-Learning (基於DQN強化學習之自適應QUIC流量控制機制)
    Author: Chang, YuChi (張佑祺)
    Contributor: Department of Communication Engineering
    Keywords: flow control
    Date: 2022-09-12
    Date Uploaded: 2022-10-04 11:53:41 (UTC+8)
    Publisher: National Central University
    Abstract: With the growth of global Internet traffic and the number of IoT devices, modern network applications place higher demands on latency, packet loss rate, and throughput. To meet these demands, a new transport-layer protocol, QUIC, has been proposed. QUIC combines the advantages of the Transmission Control Protocol (TCP) and the User Datagram Protocol (UDP), greatly reducing latency while maintaining a high degree of reliability. As a new protocol, however, QUIC's flow control (FC) has not yet been studied in depth, which significantly limits performance metrics such as delay and throughput.

    Recent research designs QUIC flow control mechanisms on rule-based principles, so they cannot adapt well to a wide range of network environments or adjust their behavior as conditions change. In recent years, many studies have applied machine learning (ML) to problems in network operation and management. Among ML methods, reinforcement learning (RL) can learn how to interact with an environment without prior knowledge and gradually converge on the best policy, so it can learn a correct flow control strategy and achieve good transmission performance in a dynamic network. Deep Q-Learning (DQN) is a common reinforcement learning model: it handles high-dimensional state spaces effectively and mitigates the correlation between consecutive training samples, which makes the algorithm more stable.
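
    As a rough illustration of the two DQN ingredients highlighted above (a Q-network over a high-dimensional state, and an experience-replay buffer that breaks sample correlation), here is a minimal PyTorch sketch. The state features, layer sizes, and hyperparameters are illustrative assumptions, not the thesis's actual design, and the separate target network a full DQN would use is simplified away.

        # Minimal DQN sketch (PyTorch). All dimensions and hyperparameters
        # are assumptions for illustration only.
        import random
        from collections import deque

        import torch
        import torch.nn as nn

        STATE_DIM, N_ACTIONS = 5, 5   # e.g. [RTT, loss rate, throughput,
                                      #       buffered bytes, current window]

        q_net = nn.Sequential(        # Q(s, .): one Q-value per action
            nn.Linear(STATE_DIM, 64), nn.ReLU(),
            nn.Linear(64, N_ACTIONS))
        optimizer = torch.optim.Adam(q_net.parameters(), lr=1e-3)
        replay = deque(maxlen=10_000) # replay buffer of
                                      # (state, action, reward, next_state)

        def train_step(batch_size=32, gamma=0.99):
            if len(replay) < batch_size:
                return
            # Uniform random sampling decorrelates consecutive transitions;
            # this is what stabilizes DQN training.
            batch = random.sample(list(replay), batch_size)
            s, a, r, s2 = (torch.tensor(x, dtype=torch.float32)
                           for x in zip(*batch))
            q = q_net(s).gather(1, a.long().unsqueeze(1)).squeeze(1)
            # Simplification: a full DQN would evaluate a separate,
            # periodically synced target network here.
            target = r + gamma * q_net(s2).max(dim=1).values.detach()
            loss = nn.functional.mse_loss(q, target)
            optimizer.zero_grad(); loss.backward(); optimizer.step()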

    In this paper, we propose a QUIC flow control mechanism called FC-DQN. It uses a DQN reinforcement learning model to extract end-to-end network features and select an appropriate flow control window, allowing it to learn the best flow control strategy quickly and stably. Since FC-DQN adjusts its control rules dynamically according to the environment, it adapts to dynamic and diverse network scenarios. We show that FC-DQN outperforms traditional rule-based QUIC flow control mechanisms, reducing packet transmission delay while keeping the loss rate low.
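
    To make that description concrete, the following hypothetical sketch shows how an FC-DQN-style agent could map end-to-end measurements to a flow-control window at a QUIC endpoint. The feature set, the multiplicative window actions, and the epsilon-greedy policy are assumptions for illustration; the thesis's actual state/action design and reward are not given on this page.

        # Hypothetical glue code: choose the next flow-control window from
        # end-to-end features using a trained Q-network (e.g. the q_net
        # sketched above, with the same 5-feature state).
        import random
        import torch

        ACTIONS = [0.5, 0.8, 1.0, 1.25, 2.0]   # assumed window multipliers

        def choose_window(q_net, current_window, rtt_ms, loss_rate,
                          throughput_mbps, buffered_bytes, epsilon=0.05):
            state = torch.tensor([[rtt_ms, loss_rate, throughput_mbps,
                                   buffered_bytes, float(current_window)]],
                                 dtype=torch.float32)
            if random.random() < epsilon:          # occasional exploration
                action = random.randrange(len(ACTIONS))
            else:                                  # exploit learned policy
                action = int(q_net(state).argmax(dim=1))
            return max(1, int(current_window * ACTIONS[action]))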
    Appears in Collections: [Graduate Institute of Communication Engineering] Master's and Doctoral Theses

    Files in This Item:

    File        Description  Size  Format  Views
    index.html               0Kb   HTML    92


    All items in NCUIR are protected by copyright, with all rights reserved.

