With the development of vehicle-to-everything (V2X) technology, new-generation V2X system architectures integrate vehicle-to-vehicle (V2V), vehicle-to-infrastructure (V2I), and vehicle-to-pedestrian (V2P) communication. Providing low-power, low-latency, highly reliable, and secure data exchange among roadside units (RSUs), on-board units (OBUs), and backend servers is a major challenge. Recent research on reinforcement learning (RL) has made remarkable progress in vehicular-network applications, and many researchers have begun to apply RL to resource allocation problems. Multi-agent reinforcement learning (MARL) has recently attracted even more attention because its architecture better reflects real user environments. In this paper, we therefore adopt a MARL framework to study effective resource allocation in V2X networks, aiming to maximize system throughput and spectrum efficiency. We compare several well-known RL and MARL algorithms and build the simulation environment from realistic road traffic data. In addition, because conventional MARL focuses on decentralized architectures, each agent can optimize its cooperative policy using only its own local information, which often leads to suboptimal solutions. We therefore propose a novel multi-agent bandwidth allocation method with limited information sharing for vehicular networks, which further improves the overall system throughput.

Multi-agent reinforcement learning (MARL) in vehicular communication is a promising topic and has attracted many researchers due to its ability to solve highly complex optimization problems. In this paper, to enhance system throughput and spectrum efficiency, the vehicular agents select different transmission modes, power levels, and sub-channels to maximize the overall system throughput within clusters. Since each agent takes actions given only a partial observation of the global state in conventional MARL structures, the efficiency of cooperative actions is degraded. In this work, we propose a novel MARL resource allocation algorithm for vehicular networks with information sharing. We extend the advantage actor-critic (A2C) algorithm to a multi-agent A2C and use long short-term memory (LSTM) to estimate the global state from partial information. Moreover, a comprehensive comparison with landmark schemes is conducted on a realistic setup generated by Simulation of Urban MObility (SUMO). The results show that the agents achieve favorable performance with the proposed scheme without full observability of the environment.
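As a minimal illustration of the action space described above, the joint choice of transmission mode, power level, and sub-channel can be flattened into a single discrete index, as is common when an A2C-style policy uses one categorical output head. The mode names, power levels, and sub-channel count below are illustrative assumptions, not values from this work:

```python
# Hypothetical sketch: flatten a vehicular agent's joint action
# (transmission mode, power level, sub-channel) into one discrete index.
MODES = ["V2V", "V2I"]          # assumed transmission modes
POWER_LEVELS = [5, 10, 23]      # assumed discrete power levels (dBm)
SUB_CHANNELS = list(range(4))   # assumed number of sub-channels

def encode_action(mode_idx: int, power_idx: int, channel_idx: int) -> int:
    """Map the joint (mode, power, sub-channel) choice to a flat index."""
    return (mode_idx * len(POWER_LEVELS) + power_idx) * len(SUB_CHANNELS) + channel_idx

def decode_action(action: int) -> tuple:
    """Recover the joint choice from a flat action index."""
    channel_idx = action % len(SUB_CHANNELS)
    rest = action // len(SUB_CHANNELS)
    power_idx = rest % len(POWER_LEVELS)
    mode_idx = rest // len(POWER_LEVELS)
    return mode_idx, power_idx, channel_idx

# The policy's categorical output would then have this many actions:
n_actions = len(MODES) * len(POWER_LEVELS) * len(SUB_CHANNELS)
```

With this encoding, the policy network only needs a single softmax over `n_actions` outputs, and the environment decodes the sampled index back into the three physical choices.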