
Intelligent cooperative exploration path planning for UAV swarm in an unknown environment

  • Abstract: With the increasing complexity of UAV missions and the growing diversity of operating environments, multi-UAV swarm systems have attracted wide attention at home and abroad, and UAV path planning has become a research hotspot. Because traditional path-planning algorithms generally require prior map information, which is hard to obtain in unknown-environment scenarios such as search and rescue, this paper proposes a reinforcement learning-based cooperative exploration path-planning method for UAV swarms in unknown environments. First, considering the characteristics of the swarm's cooperative exploration task and constraints such as dynamics and collision/obstacle avoidance, a game model and evaluation criteria for cooperative swarm exploration are established on the basis of a Markov decision process. Second, a reinforcement learning-based cooperative exploration method for UAV swarms is proposed: a dual-network actor–critic architecture is built, and randomly generated maps are used to strengthen the method's generalization to unknown environments. During exploration, each UAV continuously collects map information and adjusts its own policy according to the environment and the information shared among individuals, achieving cooperative swarm exploration in unknown environments through iterative training. Finally, a virtual simulation platform for cooperative swarm exploration is built with Unity, and comparative experiments against a non-cooperative single-agent algorithm verify that the proposed algorithm is superior in task success rate, task completion efficiency, and episode reward.
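The randomized-map training mentioned above can be sketched as follows. This is a minimal illustration, not the paper's actual map generator: the grid encoding (0 = free, 1 = obstacle), the per-cell obstacle probability, and the fixed start/target cells are all assumptions made for the sketch.

```python
import random


def random_map(rows, cols, obstacle_prob=0.2, seed=None):
    """Generate a random grid map (0 = free, 1 = obstacle) for one training episode.

    Resampling the obstacle layout every episode (a hypothetical generator,
    not the paper's exact one) keeps the learned policy from overfitting a
    single map and so improves generalization to unseen environments.
    """
    rng = random.Random(seed)
    grid = [[1 if rng.random() < obstacle_prob else 0 for _ in range(cols)]
            for _ in range(rows)]
    grid[0][0] = 0                 # keep the (assumed) start cell free
    grid[rows - 1][cols - 1] = 0   # keep the (assumed) target cell free
    return grid
```

In a training loop, a fresh map would be drawn at every episode reset, e.g. `grid = random_map(20, 20)` before spawning the UAVs.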

     

    Abstract: Owing to the increasing complexity of mission execution and the wide variability of environmental conditions, a single unmanned aerial vehicle (UAV) is insufficient to meet practical mission requirements, and multi-UAV systems have vast potential for applications in areas such as search and rescue. During search and rescue missions, UAVs acquire the location of the target to be rescued and then plan a path that circumvents obstacles and leads to the target. Traditional path-planning algorithms require prior knowledge of the obstacle distribution on the map, which may be difficult to obtain in real-world missions. To address this reliance on prior map information, this paper proposes a reinforcement learning-based approach for the collaborative exploration of multiple UAVs in unknown environments. First, a Markov decision process is employed to establish a game model and task objectives for the UAV cluster, considering the characteristics of collaborative exploration tasks and the various constraints on UAV clusters. To maximize the search and rescue success rate, UAVs must satisfy dynamic and obstacle-avoidance constraints during mission execution. Second, a reinforcement learning-based method for the collaborative exploration of multiple UAVs is proposed. The multiagent soft actor–critic (MASAC) algorithm is used to iteratively train the UAVs' collaborative exploration strategies: the actor network generates UAV actions, while the critic network evaluates the quality of these strategies. To enhance the algorithm's generalization capability, training is conducted in randomly generated map environments. To keep UAVs from being trapped by concave obstacles, a breadth-first search algorithm computes rewards from the path distance between the UAVs and targets rather than the straight-line distance.
During exploration, each UAV continuously collects map information and shares it with all other UAVs. Each UAV makes individual action decisions based on the environment and the information obtained from the others, and the mission is considered successful when multiple UAVs hover above the target. Finally, a virtual simulation platform for algorithm validation is developed using the Unity game engine. The proposed algorithm is implemented in PyTorch, and bidirectional interaction between the Unity environment and the Python algorithm is achieved through the ML-Agents (Machine Learning Agents) framework. Comparative experiments conducted on the virtual simulation platform compare the proposed algorithm with a non-cooperative single-agent SAC algorithm. The proposed method exhibits advantages in task success rate, task completion efficiency, and episode reward, validating the feasibility and effectiveness of the proposed approach.
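The BFS-based reward described in the abstract can be sketched on a grid map as follows. This is a minimal sketch under stated assumptions: the grid encoding (0 = free, 1 = obstacle), 4-connectivity, and the reward scale are illustrative choices, not the paper's exact formulation.

```python
from collections import deque


def bfs_path_distance(grid, start, goal):
    """Shortest path length on a 4-connected grid (0 = free, 1 = obstacle).

    Returns the number of steps from start to goal, or None if unreachable.
    Path distance stays informative when a concave obstacle lies between the
    UAV and the target, where straight-line distance would mislead the agent.
    """
    rows, cols = len(grid), len(grid[0])
    if start == goal:
        return 0
    seen = {start}
    queue = deque([(start, 0)])
    while queue:
        (r, c), d = queue.popleft()
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if (0 <= nr < rows and 0 <= nc < cols
                    and grid[nr][nc] == 0 and (nr, nc) not in seen):
                if (nr, nc) == goal:
                    return d + 1
                seen.add((nr, nc))
                queue.append(((nr, nc), d + 1))
    return None


def shaped_reward(grid, uav_cell, target_cell, scale=0.1):
    """Dense reward: shorter feasible path -> higher reward (scale is assumed)."""
    d = bfs_path_distance(grid, uav_cell, target_cell)
    if d is None:  # unreachable: assign the worst-case penalty
        return -scale * len(grid) * len(grid[0])
    return -scale * d
```

For a UAV two cells from the target but separated by a wall, the straight-line distance is 2 while the BFS distance counts the detour around the wall, so the shaped reward correctly penalizes positions that are close only "as the crow flies".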

     

