Abstract:
A distributed event-triggered tracking control approach based on reinforcement learning is proposed for cooperative flight of manned-unmanned swarm systems. The approach is motivated by future information-based air warfare, in which high-cost manned aircraft, such as land-based/ship-based fighters and early warning aircraft, serve as navigators, while high-speed, low-cost unmanned aircraft acting as "loyal wingmen" serve as followers; the objective is real-time distributed tracking control of the swarm. A dynamic event-triggered reinforcement learning algorithm is designed that allows each unmanned combat unit to adaptively adjust its triggering threshold using only local information, thereby reducing the frequency of communication within the manned-unmanned swarm and enhancing its stealth. Finally, the feasibility of the approach is verified by numerical simulation.
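As a rough illustration of the dynamic event-triggering idea described above (not the paper's actual algorithm), the sketch below simulates a single follower tracking a navigator under a Girard-style dynamic trigger: an internal variable eta absorbs measurement error, and the follower transmits its state only when eta can no longer compensate. The single-integrator model and all gains (sigma, lam, theta, k) are illustrative assumptions.

```python
import numpy as np

def run(T=500, dt=0.01, sigma=0.1, lam=1.0, theta=1.0, k=2.0):
    """Simulate one follower tracking a static leader with a dynamic
    event-triggered transmission rule (all parameters illustrative)."""
    leader = 0.0          # navigator (leader) state, held constant here
    x = 5.0               # follower state
    x_hat = x             # last transmitted follower state (zero-order hold)
    eta = 1.0             # dynamic trigger variable
    triggers = 0
    for _ in range(T):
        e = x_hat - x                      # error since last transmission
        gap = sigma * (x - leader) ** 2 - e ** 2
        # dynamic trigger rule: transmit when eta + theta * gap <= 0
        if eta + theta * gap <= 0:
            x_hat = x                      # broadcast current state
            triggers += 1
            gap = sigma * (x - leader) ** 2
        u = -k * (x_hat - leader)          # control uses transmitted state only
        x += dt * u                        # single-integrator follower dynamics
        eta = max(eta + dt * (-lam * eta + gap), 0.0)  # keep eta nonnegative
    return x, triggers

x_final, n_triggers = run()
```

Because transmissions occur only at trigger instants, `n_triggers` is far below the number of simulation steps while the tracking error still converges, which is the communication-saving effect the abstract claims.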