Abstract:
Fires often occur in indoor environments with complex structures and strong sealing, and can readily damage the building structure. During fire rescue, the key challenges are accurate positioning and status monitoring of firefighters, target recognition and map construction of the search-and-rescue environment, adaptive navigation of firefighters in dynamic scenes, and efficient information interaction and collaborative control. To address these problems, multi-source information from inertial, visual, GPS, UWB, and laser sensors is integrated, and four algorithms are studied: a factor-graph-based multi-source information fusion positioning algorithm, an adaptive visual navigation method for dynamic scenes, a semantic map construction algorithm combining laser and vision, and a cloud-platform-based big data analysis algorithm. On this basis, an autonomous navigation and search-and-rescue system for firefighters, oriented to fire rescue, is designed. The system provides accurate positioning, motion analysis, and status monitoring for rescue personnel; reconstructs a 3D scene of the search-and-rescue environment to support adaptive visual navigation; and enables information interaction and collaborative control between rescue personnel and commanders during the search-and-rescue process, so as to achieve multi-party cooperative operations, effective deployment, and scientific rescue, thereby improving search-and-rescue efficiency and maximizing the safety of rescue personnel. Experimental results show that, by combining local position and attitude optimization from inertial and visual measurements, the system restrains the divergence of navigation errors, overcomes the adverse effects of dynamic scenes, maintains high positioning accuracy in complex fire scenes, adapts to the scene, and improves the overall performance of the navigation system.
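To give a flavor of the factor-graph-based fusion the abstract refers to, the sketch below shows the single-variable analogue: each sensor (inertial, UWB, etc.) contributes a measurement factor weighted by its inverse covariance, and the fused estimate is the information-weighted combination. This is a minimal illustration written for this summary, not the paper's actual implementation; the function name and the example covariances are assumptions.

```python
import numpy as np

def fuse_positions(estimates):
    """Fuse independent 3-D position estimates (mean, covariance) by
    summing their information (inverse-covariance) contributions --
    the single-variable analogue of a factor-graph measurement update."""
    info = np.zeros((3, 3))      # accumulated information matrix
    info_vec = np.zeros(3)       # accumulated information vector
    for mean, cov in estimates:
        w = np.linalg.inv(cov)   # information matrix of this factor
        info += w
        info_vec += w @ mean
    cov_fused = np.linalg.inv(info)
    return cov_fused @ info_vec, cov_fused

# Hypothetical example: a drifting inertial estimate (large covariance)
# fused with a tighter UWB fix pulls the result toward the UWB fix.
imu = (np.array([0.0, 0.0, 0.0]), np.eye(3) * 4.0)
uwb = (np.array([1.0, 0.0, 0.0]), np.eye(3) * 1.0)
mean, cov = fuse_positions([imu, uwb])
```

In a full factor graph, the same weighting idea extends to many pose variables linked by odometry and loop-closure factors, solved jointly by nonlinear least squares.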