Application of Neural Networks for Autonomous UAV Navigation in Urban Environments

Authors: Karabaev S., Omaralieva G., Abdumitalip uulu K.

Journal: Бюллетень науки и практики

Section: Technical Sciences

Issue: No. 6, Vol. 11, 2025.

Free access

To ensure fully autonomous navigation of unmanned aerial vehicles in complex urban environments, this paper proposes a methodology that combines the perceptual capabilities of convolutional neural networks with the decision-making ability of reinforcement learning algorithms. The article outlines the system architecture, the training procedures, the simulation environment used, and the evaluation metrics applied during the development and validation of the proposed approach.

The proposed system is divided into two main modules: a perception module and a decision-making module. The perception module is based on a deep CNN that processes raw visual data from the front-facing cameras mounted on the UAV. The decision-making module uses a Deep Q-Network (DQN) algorithm, which receives the obstacle map generated by the perception module; its output consists of control signals (changes in pitch, roll, and yaw) needed to correct the trajectory in the current situation. The CNN used in the perception module consists of five convolutional layers with ReLU activation and max-pooling operations, followed by two fully connected layers. The final output of the model is a binary obstacle map indicating the presence and relative position of both static and dynamic objects. The model is trained on a labeled dataset of aerial photographs of urban environments with varying obstacle configurations and lighting conditions.

The DQN module interacts with the simulated environment and learns a navigation strategy aimed at maximizing the accumulated reward. The reward function encourages forward movement along the route while penalizing proximity to obstacles and sharp trajectory changes. Simulation was conducted on the Microsoft AirSim platform, configured to replicate urban conditions, including tall buildings, narrow alleys, and moving objects. The virtual scenarios included both static and dynamic obstacles, varying lighting conditions, and GPS signal degradation. The DQN module was trained over many episodes, during which the UAV explored various routes and received feedback on the safety and efficiency of its actions. Training significantly improved the system's ability to adapt and make real-time decisions, as confirmed by the results of testing in the simulated urban environment.
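The abstract describes the perception network only at the level of its layer structure, so the following is a minimal PyTorch sketch of such a model: five convolutional blocks with ReLU activation and max pooling, followed by two fully connected layers that emit a per-cell binary obstacle map. The channel counts, the 128x128 input resolution, the 16x16 output grid, and the class and parameter names are illustrative assumptions rather than details taken from the paper.

    import torch
    import torch.nn as nn

    class PerceptionCNN(nn.Module):
        """Hypothetical perception module: 5 conv blocks + 2 FC layers -> binary obstacle map."""

        def __init__(self, grid_size: int = 16):
            super().__init__()
            chans = [3, 16, 32, 64, 128, 256]              # assumed channel progression
            blocks = []
            for c_in, c_out in zip(chans[:-1], chans[1:]):
                blocks += [
                    nn.Conv2d(c_in, c_out, kernel_size=3, padding=1),
                    nn.ReLU(inplace=True),
                    nn.MaxPool2d(2),                       # halves spatial resolution
                ]
            self.features = nn.Sequential(*blocks)         # 128x128 input -> 4x4 after 5 pools
            self.head = nn.Sequential(
                nn.Flatten(),
                nn.Linear(256 * 4 * 4, 512),
                nn.ReLU(inplace=True),
                nn.Linear(512, grid_size * grid_size),     # one logit per map cell
            )
            self.grid_size = grid_size

        def forward(self, frame: torch.Tensor) -> torch.Tensor:
            # frame: (B, 3, 128, 128) front-camera image; returns per-cell obstacle
            # probabilities reshaped to a (B, 16, 16) occupancy map.
            logits = self.head(self.features(frame))
            return torch.sigmoid(logits).view(-1, self.grid_size, self.grid_size)

Under these assumptions, the map head would be trained with a per-cell binary cross-entropy loss against the labeled obstacle masks mentioned in the abstract.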
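The reward shaping and the action space of the DQN module are likewise described only qualitatively. The sketch below assumes a small discrete set of pitch, roll, and yaw increments as actions and a per-step reward that rewards forward progress while penalizing obstacle proximity, sharp attitude changes, and collisions; all weights, thresholds, and names are hypothetical.

    import numpy as np

    # Assumed discrete action set for the DQN: small attitude increments in radians
    # (signs and magnitudes are illustrative, not taken from the paper).
    ACTIONS = [
        {"pitch": 0.1, "roll": 0.0, "yaw": 0.0},    # tilt forward (advance along the route)
        {"pitch": -0.1, "roll": 0.0, "yaw": 0.0},   # tilt back (slow down)
        {"pitch": 0.0, "roll": 0.1, "yaw": 0.0},    # bank right
        {"pitch": 0.0, "roll": -0.1, "yaw": 0.0},   # bank left
        {"pitch": 0.0, "roll": 0.0, "yaw": 0.2},    # yaw right
        {"pitch": 0.0, "roll": 0.0, "yaw": -0.2},   # yaw left
        {"pitch": 0.0, "roll": 0.0, "yaw": 0.0},    # hold current attitude
    ]

    def step_reward(progress_m, min_obstacle_dist_m, attitude_change_rad, collided,
                    safe_dist_m=3.0):
        """Per-step reward: forward progress minus proximity, maneuver and collision penalties."""
        reward = 1.0 * progress_m                                  # encourage movement along the route
        if min_obstacle_dist_m < safe_dist_m:                      # penalize closing in on obstacles
            reward -= 0.5 * (safe_dist_m - min_obstacle_dist_m)
        reward -= 0.1 * float(np.abs(np.asarray(attitude_change_rad)).sum())  # sharp trajectory changes
        if collided:
            reward -= 10.0                                         # strong penalty on collision
        return reward

In the training loop, the index chosen by the Q-network would select one of these increments, which is then applied through the simulator's attitude-control interface; the specific AirSim calls are not named in the abstract and are therefore omitted here.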

Keywords: unmanned aerial vehicles, artificial neural networks, urban environments, intelligent systems

Short URL: https://sciup.org/14132793

IDR: 14132793   |   DOI: 10.33619/2414-2948/115/19

Research article