Geospatial Detection and Movement Analysis System for Unmanned Aerial Vehicles Based on Computer Vision Methods

Authors: Iryna Yurchuk, Danyil-Mykola Obertan

Journal: International Journal of Information Technology and Computer Science @ijitcs

In issue: No. 4, Vol. 17, 2025.

Free access

The rapid proliferation of Unmanned Aerial Vehicles (UAVs) across military, commercial, and civilian domains creates unprecedented security challenges while simultaneously offering significant operational advantages. Current detection and tracking systems face mounting pressure to balance effectiveness with deployment complexity and cost constraints. This paper presents a geospatial detection and movement analysis system for Unmanned Aerial Vehicles that addresses critical security challenges through innovative mathematical and software solutions. The research introduces a methodology for UAV monitoring that minimizes sensor requirements, utilizing a single optical sensor equipped with distance measurement capabilities. The core of this work focuses on developing and evaluating an algorithm for three-dimensional (3D) coordinate determination and trajectory prediction without requiring direct altitude measurement. The proposed approach integrates computer vision detection results with a mathematical model that defines spatial relationships between camera parameters and detected objects. Specifically, the algorithm estimates altitude parameters and calculates probable flight trajectories by analyzing the correlation between apparent size variation and measured distance changes across continuous detections. The system implements a complete analytical pipeline, including continuous detection processing, geospatial coordinate transformation, trajectory vector calculation, and visualization on geographic interfaces. Its modular architecture supports real-time analysis of video streams, representing detected trajectories as vector projections with associated uncertainty metrics. The algorithm's capability to provide reliable trajectory predictions is demonstrated through validation in synthetically generated environments. It offers a cost-effective monitoring solution for small aerial objects across diverse environmental conditions. This research contributes to the development of minimally-instrumented UAV tracking systems applicable in both civilian and defense scenarios.


UAV Detection, Geospatial Analysis, Trajectory Prediction, Computer Vision, Single-camera Tracking, Height Estimation Algorithm, Mathematical Modeling, Automated Systems, Software Architecture

Short address: https://sciup.org/15019929

IDR: 15019929   |   DOI: 10.5815/ijitcs.2025.04.02

Text of the scientific article: Geospatial Detection and Movement Analysis System for Unmanned Aerial Vehicles Based on Computer Vision Methods

The diverse and expanding applications of Unmanned Aerial Vehicles (UAVs) across various sectors, including professional aerial surveying, commercial filmmaking, search and rescue operations, demining, and scientific research, also introduce significant security challenges such as unauthorized airspace entry and potential malicious use [1]. Consequently, robust detection, tracking, and analysis methodologies are critically important. Traditional aerial object monitoring systems, such as radar and radio frequency (RF) surveillance [2], face inherent limitations when applied to small, diverse UAVs, often struggling with low radar cross-sections, ground clutter, or ineffective tracking of autonomous platforms. While multi-sensor fusion offers improved accuracy, its complexity, cost, and deployment constraints limit widespread implementation [3]. This has directed research attention towards optical detection systems, which identify UAVs based on visual characteristics regardless of their emission profiles.

Despite the effectiveness of computer vision in detecting UAVs from video streams [4,5], a significant research gap persists in accurately translating these two-dimensional (2D) detections into actionable three-dimensional (3D) trajectory information, particularly with minimal sensor requirements. Current approaches often necessitate expensive multi-sensor arrays or direct altitude measurements, which are not always feasible. The critical challenge lies in estimating precise UAV height from 2D imagery and subsequently predicting its future flight path with incomplete spatial data.

This paper presents a geospatial detection and movement analysis system for UAVs using minimal sensor requirements. The object of research is the process of UAV detection and trajectory prediction using single-camera optical systems with distance measurement capabilities. The subject of research encompasses mathematical algorithms for three-dimensional coordinate determination and modular software architecture for automated UAV monitoring systems.

The aim of this research is to develop a cost-effective methodology for accurate three-dimensional UAV trajectory prediction using minimal sensor infrastructure, eliminating the need for expensive multi-sensor arrays or direct altitude measurement systems. The main tasks include:

  •    Developing mathematical frameworks for 3D trajectory prediction from sequential 2D detections.

  •    Creating height estimation algorithms through dynamic size-distance analysis.

  •    Implementing modular system architecture integrating detection, tracking, and visualization components.

  •    Validating the algorithm through synthetic testing environments.

The core contribution is a mathematical framework for 3D UAV trajectory prediction without direct height measurement, achieved through correlation analysis of apparent size variation and measured distance changes across sequential detections. The modular system architecture integrates computer vision detection, geospatial analysis, and visualization functionalities, supporting various detection models and comprehensive validation through computer-generated three-dimensional modeling environments with photorealistic rendering capabilities.

2. Related Works

2.1. UAV Detection Methods

Historically, radar systems have been used for air surveillance. However, they often struggle with small UAVs due to their tiny radar signature and interference from ground clutter at low altitudes. Radio frequency (RF) monitoring can identify UAVs by checking their control signals. But this method does not work for autonomous drones that fly without constant remote control. Acoustic detection uses the unique sounds of UAV engines. While it can work well in quiet areas, its performance drops significantly in noisy city environments [6].

Recently, optical detection using computer vision has become more popular [7]. It can spot UAVs based on how they look, no matter what signals they emit. Though effective, its performance can be affected by bad weather or poor lighting. Many applications now use hybrid multi-sensor systems that combine cameras, thermal sensors, and acoustic sensors for better accuracy. However, these systems are often complex, costly, and hard to set up.

2.2. 3D Localization and Height Estimation

Finding the exact three-dimensional (3D) position of a detected UAV is a difficult task. Multi-camera systems offer high accuracy by using triangulation, but they need careful setup and add to the system's complexity [8,9].

Single-camera approaches are simpler, but they face challenges, especially when estimating a UAV's height [10]. Some approaches combine a single camera with motion sensors to estimate position and direction, but these often need to know the UAV's exact size beforehand [11]. Basic single-camera methods that use perspective geometry for distance can have limited accuracy, particularly for height, beyond short distances.

Estimating height is particularly hard for systems with just one camera. Direct methods like LiDAR or radar provide accurate height, but they make the system more complex and expensive. Other methods try to guess height from different measurements. For example, size-distance relationship methods link a UAV's visible size in an image to its measured distance. However, their accuracy often depends on knowing the UAV's true dimensions [12]. Machine learning can also estimate height from visual features, showing good results but needing a lot of specific training data.

2.3. Trajectory Prediction Techniques

Methods for predicting a UAV's path range from simple straight-line projections to more complex statistical models. Kalman filtering is frequently used because of its efficiency in short-term path prediction. However, studies have shown that its performance degrades significantly during abrupt maneuvers or over extended time periods, as it relies on simplified motion assumptions [13]. To address these limitations, some advanced methods model the underlying flight dynamics of the aircraft, using aerodynamic principles to predict non-linear paths with greater physical realism [14]. A common problem with many existing prediction algorithms is that they assume a constant height or need direct height measurements. This limits their use in systems with minimal equipment.

2.4. Used Approach in Context

This review identifies persistent challenges in developing accurate and cost-effective UAV tracking systems with minimal sensor requirements. A significant gap exists in reliable 3D path reconstruction, particularly altitude determination, using only a single optical sensor. The research addresses this deficiency through the development of a mathematical framework for 3D trajectory prediction utilizing sequential detections from a single camera combined with distance measurements. The method determines altitude by analyzing the correlation between apparent size variation and measured distance changes across continuous detections, eliminating the requirement for direct height measurement or prior knowledge of UAV dimensions. This approach represents an optimal balance between performance, cost-efficiency, and deployment simplicity for various UAV monitoring applications.

2.5. Mathematical Model Justification

Existing UAV trajectory prediction methods face significant limitations for single-sensor applications. Kalman filtering requires direct altitude measurements or constant height assumptions, limiting accuracy during vertical maneuvers. Particle filters and deep learning approaches demand substantial computational resources and prior knowledge of flight dynamics. Multi-camera triangulation achieves high accuracy but requires complex infrastructure with multiple synchronized cameras.

The proposed mathematical framework addresses these limitations through size-distance correlation analysis. When a UAV maintains constant altitude, the ratio of apparent sizes in sequential detections should equal the inverse ratio of measured distances. Deviations from this relationship indicate altitude changes, enabling dynamic height estimation without prior knowledge of UAV dimensions or direct altitude measurement.

This approach offers key advantages: minimal hardware requirements (single camera plus distance measurement), low computational cost suitable for real-time processing, simple deployment without complex calibration, and geometric interpretability enabling comprehensive error analysis. The method is optimized for scenarios requiring cost-effective monitoring with acceptable accuracy, making it practical for diverse UAV surveillance applications where infrastructure constraints limit multi-sensor deployments.

3. System Overview and Components

The proposed UAV detection and trajectory prediction system operates on a modular architecture to efficiently convert video streams into actionable trajectory predictions while requiring minimal sensors.

The system comprises four linked modules (Fig. 1): (1) Data Acquisition, (2) Detection Processing, (3) Trajectory Analysis, and (4) Visualization and Alerting.

Fig.1. System architecture diagram showing the four main modules and data flow

The system is designed with modularity as a guiding principle, allowing individual components to be updated or replaced without affecting the overall architecture. This approach facilitates both experimental evaluation and practical deployment adaptations.
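As a concrete illustration of this modularity, the sketch below wires four replaceable components in the spirit of Fig. 1. It is a minimal Python sketch under our own naming assumptions: Frame, Track, Detection, and the function signatures are illustrative, not taken from the authors' implementation.

```python
# Minimal sketch of the four-module pipeline (names are illustrative
# assumptions, not the authors' implementation).
from dataclasses import dataclass
from typing import Callable, Iterable, List

Frame = object  # placeholder: one video frame plus its metadata
Track = object  # placeholder: a computed trajectory with uncertainty

@dataclass
class Detection:
    x: float           # bounding-box center, pixels
    y: float
    w: float           # bounding-box width/height, pixels
    h: float
    confidence: float  # detector confidence in [0, 1]
    distance_m: float  # measured camera-to-UAV distance
    t: float           # timestamp, seconds

def run_pipeline(acquire: Callable[[], Iterable[Frame]],
                 detect: Callable[[Frame], List[Detection]],
                 analyze: Callable[[List[Detection]], Track],
                 render: Callable[[Track], None]) -> None:
    """Chain the four modules; any one can be swapped independently."""
    history: List[Detection] = []
    for frame in acquire():
        history.extend(detect(frame))
        if len(history) >= 2:              # two sequential detections
            render(analyze(history[-2:]))  # suffice for a trajectory vector
```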

3.1. Simulation System and Estimation Indexes

3.2. Data Acquisition and Camera Management

The data acquisition module manages video streams and metadata from camera sources registered with geographical positioning (latitude, longitude, height), orientation parameters (heading, tilt), and optical characteristics (field of view, resolution). The module supports two processing pipelines: the Detection Pipeline extracts frames at configurable intervals (1-5 fps) for processing, while the Visualization Pipeline provides lower-resolution HLS feeds for operator monitoring. Distance measurement integration attaches measured values to frame metadata, with potential support for ToF cameras, laser rangefinders via a standardized API, manual entry, or algorithmic estimation through UAV bounding box tracking across sequential frames.
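For instance, the detection pipeline's frame sampling could be sketched as follows. The OpenCV-based reader and the 2 fps default are assumptions; the paper does not prescribe a specific capture library.

```python
# Sketch of detection-pipeline frame sampling at a configurable rate
# (1-5 fps per the text). The OpenCV reader is an assumption.
import time
import cv2  # pip install opencv-python

def detection_frames(source: str, fps: float = 2.0):
    """Yield (timestamp, frame) pairs at roughly `fps` frames per second."""
    cap = cv2.VideoCapture(source)
    interval, last = 1.0 / fps, 0.0
    try:
        while cap.isOpened():
            ok, frame = cap.read()
            if not ok:
                break
            now = time.monotonic()
            if now - last >= interval:
                last = now
                yield now, frame  # distance metadata would be attached here
    finally:
        cap.release()
```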

3.3. UAV Detection and Tracking

The Detection Processing module identifies UAVs using computer vision despite challenges from small size, varying appearance, and environmental factors. Modern deep learning architectures like You Only Look Once (YOLO) have proven highly effective for this initial detection step, with recent implementations such as YOLOv7 demonstrating high precision and recall specifically for drone identification from video streams [15]. Building on this, our system evaluates several detection backbones, including YOLO-based models, DETR for its robustness in cluttered scenes, and the computationally efficient Roboflow Fast. After comparative analysis, DETR was selected for this implementation due to its superior performance in complex backgrounds despite higher computational requirements. Models are trained on datasets including synthetic imagery, real-world footage, and augmented data. The modular architecture allows easy substitution of detection methods based on specific deployment requirements and available computational resources.
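Because the architecture treats the detector as a pluggable component, a thin interface is enough to swap YOLO-family, DETR, or Roboflow Fast backends. The Protocol below is a hypothetical seam of our own design, not the paper's API.

```python
# Hypothetical detector seam enabling backbone substitution.
from dataclasses import dataclass
from typing import List, Protocol

@dataclass
class BBox:
    x: float           # center, pixels
    y: float
    w: float           # size, pixels
    h: float
    confidence: float
    label: str         # e.g. "UAV" or "drone"

class UAVDetector(Protocol):
    def detect(self, frame) -> List[BBox]: ...

# Deployments register whichever backend fits their compute budget,
# e.g. DETECTORS["detr"] = DetrDetector(); DETECTORS["yolo"] = YoloDetector()
DETECTORS: dict = {}
```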

3.4. Trajectory Analysis Module

This module is the core of the system, implementing the geospatial trajectory prediction algorithm (detailed in Section 4). It transforms 2D detections into 3D trajectory predictions by preprocessing data, calculating angular coordinates, determining 3D positions using distance, analyzing size-distance relationships for height estimation, computing velocity vectors, and transforming coordinates to a global geographical system.

Fig.2. UAV trajectory prediction algorithm flowchart

3.5. Visualization and Alerting

The Visualization and Alerting module provides the user interface. Its visualization component displays trajectory information on a map, showing camera locations, current UAV positions (with confidence), historical flight paths, and predicted future trajectories. The alerting component generates notifications based on configurable rules, such as perimeter violations or anomalous flight patterns. Alerts are shown visually and can integrate with external systems. The web-based implementation allows access via standard browsers, simplifying deployment and supporting tracking, statistics visualization, and alerting.
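A perimeter-violation rule in the alerting component might look like the following sketch; the equirectangular distance approximation and the 100 m radius are our assumptions, not values from the paper.

```python
# Illustrative perimeter-violation alert rule (assumed radius and
# equirectangular distance approximation).
import math

def meters_between(lat1: float, lon1: float, lat2: float, lon2: float,
                   R_E: float = 6378137.0) -> float:
    """Approximate ground distance via an equirectangular projection."""
    k = math.pi / 180.0
    dx = (lon2 - lon1) * k * R_E * math.cos(lat1 * k)  # east offset, meters
    dy = (lat2 - lat1) * k * R_E                        # north offset, meters
    return math.hypot(dx, dy)

def perimeter_alert(uav_lat: float, uav_lon: float,
                    fence_lat: float, fence_lon: float,
                    radius_m: float = 100.0) -> bool:
    """True when the UAV enters the protected circle."""
    return meters_between(uav_lat, uav_lon, fence_lat, fence_lon) < radius_m
```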

4. Geospatial Trajectory Prediction Algorithm

4.1. Foundational Principles and Assumptions

A. Core Geometric Concept

The algorithm uses the relationship between a stationary camera and a moving UAV in 3D space to transform sequential 2D detections into 3D trajectory predictions. UAV detections provide pixel coordinates, which are converted to angles relative to the camera's orientation and combined with measured linear distance to localize the object. However, this information alone does not uniquely determine UAV altitude, as multiple points along the same line of sight can share identical pixel coordinates at varying distances and heights.

To overcome this limitation, the algorithm normalizes pixel coordinates and calculates apparent object sizes, then applies dynamic analysis by comparing expected versus actual size changes between detections to infer altitude variations. Since a UAV maintaining constant height should show size changes inversely proportional to distance, deviations indicate vertical movement. Using the two calculated 3D positions, the algorithm computes velocity vectors and projects future locations assuming linear motion, then converts coordinates to latitude/longitude for map visualization, with validation checks ensuring realistic height bounds and physically plausible trajectories.

B. Operating Assumptions

The algorithm relies on several key assumptions and known parameters for its operation (Fig. 3):

  •    The camera's geographical position ($\varphi_c$, $\lambda_c$, $h_c$), orientation ($\theta_h$, $\theta_t$), and optical parameters ($FOV_h$, $FOV_v$, $W \times H$) are known.

  •    Reliable distance measurements from camera to UAV are available for each detection.

  •    UAVs maintain relatively consistent flight characteristics (e.g., speed and direction) between detections (over short intervals), allowing linear movement approximation.

  •    The general environment type (e.g., "urban" or "open") is known to set altitude limits.

Fig.3. Conceptual diagram showing the camera-UAV trajectory geometry

4.2. Input Data and Preprocessing

The algorithm processes two main types of input: detailed detection results and a set of system parameters related to the camera and the environment.

A. Data Requirements

For each pair of sequential UAV detections (referred to as Detection 1 and Detection 2), the following data points are necessary:

  •    Detection Data: Pixel coordinates of object center (x, y), bounding box dimensions in pixels (w, h), confidence score, class (“UAV” or “drone”), unique identifier.

  •    Temporal Information: Time difference between detections: Δt (in seconds)

  •    Measured Distances: Distance from camera to UAV for each detection: d1, d2 (in meters)

Additionally, the system requires the following parameters:

Camera Parameters: latitude $\varphi_c$, longitude $\lambda_c$, height above ground level $h_c$, heading (azimuth) $\theta_h$, tilt angle $\theta_t$, horizontal field of view $FOV_h$, vertical field of view $FOV_v$, and image resolution $W \times H$ (width and height in pixels).

Environment Type: Categorized as "urban" or "open."

UAV Size Estimates: Typical actual sizes for different classes.
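Gathered together, these inputs amount to a small configuration object. The sketch below is one plausible packaging; field names, the altitude bounds, and the per-class size values are our assumptions chosen to mirror the symbols in the text.

```python
# One way to package the required system parameters; values shown are
# assumed placeholders, not figures from the paper.
from dataclasses import dataclass

@dataclass
class CameraParams:
    lat_deg: float      # phi_c
    lon_deg: float      # lambda_c
    height_m: float     # h_c, above ground level
    heading_deg: float  # theta_h, azimuth of the optical axis
    tilt_deg: float     # theta_t
    fov_h_deg: float    # FOV_h
    fov_v_deg: float    # FOV_v
    width_px: int       # W
    height_px: int      # H

ENV_ALTITUDE_LIMITS_M = {   # assumed (h_min, h_max) bounds per environment
    "urban": (0.0, 150.0),
    "open": (0.0, 500.0),
}

UAV_TYPICAL_SIZE_M = {      # assumed per-class sizes for plausibility checks
    "quadcopter": 0.35,
    "fixed-wing": 1.2,
}
```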

B. Data Normalization

Detection coordinates are normalized, and the pixel size of the bounding box is computed. The normalized horizontal and vertical pixel coordinates are:

$x_{norm} = \frac{x}{W}$ (1)

$y_{norm} = \frac{y}{H}$ (2)

The pixel size of the bounding box, representing its diagonal length, is:

$s_{px} = \sqrt{w^2 + h^2}$ (3)

4.3. Spatial Localization of the First Detection (Position 1)

A. Angular Coordinate Transformation

Normalized pixel coordinates are transformed into azimuth (α) and elevation (ε) angles.

The azimuth angle (horizontal offset from the camera heading) is:

$\alpha = \theta_h + (x_{norm} - 0.5) \cdot FOV_h$ (4)

The elevation angle (vertical offset from the camera tilt) is:

$\varepsilon = \theta_t + (0.5 - y_{norm}) \cdot FOV_v$ (5)
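For illustration, Eqs. (1)-(5) transcribe directly into code. The following is a minimal Python sketch assuming the equation reconstructions above; the function names are ours, not the authors'.

```python
# Minimal transcription of Eqs. (1)-(5); variable names mirror the text.
import math

def normalize(x: float, y: float, W: int, H: int):
    return x / W, y / H                                  # Eqs. (1), (2)

def pixel_size(w: float, h: float) -> float:
    return math.sqrt(w * w + h * h)                      # Eq. (3): bbox diagonal

def angles(x_norm: float, y_norm: float,
           heading_deg: float, tilt_deg: float,
           fov_h_deg: float, fov_v_deg: float):
    alpha = heading_deg + (x_norm - 0.5) * fov_h_deg     # Eq. (4): azimuth
    eps = tilt_deg + (0.5 - y_norm) * fov_v_deg          # Eq. (5): elevation
    return alpha, eps
```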

B. Initial Height Estimation (Position 1)

The initial height of the UAV is estimated using the derived elevation angle and the corresponding measured distance (d).

The elevation angle converted to radians:

$\varepsilon_{rad} = \varepsilon \cdot \frac{\pi}{180}$ (6)

The raw height estimate relative to the camera height ($h_c$):

$h_{raw} = h_c + d \cdot \sin(\varepsilon_{rad})$ (7)

The clamped height:

$h_{UAV} = \max(\min(h_{raw}, h_{max}), h_{min})$ (8)
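In code, the Position 1 height estimate of Eqs. (6)-(8) reduces to a few lines; this is a sketch following the reconstructions above.

```python
# Eqs. (6)-(8): slant distance and elevation angle give a raw height,
# which is clamped to environment-specific bounds.
import math

def estimate_height(eps_deg: float, d: float, h_c: float,
                    h_min: float, h_max: float) -> float:
    eps_rad = math.radians(eps_deg)          # Eq. (6)
    h_raw = h_c + d * math.sin(eps_rad)      # Eq. (7)
    return max(min(h_raw, h_max), h_min)     # Eq. (8)
```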

C. Local Cartesian Coordinate Derivation (Position 1)

With the UAV's height determined, conversion into a local Cartesian coordinate system where the camera is at the origin is available.

Height difference between the UAV and the camera:

$\Delta h = h_{UAV} - h_c$ (9)

Ground distance:

$d_g = \sqrt{d^2 - \Delta h^2}$ (10)

Azimuth angle converted to radians for Cartesian projection:

$\alpha_{rad} = \alpha \cdot \frac{\pi}{180}$ (11)

The local Cartesian coordinates (x, y, z) of the UAV for the first detection are:

$x_1 = d_g \cdot \sin(\alpha_{rad}), \quad y_1 = d_g \cdot \cos(\alpha_{rad}), \quad z_1 = h_{UAV}$ (12)
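The corresponding localization step, Eqs. (9)-(12), places the UAV in a camera-centered east-north-up frame; the square-root guard for Eq. (10) is our defensive addition.

```python
# Eqs. (9)-(12): camera-centered east (x), north (y), up (z) coordinates.
import math

def local_cartesian(h_uav: float, h_c: float, d: float, alpha_deg: float):
    dh = h_uav - h_c                             # Eq. (9)
    d_g = math.sqrt(max(d * d - dh * dh, 0.0))   # Eq. (10); guard added by us
    a_rad = math.radians(alpha_deg)              # Eq. (11)
    return (d_g * math.sin(a_rad),               # Eq. (12): x (east)
            d_g * math.cos(a_rad),               #           y (north)
            h_uav)                               #           z (up)
```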

4.4. Dynamic Height Estimation and Localization of the Second Detection (Position 2)

A. Size-Distance Relationship Analysis (Discrepancy Factor)

The algorithm utilizes the inverse relationship between an object's perceived size and distance. If a UAV flies at a constant altitude with a fixed orientation, its apparent pixel size changes inversely with distance. Any change from this pattern suggests a change in the UAV's height or orientation.

The size ratio between the second ($s_{px,2}$) and first ($s_{px,1}$) detections is:

$r_s = \frac{s_{px,2}}{s_{px,1}}$ (13)

The distance ratio between the first ($d_1$) and second ($d_2$) detections is:

$r_d = \frac{d_1}{d_2}$ (14)

These two ratios are then combined to compute the discrepancy factor (δ):

$\delta = \frac{r_s}{r_d}$ (15)

An ideal scenario, in which a UAV maintains constant height and orientation, results in a discrepancy factor δ ≈ 1. A value of δ > 1 suggests that the UAV is either descending or has changed orientation to present a larger profile to the camera. Conversely, δ < 1 indicates that the UAV is likely ascending or presenting a smaller profile.
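The discrepancy computation of Eqs. (13)-(15) is a one-liner per ratio. For example, a UAV whose apparent size doubles while its measured distance halves yields δ = 1, i.e., level flight:

```python
# Eqs. (13)-(15): delta close to 1 indicates level flight.
def discrepancy(s_px_1: float, s_px_2: float, d1: float, d2: float) -> float:
    r_s = s_px_2 / s_px_1      # Eq. (13): apparent-size ratio
    r_d = d1 / d2              # Eq. (14): inverse-distance ratio
    return r_s / r_d           # Eq. (15)

# Example: size doubles while distance halves -> delta = 2 / 2 = 1 (level).
assert discrepancy(40.0, 80.0, 100.0, 50.0) == 1.0
```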

B. Dynamic Height Refinement (Position 2)

The discrepancy factor refines height estimation for the second UAV position by combining geometric calculations with a dynamic adjustment from the observed size-distance relationship.

The elevation angle for the second detection ($\varepsilon_2$) converted to radians:

$\varepsilon_{rad,2} = \varepsilon_2 \cdot \frac{\pi}{180}$ (16)

The raw height estimate for the second position:

$h_{raw,2} = h_c + d_2 \cdot \sin(\varepsilon_{rad,2})$ (17)

The estimated height is then adjusted using the discrepancy factor:

$h_{adj} = \begin{cases} h_{raw,2} \cdot (1 + 0.5 \cdot (\delta - 1)), & \text{if } |\delta - 1| > 0.1 \\ h_{raw,2}, & \text{otherwise} \end{cases}$ (18)

The refined UAV height for the second detection:

$h_{UAV,2} = \max(\min(h_{adj}, h_{max}), h_{min})$ (19)

This height refinement enhances the accuracy of vertical position determination for UAVs during altitude changes, without needing direct height measurements.
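Combining Eqs. (16)-(19), the second-position height refinement can be sketched as follows; the 10% dead band and the 0.5 gain follow our reconstruction of Eq. (18).

```python
# Eqs. (16)-(19): geometric height for the second detection, scaled by
# the discrepancy factor outside the 10% dead band, then clamped.
import math

def refine_height(eps2_deg: float, d2: float, h_c: float, delta: float,
                  h_min: float, h_max: float) -> float:
    eps2_rad = math.radians(eps2_deg)            # Eq. (16)
    h_raw2 = h_c + d2 * math.sin(eps2_rad)       # Eq. (17)
    if abs(delta - 1.0) > 0.1:                   # Eq. (18)
        h_adj = h_raw2 * (1.0 + 0.5 * (delta - 1.0))
    else:
        h_adj = h_raw2
    return max(min(h_adj, h_max), h_min)         # Eq. (19)
```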

C. Local Cartesian Coordinate Derivation (Position 2)

Using the refined height $h_{UAV,2}$ for the second detection, the ground distance ($d_{g,2}$) and local Cartesian coordinates ($x_2, y_2, z_2$) are calculated using the same geometric principles as in Section 4.3 (parts B and C), based on $\alpha_2$ and $h_{UAV,2}$.

4.5. Trajectory Calculation and Future Prediction

A. Velocity Vector Determination

The velocity components ($v_x, v_y, v_z$) are computed from the change in the UAV's 3D position (from ($x_1, y_1, z_1$) to ($x_2, y_2, z_2$)) over the measured time difference (Δt):

$v_x = \frac{x_2 - x_1}{\Delta t}, \quad v_y = \frac{y_2 - y_1}{\Delta t}, \quad v_z = \frac{z_2 - z_1}{\Delta t}$ (20)

Overall speed:

$v = \sqrt{v_x^2 + v_y^2 + v_z^2}$ (21)

Horizontal speed:

$v_h = \sqrt{v_x^2 + v_y^2}$ (22)

B. Movement Direction and Climb Angle

The UAV's movement is defined by a horizontal compass bearing (β) and a vertical climb angle. The compass bearing (0-360°, with 0° as North) is calculated using the arctan2 function to account for all quadrants:

$\beta = \left(\arctan2(v_x, v_y) \cdot \frac{180}{\pi}\right) \bmod 360$ (23)

The vertical climb angle (γ), indicating ascent (positive) or descent (negative), is calculated from the vertical velocity ($v_z$) and the horizontal speed ($v_h$):

$\gamma = \arctan\left(\frac{v_z}{v_h}\right) \cdot \frac{180}{\pi}$ (24)

If $v_h$ is zero, γ is taken as 90° for ascent or -90° for descent.
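Eqs. (20)-(24) translate directly into a finite-difference motion estimate; the sketch below handles the vertical-flight special case explicitly and otherwise follows the reconstructions above.

```python
# Eqs. (20)-(24): finite-difference velocity, compass bearing via
# arctan2(v_x, v_y), and climb angle with the vertical-flight case.
import math

def motion(p1, p2, dt: float):
    """p1, p2 are (x, y, z) positions; dt is the time step in seconds."""
    vx, vy, vz = ((b - a) / dt for a, b in zip(p1, p2))   # Eq. (20)
    v = math.sqrt(vx**2 + vy**2 + vz**2)                  # Eq. (21)
    v_h = math.hypot(vx, vy)                              # Eq. (22)
    bearing = math.degrees(math.atan2(vx, vy)) % 360.0    # Eq. (23)
    if v_h == 0.0:                                        # vertical flight
        climb = math.copysign(90.0, vz)
    else:
        climb = math.degrees(math.atan(vz / v_h))         # Eq. (24)
    return (vx, vy, vz), v, v_h, bearing, climb
```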

C. Future Position Prediction

Future positions ($P_{fut}$) are predicted by projecting the calculated velocity vector from the current UAV position ($P_2 = (x_2, y_2, z_2)$) over a specified prediction time interval ($t_{pred}$, usually about 10 seconds).

Predicted future position:

$P_{fut} = P_2 + V \cdot t_{pred}$ (25)

D. Geographic Coordinate Transformation

To integrate and visualize results on geographical maps, the Cartesian coordinates of the current and predicted positions are converted into standard geographical coordinates (latitude, longitude, altitude), considering the Earth's curvature and using its approximate radius ($R_E$):

$R_E = 6378137 \text{ m}$ (26)

Conversion factors ($k_\varphi$ for latitude and $k_\lambda$ for longitude) convert meters to degrees. The longitude factor varies with the camera's latitude ($\varphi_c$) due to the Earth's spherical shape:

$k_\varphi = \frac{180}{\pi R_E}, \quad k_\lambda = \frac{180}{\pi R_E \cos(\varphi_c \cdot \pi / 180)}$ (27)

Current ($\varphi_2, \lambda_2$) and predicted ($\varphi_{fut}, \lambda_{fut}$) geographical coordinates:

$\varphi_2 = \varphi_c + y_2 \cdot k_\varphi, \quad \lambda_2 = \lambda_c + x_2 \cdot k_\lambda$ (28)

$\varphi_{fut} = \varphi_c + y_{fut} \cdot k_\varphi, \quad \lambda_{fut} = \lambda_c + x_{fut} \cdot k_\lambda$ (29)
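Finally, Eqs. (25)-(29) project the position forward and convert to geographic coordinates. Note that the form of $k_\lambda$ in Eq. (27) is reconstructed from the stated spherical-Earth model and should be read as an assumption.

```python
# Eqs. (25)-(29): linear extrapolation, then meters-to-degrees conversion
# about the camera position (k_lambda reconstructed; see lead-in note).
import math

R_E = 6378137.0                                    # Eq. (26), meters

def predict(p2, v, t_pred: float = 10.0):
    """Eq. (25): P_fut = P_2 + V * t_pred, componentwise."""
    return tuple(p + vi * t_pred for p, vi in zip(p2, v))

def to_geographic(x: float, y: float,
                  cam_lat_deg: float, cam_lon_deg: float):
    """Eqs. (27)-(29): local east/north offsets to latitude/longitude."""
    k_phi = 180.0 / (math.pi * R_E)                # degrees per meter north
    k_lam = k_phi / math.cos(math.radians(cam_lat_deg))
    return cam_lat_deg + y * k_phi, cam_lon_deg + x * k_lam
```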

4.6. Algorithm Adaptations and Operational Modes

To enhance flexibility and applicability, the algorithm supports several adaptations designed to address diverse operational requirements. It features configurable height estimation modes (constant or variable), environment-specific parameters for altitude ranges (urban/open), and the option to incorporate UAV class size estimates for plausibility checks. The environment-specific parameters enable the system to adjust altitude constraints based on typical flight patterns and regulatory limitations characteristic of different operational contexts. These adjustments allow the system to be optimized for various scenarios while maintaining the core mathematical principles.

4.7. Assumptions and Their Implications

The algorithm operates under several key assumptions that directly affect system accuracy and reliability. Camera parameters (position, orientation, field of view) are assumed known and stable; inaccuracies cause systematic errors in position calculations. Distance measurements are assumed accurate, with errors propagating through height estimation and trajectory prediction. UAV flight characteristics are assumed relatively consistent between detections to justify the linear motion approximation; this fails during rapid maneuvers, reducing prediction accuracy.

Environmental and detection assumptions include a known terrain type for altitude constraints, reliable UAV identification with consistent bounding box accuracy, and appropriate time intervals between detections. Intervals that are too short reduce measurement precision, while intervals that are too long violate the linear motion assumption. These assumptions define the operational boundaries within which the system performs reliably, with height estimation being most sensitive to assumption violations due to its indirect inference methodology.

5. Results

The algorithm's performance evaluation considers multiple input parameters, including camera positioning, UAV detection accuracy, environmental conditions, and measurement precision, to assess the system's trajectory prediction capabilities. Synthetic environments provide controlled conditions essential for algorithm validation, enabling precise ground-truth comparison and systematic evaluation of the mathematical framework's accuracy. The synthetic testing incorporates diverse atmospheric conditions and illumination scenarios to simulate various environmental situations, while avoiding the complex regulatory constraints on UAV operations.

This approach allows comprehensive assessment of the size-distance correlation methodology under known parameters, establishing baseline performance metrics before advancing to field trials. Considering the geometric complexity of different test scenarios and varying measurement methodologies, the analysis focuses on key performance indicators including distance estimation accuracy, heading determination precision, and detection confidence levels. The comprehensive evaluation yields quantitative metrics that demonstrate the algorithm's effectiveness across diverse operational configurations while highlighting the impact of geometric positioning constraints on overall system performance.

Accuracy Analysis Results

Experimental validation across two test configurations demonstrated varying accuracy levels. Test 1 (Table 1) achieved traveled distance accuracy of 97.77% and heading accuracy of 99.71% with detection confidence of 77.6%. Test 2 (Table 2) showed traveled distance accuracy of 82.3% and heading accuracy of 95.8% with detection confidence of 71.7%. The trajectory analysis results for both configurations are visualized in Figure 4, with Test 1 results shown on the left and Test 2 results on the right.

Heading estimation demonstrated consistently high accuracy across both configurations, while distance tracking performance varied significantly between test scenarios. Performance variations were primarily attributed to geometric positioning complexity and measurement methodology differences.

Table 1. Drone detection system performance metrics, test 1

Parameter              Actual    Detected    Error    Accuracy (%)
Distance (meters)      100       102.23      2.23     97.77
Heading (°)            270       269.48      0.52     99.71
Speed (m/s)            50.00     51.12       1.12     97.76
Detection confidence   N/A       77.6        N/A      77.6

Table 2. Drone detection system performance metrics, test 2

Parameter              Actual    Detected    Error    Accuracy (%)
Distance (meters)      100       82.33       17.67    82.3
Heading (°)            180       172.43      7.57     95.8
Speed (m/s)            50.00     41.24       8.76     82.3
Detection confidence   N/A       71.7        N/A      71.7


Fig.4. Drone trajectory analysis results showing two test cases with detected UAVs at sequential positions and corresponding 3D trajectory visualizations on satellite maps

6.    Conclusions

This research presents a mathematical framework for UAV trajectory prediction using a single optical sensor with distance measurement capabilities. The core innovation is a dynamic analysis algorithm that estimates three-dimensional coordinates by correlating apparent size variation with measured distance changes, eliminating direct altitude measurement requirements.

Experimental validation demonstrated traveled distance accuracy ranging from 82.3% to 97.77% and heading estimation accuracy ranging from 95.8% to 99.71%. The algorithm showed consistent directional tracking effectiveness while exhibiting sensitivity to geometric positioning complexity. The modular system architecture successfully integrated detection, tracking, and visualization components with real-time capabilities.

Key limitations include performance degradation under challenging geometric configurations and dependency on measurement reliability. Height estimation remains the most challenging aspect due to indirect inference methodology. Future research directions include multi-camera integration, advanced filtering techniques, and machine learning approaches for improved height estimation.

This work contributes to minimally-instrumented UAV tracking systems applicable in civilian security and defense contexts, providing a cost-effective alternative to complex multi-sensor approaches while maintaining operational effectiveness for comprehensive situational awareness requirements.
